Network Working Group                                          A. Morton
Internet-Draft                                                 J. Uttaro
Updates: ???? (if approved)                                    AT&T Labs
Intended status: Informational                         December 31, 2018
Expires: July 4, 2019

             Benchmarks and Methods for Multihomed EVPN
                 draft-morton-bmwg-multihome-evpn-01

Abstract

   Fundamental Benchmarking Methodologies for Network Interconnect
   Devices of interest to the IETF are defined in RFC 2544.  Key
   benchmarks applicable to restoration and multihomed sites are
   defined in RFC 6894.  This memo applies these methods to multihomed
   nodes implemented with Ethernet Virtual Private Networks (EVPN).

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.
Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on July 4, 2019.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Scope and Goals
   3.  Motivation
   4.  Test Setups
   5.  Procedure for Throughput Characterization
     5.1.  Address Learning Phase
     5.2.  Test for a Single Frame Size and Number of Flows
     5.3.  Test Repetition
     5.4.  Benchmark Calculations
   6.  Procedure for Mass Withdrawal Characterization
     6.1.  Address Learning Phase
     6.2.  Test for a Single Frame Size and Number of Flows
     6.3.  Test Repetition
     6.4.  Benchmark Calculations
   7.  Reporting
   8.  Security Considerations
   9.  IANA Considerations
   10. Acknowledgements
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

   The IETF's fundamental Benchmarking Methodologies are defined in
   [RFC2544], supported by the terms and definitions in [RFC1242].
   [RFC2544] obsoletes an earlier specification, [RFC1944].

   This memo recognizes the importance of Ethernet Virtual Private
   Network (EVPN) multihoming connectivity scenarios, where a Customer
   Edge (CE) device is connected to two or more Provider Edge (PE)
   devices using an instance of an Ethernet Segment.

   In an all-active (or Active-Active) scenario, CE-PE traffic is
   load-balanced across two or more PEs.
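   As a non-normative illustration of the all-active scenario, the
   placement of a flow on a particular Ethernet Segment link is
   commonly derived from a hash over the flow's 5-tuple.  The sketch
   below (Python; the function name and hash are illustrative, since
   the actual CE or LAG hash algorithm is implementation specific)
   shows why a tester must generate many distinct flows to exercise
   all links of the Ethernet Segment:

      import hashlib

      def select_pe(five_tuple, pe_list):
          """Map a flow's 5-tuple to one PE of the Ethernet Segment.

          Illustrative only: real devices use vendor-specific hash
          functions, but any deterministic per-flow hash gives the
          same effect (one path per flow, many flows per path).
          """
          key = "|".join(str(field) for field in five_tuple).encode()
          digest = int(hashlib.sha256(key).hexdigest(), 16)
          return pe_list[digest % len(pe_list)]

      # Example flow: (src IP, dst IP, protocol, src port, dst port)
      flow = ("192.0.2.1", "198.51.100.2", 6, 49152, 443)
      print(select_pe(flow, ["PE1", "PE2"]))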
   Mass withdrawal of routes may take place when an autodiscovery route
   is used on a per-Ethernet-Segment basis and there is a link failure
   on one of the Ethernet Segment links (or when configuration changes
   take place).

   Although EVPN depends on address learning in the control plane, the
   Ethernet Segment Instance is permitted to use "the method best
   suited to the CE: data-plane learning, IEEE 802.1x, the Link Layer
   Discovery Protocol (LLDP), IEEE 802.1aq, Address Resolution Protocol
   (ARP), management plane, or other protocols" [RFC7432].

   This memo seeks to benchmark these important cases (and others).

2.  Scope and Goals

   The scope of this memo is to define a method to unambiguously
   perform tests, measure the benchmark(s), and report the results for
   the capacity of EVPN multihoming connectivity scenarios and for
   other key restoration activities (such as address withdrawal),
   covering link failure in the Active-Active scenario.

   The goal is to provide more efficient test procedures where
   possible, and to expand reporting with additional interpretation of
   the results.  The tests described in this memo address some key
   multihoming scenarios implemented on a Device Under Test (DUT) or
   System Under Test (SUT).

3.  Motivation

   The multihoming scenarios described in this memo emphasize features
   with practical value to the industry that have seen deployment.
   Therefore, these scenarios deserve the further attention that
   benchmarking activities and further study provide.

4.  Test Setups

   For simple Capacity/Throughput Benchmarks, the Test Setup MUST be
   consistent with Figure 1 of [RFC2544], or Figure 2 of [RFC2544]
   when the tester's sender and receiver are different devices.

   +--------+                ,-----.              +--------+
   |        |               /       \             |        |
   |        |              /(  PE    .............|        |
   |        |             /  \  1  /              |        |
   |  Test  |  ,-----.   /   `-----'              |  Test  |
   |        | /       \ /                         |        |
   | Device |...(  CE  X                          | Device |
   |        | \   1   / \                         |        |
   |        |  `-----'   \   ,-----.              |        |
   |        |             \ /       \             |        |
   |        |              \(  PE    .............|        |
   +--------+                \  2  /              +--------+
                              `-----'

      Figure 1  SUT for Throughput and other Ethernet Segment Tests

   In this case, the System Under Test (SUT) is composed of a single CE
   device and two or more PE devices.  The tester SHALL be connected to
   all CE and PE devices, and SHALL be capable of simultaneously
   sending and receiving frames on all ports in use.  The tester SHALL
   be capable of generating multiple flows (according to a 5-tuple
   definition, or any subset of the 5-tuple).  The tester SHALL be able
   to control the IP capacity of sets of individual flows, and the
   presence of sets of flows on specific interface ports.

   Other mandatory testing aspects described in [RFC2544] MUST be
   included, unless explicitly modified in the next section.

   The ingress and egress link speeds and link-layer protocols MUST be
   specified and used to compute the maximum theoretical frame rate
   when respecting the minimum inter-frame gap.
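   For example, for Ethernet links the maximum theoretical frame rate
   follows from the link speed, the frame size, and 20 octets of fixed
   per-frame overhead (7-octet preamble, 1-octet start-of-frame
   delimiter, and the 12-octet minimum inter-frame gap).  The following
   non-normative sketch (Python) shows the calculation:

      def max_theoretical_frame_rate(link_speed_bps, frame_size_octets):
          """Maximum Ethernet frame rate, in frames per second.

          Each frame occupies frame_size_octets plus 20 octets of
          overhead on the wire: 7-octet preamble, 1-octet start-of-
          frame delimiter, and the 12-octet minimum inter-frame gap.
          """
          bits_per_frame = (frame_size_octets + 20) * 8
          return link_speed_bps / bits_per_frame

      # Example: 64-octet frames on 10 Gb/s Ethernet:
      # 10e9 / ((64 + 20) * 8) = 14,880,952 frames per second
      print(round(max_theoretical_frame_rate(10e9, 64)))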
   A second test case is one where a BGP backbone implements MPLS with
   LDP to provide connectivity between multiple PE - ESI - CE
   locations (ESI: Ethernet Segment Identifier).

   Test                                                 Test
   Device                                               Device
                        EVI-1
   +---+                ,-----.                             +---+
   |   |         ESI   /       \                            |   |
   |   |          1   /(  PE    .....          ESI          |   |
   |   |             /  \  1  /      \        EVI 2         |   |
   |   |  ,-----.   /   `-----'       \      ,-----.   +--+ |   |
   |   | /       \ /                   \    /       \  |  | |   |
   |   |...( CE   X                     X...(  PE  ....|CE| |   |
   |   | \   1   / \                   /     \  3  /   | 2| |   |
   |   |  `-----'   \   ,-----.       /      `-----'   +--+ |   |
   |   |             \ /       \     /                      |   |
   |   |              \(  PE    ..../                       |   |
   +---+                \  2  /                             +---+
                         `-----'
                          EVI-2

     Figure 2  SUT with BGP & MPLS interconnecting multiple PE-ESI-CE
                                locations

   All link speeds MUST be reported, along with the complete device
   configurations of the SUT and Test Device(s).

   Additional Test Setups and configurations will be provided in this
   section, after review.

   One capacity benchmark pertains to the number of ESIs that a
   network with multiple PE - ESI - CE locations can support.

5.  Procedure for Throughput Characterization

   Objective: To characterize the ability of a DUT/SUT to process
   frames between a CE and one or more PEs in a multihomed
   connectivity scenario.  Figure 1 gives the test setup.

   The Procedure follows.

5.1.  Address Learning Phase

   "For every address, learning frames MUST be sent to the DUT/SUT to
   allow the DUT/SUT to update its address tables properly." [RFC2889]

5.2.  Test for a Single Frame Size and Number of Flows

   Each trial in the test requires configuring a number of flows (from
   100 to 100,000) and a fixed frame size (64, 128, 256, 512, 1024,
   1280, or 1518 octets, as per [RFC2544]).

   The Procedure SHALL follow Section 5.1 of [RFC2889].

5.3.  Test Repetition

   The test MUST be repeated N times for each frame size in the subset
   list, and each Throughput value made available for further
   processing (below).

5.4.  Benchmark Calculations

   For each frame size, calculate the following summary statistics for
   the Throughput values over the N tests:

   o  Average (Benchmark)

   o  Minimum

   o  Maximum

   o  Standard Deviation

   Comparison of the per-PE results will determine how the load was
   balanced among the PEs.  A sketch of these calculations follows.
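   The following non-normative sketch (Python) illustrates these
   calculations; the input is the list of N results collected per
   Section 5.3 for one frame size and number of flows.  The same
   calculation applies to the per-test values of Section 6.4.

      import statistics

      def summarize(values):
          """Summary statistics over N repeated tests; 'values'
          holds one result (e.g., Throughput in fps) per test."""
          return {
              "Average (Benchmark)": statistics.mean(values),
              "Minimum": min(values),
              "Maximum": max(values),
              "Standard Deviation": statistics.stdev(values),
          }

      # Example: N = 5 Throughput results (fps) for 64-octet
      # frames and 100 flows (the values are illustrative).
      print(summarize([25500, 26200, 25900, 27000, 25400]))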
6.  Procedure for Mass Withdrawal Characterization

   Objective: To characterize the ability of a DUT/SUT to process
   frames between a CE and one or more PEs in a multihomed
   connectivity scenario when a mass withdrawal takes place.  Figure 2
   gives the test setup.

   The Procedure follows.

6.1.  Address Learning Phase

   "For every address, learning frames MUST be sent to the DUT/SUT to
   allow the DUT/SUT to update its address tables properly." [RFC2889]

6.2.  Test for a Single Frame Size and Number of Flows

   Each trial in the test requires configuring a number of flows (from
   100 to 100,000) and a fixed frame size (64, 128, 256, 512, 1024,
   1280, or 1518 octets, as per [RFC2544]).

   The Offered Load SHALL be transmitted at the Throughput level
   previously determined (Section 5) for the selected frame size and
   number of flows in use.

   The Procedure SHALL follow Section 5.1 of [RFC2889] (except that
   there is no need to search for the Throughput level).

   When traffic has been sent for 5 seconds, one of the CE-PE links on
   the ESI SHALL be disabled, and the time of this action SHALL be
   recorded for further calculations.  For example, if the CE1 link to
   PE1 is disabled, this should trigger a mass withdrawal of EVI-1
   addresses and the subsequent re-routing of traffic to PE2.

   Frame losses are expected to be recorded during the restoration
   time.  The time for restoration may be estimated as described in
   Section 3.5 of [RFC6412], as sketched below.
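   A minimal, non-normative sketch of a loss-derived estimate follows
   (Python).  It assumes that the Offered Load is constant and that
   frames are lost only during the restoration interval, so the loss
   count divided by the offered rate approximates the outage duration:

      def restoration_time_estimate(frames_lost, offered_load_fps):
          """Loss-derived restoration time estimate, in seconds.

          Assumes a constant Offered Load (frames per second) and
          that frames are lost only while traffic is re-routed
          after the Ethernet Segment link is disabled.
          """
          return frames_lost / offered_load_fps

      # Example: 26000 frames lost at an Offered Load of 1,000,000
      # frames per second estimates 0.026 seconds of restoration.
      print(restoration_time_estimate(26000, 1000000))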
6.3.  Test Repetition

   The test MUST be repeated N times for each frame size in the subset
   list, and each restoration time value made available for further
   processing (below).

6.4.  Benchmark Calculations

   For each frame size and number of flows, calculate the following
   summary statistics for the Loss (or the Time to return to the
   Throughput level after restoration) values over the N tests:

   o  Average (Benchmark)

   o  Minimum

   o  Maximum

   o  Standard Deviation

7.  Reporting

   The results SHOULD be reported in the format of a table with a row
   for each of the tested frame sizes and numbers of flows.  There
   SHOULD be columns for the frame size with the number of flows, and
   for the resultant average frame count (or time) for each type of
   data stream tested.

   The number of tests averaged for the Benchmark, N, MUST be
   reported.

   The Minimum, Maximum, and Standard Deviation across all complete
   tests SHOULD also be reported.

   The Corrected DUT Restoration Time SHOULD also be reported, as
   applicable.

   +----------------+-------------------+----------------+-------------+
   | Frame Size,    | Ave Benchmark,    | Min,Max,StdDev | Calculated  |
   | octets + #     | fps, frames or    |                | Time, Sec   |
   | Flows          | time              |                |             |
   +----------------+-------------------+----------------+-------------+
   | 64,100         | 26000             | 25500,27000,20 | 0.00004     |
   +----------------+-------------------+----------------+-------------+

              Throughput or Loss/Restoration Time Results

   Static and configuration parameters:

      Number of test repetitions, N

      Minimum Step Size (during searches), in frames.

8.  Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a
   laboratory environment, with dedicated address space and the other
   constraints of [RFC2544].

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.  See [RFC6815].

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

9.  IANA Considerations

   This memo makes no requests of IANA.

10.  Acknowledgements

   Thanks to Aman Shaikh for sharing his comments on the draft directly
   with the authors.

11.  References

11.1.  Normative References

   [RFC1242]  Bradner, S., "Benchmarking Terminology for Network
              Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
              July 1991, <https://www.rfc-editor.org/info/rfc1242>.

   [RFC1944]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 1944,
              DOI 10.17487/RFC1944, May 1996,
              <https://www.rfc-editor.org/info/rfc1944>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999,
              <https://www.rfc-editor.org/info/rfc2544>.

   [RFC2889]  Mandeville, R. and J. Perser, "Benchmarking Methodology
              for LAN Switching Devices", RFC 2889,
              DOI 10.17487/RFC2889, August 2000,
              <https://www.rfc-editor.org/info/rfc2889>.

   [RFC5180]  Popoviciu, C., Hamza, A., Van de Velde, G., and D.
              Dugatkin, "IPv6 Benchmarking Methodology for Network
              Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180,
              May 2008, <https://www.rfc-editor.org/info/rfc5180>.

   [RFC6201]  Asati, R., Pignataro, C., Calabria, F., and C. Olvera,
              "Device Reset Characterization", RFC 6201,
              DOI 10.17487/RFC6201, March 2011,
              <https://www.rfc-editor.org/info/rfc6201>.

   [RFC6412]  Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology
              for Benchmarking Link-State IGP Data-Plane Route
              Convergence", RFC 6412, DOI 10.17487/RFC6412, November
              2011, <https://www.rfc-editor.org/info/rfc6412>.

   [RFC6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
              "Applicability Statement for RFC 2544: Use on Production
              Networks Considered Harmful", RFC 6815,
              DOI 10.17487/RFC6815, November 2012,
              <https://www.rfc-editor.org/info/rfc6815>.

   [RFC6985]  Morton, A., "IMIX Genome: Specification of Variable
              Packet Sizes for Additional Testing", RFC 6985,
              DOI 10.17487/RFC6985, July 2013,
              <https://www.rfc-editor.org/info/rfc6985>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

11.2.  Informative References

   [OPNFV-2017]
              Cooper, T., Morton, A., and S. Rao, "Dataplane
              Performance, Capacity, and Benchmarking in OPNFV", June
              2017.

   [RFC8239]  Avramov, L. and J. Rapp, "Data Center Benchmarking
              Methodology", RFC 8239, DOI 10.17487/RFC8239, August
              2017, <https://www.rfc-editor.org/info/rfc8239>.

   [TST009]   ETSI Network Function Virtualisation ISG, "ETSI GS
              NFV-TST 009 V3.1.1 (2018-10), Network Functions
              Virtualisation (NFV) Release 3; Testing; Specification
              of Networking Benchmarks and Measurement Methods for
              NFVI", October 2018.

   [VSPERF-b2b]
              Morton, A., "Back2Back Testing Time Series (from CI)",
              June 2017.

   [VSPERF-BSLV]
              Morton, A. and S. Rao, "Evolution of Repeatability in
              Benchmarking: Fraser Plugfest (Summary for IETF BMWG)",
              July 2018.

Authors' Addresses

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acm@research.att.com

   Jim Uttaro
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Email: uttaro@att.com