Network Working Group                                          A. Morton
Internet-Draft                                                 J. Uttaro
Updates: ???? (if approved)                                    AT&T Labs
Intended status: Informational                              July 2, 2019
Expires: January 3, 2020

             Benchmarks and Methods for Multihomed EVPN
                draft-morton-bmwg-multihome-evpn-02

Abstract

   Fundamental Benchmarking Methodologies for Network Interconnect
   Devices of interest to the IETF are defined in RFC 2544.  Key
   benchmarks applicable to restoration and multi-homed sites are in
   RFC 6894.  This memo applies these methods to Multihomed nodes
   implemented on Ethernet Virtual Private Networks (EVPN).
Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 3, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Scope and Goals
   3.  Motivation
   4.  Test Setups
   5.  Procedure for Full Mesh Throughput Characterization
     5.1.  Address Learning Phase
     5.2.  Test for a Single Frame Size and Number of Unicast Flows
     5.3.  Detailed Procedure
     5.4.  Test Repetition
     5.5.  Benchmark Calculations
     5.6.  Reporting
   6.  Procedure for Mass Withdrawal Characterization
     6.1.  Address Learning Phase
     6.2.  Test for a Single Frame Size and Number of Flows
     6.3.  Test Repetition
     6.4.  Benchmark Calculations
   7.  Reporting
   8.  Security Considerations
   9.  IANA Considerations
   10. Acknowledgements
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

   The IETF's fundamental Benchmarking Methodologies are defined in
   [RFC2544], supported by the terms and definitions in [RFC1242];
   [RFC2544] obsoletes an earlier specification, [RFC1944].

   This memo recognizes the importance of Ethernet Virtual Private
   Network (EVPN) Multihoming connectivity scenarios, where a CE
   device is connected to two or more PEs using an instance of an
   Ethernet Segment.

   In an all-active (Active-Active) scenario, CE-PE traffic is
   load-balanced across two or more PEs.
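In an all-active Ethernet Segment the CE typically picks the egress PE per flow, so that each flow's frames stay in order. A minimal sketch of such per-flow balancing (illustrative only; the hash fields and algorithm are implementation-specific, e.g. LAG hashing, and are not defined by this memo):

```python
import hashlib

def select_pe(flow, active_pes):
    """Pick one PE from the Ethernet Segment's active set by hashing
    the flow's 5-tuple; the same flow always maps to the same PE."""
    digest = hashlib.sha256(repr(flow).encode("ascii")).digest()
    return active_pes[digest[0] % len(active_pes)]

# 5-tuple: (src IP, dst IP, protocol, src port, dst port)
flow = ("192.0.2.1", "198.51.100.7", 6, 49152, 80)
pe = select_pe(flow, ["PE1", "PE2"])
```

A tester generating many distinct 5-tuples would therefore expect the offered load to spread across PE1 and PE2, which is what the full-mesh procedures below measure.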
   Mass withdrawal of routes may take place when an autodiscovery route
   is used on a per-Ethernet-Segment basis and there is a link failure
   on one of the Ethernet Segment links (or when configuration changes
   take place).

   Although EVPN depends on address learning in the control plane, the
   Ethernet Segment Instance is permitted to use "the method best
   suited to the CE: data-plane learning, IEEE 802.1x, the Link Layer
   Discovery Protocol (LLDP), IEEE 802.1aq, Address Resolution Protocol
   (ARP), management plane, or other protocols" [RFC7432].

   This memo seeks to benchmark these important cases (and others).

2.  Scope and Goals

   The scope of this memo is to define a method to unambiguously
   perform tests, measure the benchmark(s), and report the results for
   Capacity of EVPN Multihoming connectivity scenarios, and other key
   restoration activities (such as address withdrawal) covering link
   failure in the Active-Active scenario.

   The goal is to provide more efficient test procedures where
   possible, and to expand reporting with additional interpretation of
   the results.  The tests described in this memo address some key
   multihoming scenarios implemented on a Device Under Test (DUT) or
   System Under Test (SUT).

3.  Motivation

   The Multihoming scenarios described in this memo emphasize features
   with practical value to the industry that have seen deployment.
   Therefore, these scenarios deserve the further attention that
   follows from benchmarking activities and further study.

4.  Test Setups

   For simple Capacity/Throughput Benchmarks, the Test Setup MUST be
   consistent with Figure 1 of [RFC2544], or Figure 2 when the tester's
   sender and receiver are different devices.

   +--------+                ,-----.    +--------+
   |        |               /       \   |        |
   |        |              /(  PE  .....|        |
   |        |             /  \  1  /    |        |
   | Test   |    ,-----. /    `-----'   | Test   |
   |        |   /       \/              |        |
   | Device |...(  CE   X               | Device |
   |        |   \  1   / \              |        |
   |        |    `-----'  \   ,-----.   |        |
   |        |              \ /       \  |        |
   |        |               \(  PE  ....|        |
   +--------+                \  2  /    +--------+
                              `-----'

      Figure 1  SUT for Throughput and other Ethernet Segment Tests

   In Figure 1, the System Under Test (SUT) is composed of a single CE
   device and two or more PE devices.

   The tester SHALL be connected to the CE and to every PE, and SHALL
   be capable of simultaneously sending and receiving frames on all
   ports with connectivity.  The tester SHALL be capable of generating
   multiple flows (according to a 5-tuple definition, or any subset of
   the 5-tuple).  The tester SHALL be able to control the IP capacity
   of sets of individual flows, and the presence of sets of flows on
   specific interface ports.

   The tester SHALL be capable of generating and receiving a full mesh
   of Unicast flows, as described in section 3.0 of [RFC2889]:

      "In fully meshed traffic, each interface of a DUT/SUT is set up
      to both receive and transmit frames to all the other interfaces
      under test."

   Other mandatory testing aspects described in [RFC2544] and [RFC2889]
   MUST be included, unless explicitly modified in the next section.

   The ingress and egress link speeds and link layer protocols MUST be
   specified and used to compute the maximum theoretical frame rate
   when respecting the minimum inter-frame gap.

   A second test case is where a BGP backbone implements MPLS-LDP to
   provide connectivity between multiple PE - ESI - CE locations.

   Test                                                      Test
   Device                                                    Device
                     EVI-1
   +---+             ,-----.                                 +---+
   |   |   ESI      /       \                                |   |
   |   |    1      /(  PE  ..... ESI                         |   |
   |   |          /  \  1  /   \  EVI 2                      |   |
   |   | ,-----. /    `-----'   \    ,-----.      +--+       |   |
   |   |/       \/               \  /       \     |  |       |   |
   |   |...( CE  X                X...(  PE  ...|CE|.|   |
   |   |  \  1  / \              /  \   3  /    | 2|     |   |
   |   |   `-----' \  ,-----.   /    `-----'    +--+     |   |
   |   |            \/       \ /                         |   |
   |   |            \(  PE  ../                          |   |
   +---+             \   2  /                            +---+
                      `-----'
                       EVI-2

     Figure 2  SUT with BGP & MPLS interconnecting multiple PE-ESI-CE
                                locations

   All Link speeds MUST be reported, along with complete device
   configurations in the SUT and Test Device(s).

   Additional Test Setups and configurations will be provided in this
   section, after review.

   One capacity benchmark pertains to the number of ESIs that a network
   with multiple PE - ESI - CE locations can support.

5.  Procedure for Full Mesh Throughput Characterization

   Objective: To characterize the ability of a DUT/SUT to process
   frames between the CE and one or more PEs in a multihomed
   connectivity scenario.  Figure 1 gives the test setup.

   The Procedure follows.

5.1.  Address Learning Phase

   "For every address, learning frames MUST be sent to the DUT/SUT to
   allow the DUT/SUT to update its address tables properly." [RFC2889]

5.2.  Test for a Single Frame Size and Number of Unicast Flows

   Each trial in the test requires configuring a number of flows (from
   100 to 100k) and a fixed frame size (64, 128, 256, 512, 1024, 1280,
   and 1518 octets, as per [RFC2544]).  Frame formats MUST be
   specified; they are as described in section 4 of [RFC2889].

5.3.  Detailed Procedure

   The Procedure SHALL follow section 5.1 of [RFC2889].

   Specifically, the Throughput measurement parameters found in section
   5.1.2 of [RFC2889] SHALL be configured and reported with the
   results.

   The procedure for transmitting Frames on each port is described in
   section 5.1.3 of [RFC2889] and SHALL be followed (adapting to the
   number of ports in the test setup).

   Once the traffic is started, the procedure for Measurements
   described in section 5.1.4 of [RFC2889] SHALL be followed (adapting
   to the number of ports in the test setup).  The section on
   Throughput measurement (5.1.4 of [RFC2889]) SHALL be followed.
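The maximum theoretical frame rate required in Section 4 follows from the link speed and frame size; for Ethernet, each frame on the wire also costs 8 octets of preamble/SFD plus the 12-octet minimum inter-frame gap. A sketch of the calculation (assuming Ethernet link-layer overhead):

```python
def max_theoretical_frame_rate(link_bps, frame_size_octets):
    """Frames per second a link carries at line rate: every frame
    occupies frame_size + 8 (preamble/SFD) + 12 (minimum inter-frame
    gap) octets of wire time."""
    bits_per_frame = (frame_size_octets + 8 + 12) * 8
    return link_bps / bits_per_frame

# 64-octet frames on 1 Gb/s Ethernet -> approximately 1,488,095 fps
rate = max_theoretical_frame_rate(1_000_000_000, 64)
```

This theoretical maximum bounds the Offered Load used in the Throughput search for each frame size.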
   In the case that one or more of the CE and PE are virtual
   implementations, the search algorithm of [TST009] that provides
   consistent results when faced with host transient activity SHOULD be
   used (Binary Search with Loss Verification).

5.4.  Test Repetition

   The test MUST be repeated N times for each frame size in the subset
   list, and each Throughput value made available for further
   processing (below).

5.5.  Benchmark Calculations

   For each Frame size, calculate the following summary statistics for
   the Throughput values over the N tests:

   o  Average (Benchmark)

   o  Minimum

   o  Maximum

   o  Standard Deviation

   Comparison of the per-PE results will determine how the load was
   balanced among the PEs.

5.6.  Reporting

   The recommendation for graphical reporting provided in Section 5.1.4
   of [RFC2889] SHOULD be followed, along with the specifications in
   Section 7 below.

6.  Procedure for Mass Withdrawal Characterization

   Objective: To characterize the ability of a DUT/SUT to process
   frames between the CE and one or more PEs in a multihomed
   connectivity scenario when a mass withdrawal takes place.  Figure 2
   gives the test setup.

   The Procedure follows.

6.1.  Address Learning Phase

   "For every address, learning frames MUST be sent to the DUT/SUT to
   allow the DUT/SUT to update its address tables properly." [RFC2889]

6.2.  Test for a Single Frame Size and Number of Flows

   Each trial in the test requires configuring a number of flows (from
   100 to 100k) and a fixed frame size (64, 128, 256, 512, 1024, 1280,
   and 1518 octets, as per [RFC2544]).

   The Offered Load SHALL be transmitted at the Throughput level
   previously determined (Section 5) for the selected Frame size and
   number of Flows in use.

   The Procedure SHALL follow section 5.1 of [RFC2889] (except there is
   no need to search for the Throughput level).  See Section 5 above
   for additional requirements, especially Section 5.3.
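The summary statistics listed in Section 5.5 (and required again in Section 6.4) can be computed over the N repetitions as, for example (a sketch using Python's statistics module; stdev is the sample standard deviation):

```python
from statistics import mean, stdev

def benchmark_summary(values):
    """Average (the reported Benchmark), Minimum, Maximum, and sample
    Standard Deviation over N repeated test results."""
    return {
        "average": mean(values),
        "minimum": min(values),
        "maximum": max(values),
        "std_dev": stdev(values) if len(values) > 1 else 0.0,
    }
```

The same helper applies whether the per-test values are Throughput levels, frame-loss counts, or restoration times.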
   When traffic has been sent for 5 seconds, one of the CE-PE links on
   the ESI SHALL be disabled, and the time of this action SHALL be
   recorded for further calculations.  For example, if the CE1 link to
   PE1 is disabled, this should trigger a Mass Withdrawal of EVI-1
   addresses, and the subsequent re-routing of traffic to PE2.

   Frame losses are expected to be recorded during the restoration
   time.  The time for restoration may be estimated as described in
   section 3.5 of [RFC6412].

6.3.  Test Repetition

   The test MUST be repeated N times for each frame size in the subset
   list, and each restoration time value made available for further
   processing (below).

6.4.  Benchmark Calculations

   For each Frame size and number of flows, calculate the following
   summary statistics for the Loss (or Time to return to the Throughput
   level after restoration) values over the N tests:

   o  Average (Benchmark)

   o  Minimum

   o  Maximum

   o  Standard Deviation

7.  Reporting

   The results SHOULD be reported in the format of a table with a row
   for each of the tested frame sizes and Numbers of Flows.  There
   SHOULD be columns for the frame size with number of flows, and for
   the resultant average frame count (or time) for each type of data
   stream tested.

   The number of tests Averaged for the Benchmark, N, MUST be reported.

   The Minimum, Maximum, and Standard Deviation across all complete
   tests SHOULD also be reported.

   The Corrected DUT Restoration Time SHOULD also be reported, as
   applicable.
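The Calculated Restoration Time reported above can be derived from the recorded frame loss with the loss-derived method (cf. Section 3.5 of [RFC6412]): at a constant Offered Load, each lost frame accounts for 1/rate seconds of outage. A sketch of that calculation:

```python
def restoration_time(frames_lost, offered_load_fps):
    """Loss-derived restoration time estimate: frames lost during the
    failure event divided by the offered frame rate (fps), assuming a
    constant Offered Load throughout the trial."""
    return frames_lost / offered_load_fps

# e.g. 1000 frames lost at an offered load of 25000 fps -> 0.04 s
```

The values used for frames_lost and offered_load_fps here are illustrative, not taken from the example results table.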
   +----------------+-------------------+----------------+-------------+
   | Frame Size,    | Ave Benchmark,    | Min,Max,StdDev | Calculated  |
   | octets + #     | fps, frames or    |                | Time, Sec   |
   | Flows          | time              |                |             |
   +----------------+-------------------+----------------+-------------+
   | 64,100         | 26000             | 25500,27000,20 | 0.00004     |
   +----------------+-------------------+----------------+-------------+

               Throughput or Loss/Restoration Time Results

   Static and configuration parameters:

      Number of test repetitions, N

      Minimum Step Size (during searches), in frames.

8.  Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a laboratory
   environment, with dedicated address space and the other constraints
   of [RFC2544].

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.  See [RFC6815].

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

9.  IANA Considerations

   This memo makes no requests of IANA.

10.  Acknowledgements

   Thanks to Sudhin Jacob for his review and comments on the bmwg-list.

   Thanks to Aman Shaikh for sharing his comments on the draft directly
   with the authors.

11.  References

11.1.  Normative References

   [RFC1242]  Bradner, S., "Benchmarking Terminology for Network
              Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
              July 1991, <https://www.rfc-editor.org/info/rfc1242>.

   [RFC1944]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 1944,
              DOI 10.17487/RFC1944, May 1996,
              <https://www.rfc-editor.org/info/rfc1944>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999,
              <https://www.rfc-editor.org/info/rfc2544>.

   [RFC2889]  Mandeville, R. and J. Perser, "Benchmarking Methodology
              for LAN Switching Devices", RFC 2889,
              DOI 10.17487/RFC2889, August 2000,
              <https://www.rfc-editor.org/info/rfc2889>.

   [RFC5180]  Popoviciu, C., Hamza, A., Van de Velde, G., and D.
              Dugatkin, "IPv6 Benchmarking Methodology for Network
              Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180,
              May 2008, <https://www.rfc-editor.org/info/rfc5180>.

   [RFC6201]  Asati, R., Pignataro, C., Calabria, F., and C. Olvera,
              "Device Reset Characterization", RFC 6201,
              DOI 10.17487/RFC6201, March 2011,
              <https://www.rfc-editor.org/info/rfc6201>.

   [RFC6412]  Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology
              for Benchmarking Link-State IGP Data-Plane Route
              Convergence", RFC 6412, DOI 10.17487/RFC6412, November
              2011, <https://www.rfc-editor.org/info/rfc6412>.

   [RFC6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
              "Applicability Statement for RFC 2544: Use on Production
              Networks Considered Harmful", RFC 6815,
              DOI 10.17487/RFC6815, November 2012,
              <https://www.rfc-editor.org/info/rfc6815>.

   [RFC6985]  Morton, A., "IMIX Genome: Specification of Variable
              Packet Sizes for Additional Testing", RFC 6985,
              DOI 10.17487/RFC6985, July 2013,
              <https://www.rfc-editor.org/info/rfc6985>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

11.2.  Informative References

   [OPNFV-2017]
              Cooper, T., Morton, A., and S. Rao, "Dataplane
              Performance, Capacity, and Benchmarking in OPNFV",
              June 2017.

   [RFC7432]  Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A.,
              Uttaro, J., Drake, J., and W. Henderickx, "BGP MPLS-Based
              Ethernet VPN", RFC 7432, DOI 10.17487/RFC7432, February
              2015, <https://www.rfc-editor.org/info/rfc7432>.

   [RFC8239]  Avramov, L. and J. Rapp, "Data Center Benchmarking
              Methodology", RFC 8239, DOI 10.17487/RFC8239, August
              2017, <https://www.rfc-editor.org/info/rfc8239>.
   [TST009]   Morton, R. A., "ETSI GS NFV-TST 009 V3.2.1 (2019-06),
              "Network Functions Virtualisation (NFV) Release 3;
              Testing; Specification of Networking Benchmarks and
              Measurement Methods for NFVI"", June 2019.

   [VSPERF-b2b]
              Morton, A., "Back2Back Testing Time Series (from CI)",
              June 2017.

   [VSPERF-BSLV]
              Morton, A. and S. Rao, "Evolution of Repeatability in
              Benchmarking: Fraser Plugfest (Summary for IETF BMWG)",
              July 2018.

Authors' Addresses

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acm@research.att.com

   Jim Uttaro
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Email: uttaro@att.com