1 Internet-Draft Bhuvaneswaran Vengainathan 2 Network Working Group Anton Basil 3 Intended Status: Informational Veryx Technologies 4 Expires: August 25, 2018 Mark Tassinari 5 Hewlett-Packard 6 Vishwas Manral 7 Nano Sec 8 Sarah Banks 9 VSS Monitoring 10 February 25, 2018 12 Benchmarking Methodology for SDN Controller Performance 13 draft-ietf-bmwg-sdn-controller-benchmark-meth-08 15 Abstract 17 This document defines the methodologies for benchmarking control 18 plane performance of SDN controllers. An SDN controller is a core 19 component in the software-defined networking architecture that controls 20 the network behavior. Terminology related to benchmarking SDN 21 controllers is described in the companion terminology document 22 [I-D.sdn-controller-benchmark-term]. SDN controllers have been implemented 23 with many varying designs in order to achieve their intended network 24 functionality. Hence, the authors have taken the approach of 25 considering an SDN controller as a black box, defining the 26 methodology in a manner that is agnostic to protocols and network 27 services supported by controllers. The intent of this document is to 28 provide a standard mechanism to measure the performance of all 29 controller implementations. 31 Status of this Memo 33 This Internet-Draft is submitted in full conformance with the 34 provisions of BCP 78 and BCP 79. 36 Internet-Drafts are working documents of the Internet Engineering 37 Task Force (IETF). Note that other groups may also distribute 38 working documents as Internet-Drafts. The list of current Internet- 39 Drafts is at http://datatracker.ietf.org/drafts/current. 41 Internet-Drafts are draft documents valid for a maximum of six 42 months and may be updated, replaced, or obsoleted by other documents 43 at any time. It is inappropriate to use Internet-Drafts as reference 44 material or to cite them other than as "work in progress." 46 This Internet-Draft will expire on August 25, 2018. 48 Copyright Notice 50 Copyright (c) 2018 IETF Trust and the persons identified as the 51 document authors. All rights reserved.
53 This document is subject to BCP 78 and the IETF Trust's Legal 54 Provisions Relating to IETF Documents 55 (http://trustee.ietf.org/license-info) in effect on the date of 56 publication of this document. Please review these documents 57 carefully, as they describe your rights and restrictions with 58 respect to this document. Code Components extracted from this 59 document must include Simplified BSD License text as described in 60 Section 4.e of the Trust Legal Provisions and are provided without 61 warranty as described in the Simplified BSD License. 63 Table of Contents 65 1. Introduction...................................................4 66 2. Scope..........................................................4 67 3. Test Setup.....................................................4 68 3.1. Test setup - Controller working in Standalone Mode........5 69 3.2. Test setup - Controller working in Cluster Mode...........6 70 4. Test Considerations............................................7 71 4.1. Network Topology..........................................7 72 4.2. Test Traffic..............................................7 73 4.3. Test Emulator Requirements................................7 74 4.4. Connection Setup..........................................7 75 4.5. Measurement Point Specification and Recommendation........8 76 4.6. Connectivity Recommendation...............................8 77 4.7. Test Repeatability........................................8 78 5. Benchmarking Tests.............................................9 79 5.1. Performance...............................................9 80 5.1.1. Network Topology Discovery Time......................9 81 5.1.2. Asynchronous Message Processing Time................11 82 5.1.3. Asynchronous Message Processing Rate................13 83 5.1.4. Reactive Path Provisioning Time.....................15 84 5.1.5. Proactive Path Provisioning Time....................16 85 5.1.6. Reactive Path Provisioning Rate.....................18 86 5.1.7. Proactive Path Provisioning Rate....................19 87 5.1.8. Network Topology Change Detection Time..............21 88 5.2. Scalability..............................................23 89 5.2.1. Control Session Capacity............................23 90 5.2.2. Network Discovery Size..............................23 91 5.2.3. Forwarding Table Capacity...........................24 92 5.3. Security.................................................26 93 5.3.1. Exception Handling..................................26 94 5.3.2. Denial of Service Handling..........................27 95 5.4. Reliability..............................................29 96 5.4.1. Controller Failover Time............................29 97 5.4.2. Network Re-Provisioning Time........................30 98 6. References....................................................32 99 6.1. Normative References.....................................32 100 6.2. Informative References...................................32 101 7. IANA Considerations...........................................32 102 8. Security Considerations.......................................32 103 9. Acknowledgments...............................................33 104 Appendix A. Example Test Topology................................34 105 A.1. Leaf-Spine Topology......................................34 106 Appendix B. Benchmarking Methodology using OpenFlow Controllers..35 107 B.1. Protocol Overview........................................35 108 B.2. 
Messages Overview........................................35 109 B.3. Connection Overview......................................35 110 B.4. Performance Benchmarking Tests...........................36 111 B.4.1. Network Topology Discovery Time.....................36 112 B.4.2. Asynchronous Message Processing Time................37 113 B.4.3. Asynchronous Message Processing Rate................38 114 B.4.4. Reactive Path Provisioning Time.....................39 115 B.4.5. Proactive Path Provisioning Time....................40 116 B.4.6. Reactive Path Provisioning Rate.....................41 117 B.4.7. Proactive Path Provisioning Rate....................42 118 B.4.8. Network Topology Change Detection Time..............43 119 B.5. Scalability..............................................44 120 B.5.1. Control Sessions Capacity...........................44 121 B.5.2. Network Discovery Size..............................44 122 B.5.3. Forwarding Table Capacity...........................45 123 B.6. Security.................................................47 124 B.6.1. Exception Handling..................................47 125 B.6.2. Denial of Service Handling..........................48 126 B.7. Reliability..............................................50 127 B.7.1. Controller Failover Time............................50 128 B.7.2. Network Re-Provisioning Time........................51 129 Authors' Addresses...............................................54 131 1. Introduction 133 This document provides generic methodologies for benchmarking SDN 134 controller performance. An SDN controller may support many 135 northbound and southbound protocols, implement a wide range of 136 applications, and work alone or as a group to achieve the desired 137 functionality. This document considers an SDN controller as a black 138 box, regardless of design and implementation. The tests defined in 139 the document can be used to benchmark an SDN controller for 140 performance, scalability, reliability, and security, independent of 141 northbound and southbound protocols. These tests can be performed on 142 an SDN controller running as a virtual machine (VM) instance or on a 143 bare metal server. This document is intended for those who want to 144 measure the performance of an SDN controller as well as compare the 145 performance of various SDN controllers. 147 Conventions used in this document 149 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 150 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 151 "OPTIONAL" in this document are to be interpreted as described in 152 BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all 153 capitals, as shown here. 155 2. Scope 157 This document defines a methodology to measure the networking metrics 158 of SDN controllers. For the purpose of this memo, the SDN controller 159 is a function that manages and controls Network Devices. Any SDN 160 controller without a control capability is out of scope for this 161 memo. The tests defined in this document enable benchmarking of SDN 162 Controllers in two ways: as a standalone controller and as a cluster 163 of homogeneous controllers. These tests are recommended for 164 execution in lab environments rather than in live network 165 deployments. Performance benchmarking of a federation of 166 controllers, i.e., a set of SDN controllers managing different domains, is 167 beyond the scope of this document. 169 3.
Test Setup 171 The tests defined in this document enable measurement of an SDN 172 controller's performance in standalone mode and cluster mode. This 173 section defines common reference topologies that are later referred 174 to in individual tests. 176 3.1. Test setup - Controller working in Standalone Mode 178 +-----------------------------------------------------------+ 179 | Application Plane Test Emulator | 180 | | 181 | +-----------------+ +-------------+ | 182 | | Application | | Service | | 183 | +-----------------+ +-------------+ | 184 | | 185 +-----------------------------+(I2)-------------------------+ 186 | 187 | (Northbound interfaces) 188 +-------------------------------+ 189 | +----------------+ | 190 | | SDN Controller | | 191 | +----------------+ | 192 | | 193 | Device Under Test (DUT) | 194 +-------------------------------+ 195 | (Southbound interfaces) 196 | 197 +-----------------------------+(I1)-------------------------+ 198 | | 199 | +-----------+ +-----------+ | 200 | | Network | | Network | | 201 | | Device 2 |--..-| Device n-1| | 202 | +-----------+ +-----------+ | 203 | / \ / \ | 204 | / \ / \ | 205 | l0 / X \ ln | 206 | / / \ \ | 207 | +-----------+ +-----------+ | 208 | | Network | | Network | | 209 | | Device 1 |..| Device n | | 210 | +-----------+ +-----------+ | 211 | | | | 212 | +---------------+ +---------------+ | 213 | | Test Traffic | | Test Traffic | | 214 | | Generator | | Generator | | 215 | | (TP1) | | (TP2) | | 216 | +---------------+ +---------------+ | 217 | | 218 | Forwarding Plane Test Emulator | 219 +-----------------------------------------------------------+ 221 Figure 1 223 3.2. Test setup - Controller working in Cluster Mode 225 +-----------------------------------------------------------+ 226 | Application Plane Test Emulator | 227 | | 228 | +-----------------+ +-------------+ | 229 | | Application | | Service | | 230 | +-----------------+ +-------------+ | 231 | | 232 +-----------------------------+(I2)-------------------------+ 233 | 234 | (Northbound interfaces) 235 +---------------------------------------------------------+ 236 | | 237 | ------------------ ------------------ | 238 | | SDN Controller 1 | <--E/W--> | SDN Controller n | | 239 | ------------------ ------------------ | 240 | | 241 | Device Under Test (DUT) | 242 +---------------------------------------------------------+ 243 | (Southbound interfaces) 244 | 245 +-----------------------------+(I1)-------------------------+ 246 | | 247 | +-----------+ +-----------+ | 248 | | Network | | Network | | 249 | | Device 2 |--..-| Device n-1| | 250 | +-----------+ +-----------+ | 251 | / \ / \ | 252 | / \ / \ | 253 | l0 / X \ ln | 254 | / / \ \ | 255 | +-----------+ +-----------+ | 256 | | Network | | Network | | 257 | | Device 1 |..| Device n | | 258 | +-----------+ +-----------+ | 259 | | | | 260 | +---------------+ +---------------+ | 261 | | Test Traffic | | Test Traffic | | 262 | | Generator | | Generator | | 263 | | (TP1) | | (TP2) | | 264 | +---------------+ +---------------+ | 265 | | 266 | Forwarding Plane Test Emulator | 267 +-----------------------------------------------------------+ 269 Figure 2 271 4. Test Considerations 273 4.1. Network Topology 275 The test cases SHOULD use Leaf-Spine topology with at least 1 276 Network Device in the topology for benchmarking. The test traffic 277 generators TP1 and TP2 SHOULD be connected to the first and the last 278 leaf Network Device. 
If a test case uses test topology with 1 279 Network Device, the test traffic generators TP1 and TP2 SHOULD be 280 connected to the same node. However, to achieve a complete 281 performance characterization of the SDN controller, it is 282 recommended that the controller be benchmarked for many network 283 topologies and a varying number of Network Devices. This document 284 includes a sample test topology, defined in Appendix A, 285 for reference. Further, care should be taken to make sure that a 286 loop prevention mechanism is enabled either in the SDN controller, 287 or in the network when the topology contains redundant network 288 paths. 290 4.2. Test Traffic 292 Test traffic is used to notify the controller about the asynchronous 293 arrival of new flows. The test cases SHOULD use frame sizes of 128, 294 512 and 1508 bytes for benchmarking. Tests using jumbo frames are 295 optional. 297 4.3. Test Emulator Requirements 299 The Test Emulator SHOULD time stamp the transmitted and received 300 control messages to/from the controller on the established network 301 connections. The test cases use these values to compute the 302 controller processing time. 304 4.4. Connection Setup 306 There may be controller implementations that support unencrypted and 307 encrypted network connections with Network Devices. Further, the 308 controller may have backward compatibility with Network Devices 309 running older versions of southbound protocols. It may be useful to 310 measure the controller performance with one or more applicable 311 connection setup methods defined below. For cases with encrypted 312 communications between the controller and the switch, key management 313 and key exchange MUST take place before any performance or benchmark 314 measurements. 316 1. Unencrypted connection with Network Devices, running the same 317 protocol version. 319 2. Unencrypted connection with Network Devices, running different 320 protocol versions. 321 Example: 322 a. Controller running current protocol version and switch 323 running older protocol version 324 b. Controller running older protocol version and switch 325 running current protocol version 326 3. Encrypted connection with Network Devices, running the same 327 protocol version 328 4. Encrypted connection with Network Devices, running different 329 protocol versions. 330 Example: 331 a. Controller running current protocol version and switch 332 running older protocol version 333 b. Controller running older protocol version and switch 334 running current protocol version 336 4.5. Measurement Point Specification and Recommendation 338 The measurement accuracy depends on several factors including the 339 point of observation where the indications are captured. For 340 example, the notification can be observed at the controller or test 341 emulator. The test operator SHOULD make the observations/ 342 measurements at the interfaces of the test emulator unless it is 343 explicitly mentioned otherwise in the individual test. In any case, 344 the locations of measurement points MUST be reported. 346 4.6. Connectivity Recommendation 348 The SDN controller in the test setup SHOULD be connected directly 349 with the forwarding and the management plane test emulators to avoid 350 any delays or failure introduced by the intermediate devices during 351 benchmarking tests. When the controller is implemented as a virtual 352 machine, details of the physical and logical connectivity MUST be 353 reported.
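To make the connection setup options of Section 4.4 concrete, the following is a minimal Python sketch (not part of the methodology) of how a test emulator might open a control connection to the controller, either unencrypted or encrypted, and timestamp the control messages it sends and receives as Section 4.3 requires. All names are illustrative and no particular southbound protocol is assumed; for encrypted connections, the TLS handshake (key exchange) completes inside wrap_socket() before any measurement begins.

   import socket
   import ssl
   import time

   class EmulatedDeviceConnection:
       # One southbound control connection from an emulated Network Device.
       def __init__(self, host, port, encrypted=False):
           sock = socket.create_connection((host, port))
           if encrypted:
               # Key exchange happens here, before any benchmark
               # measurement, as required in Section 4.4.
               context = ssl.create_default_context()
               sock = context.wrap_socket(sock, server_hostname=host)
           self.sock = sock
           self.tx_log = []  # (timestamp, message) for transmitted messages
           self.rx_log = []  # (timestamp, message) for received messages

       def send(self, message):
           self.sock.sendall(message)
           self.tx_log.append((time.monotonic(), message))

       def recv(self, bufsize=4096):
           message = self.sock.recv(bufsize)
           self.rx_log.append((time.monotonic(), message))
           return message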
355 4.7. Test Repeatability 357 To increase the confidence in the measured results, it is recommended 358 that each test SHOULD be repeated a minimum of 10 times. 360 Test Reporting 362 Each test has a reporting format that contains some global and 363 identical reporting components, and some individual components that 364 are specific to individual tests. The following test configuration 365 parameters and controller settings parameters MUST be reflected in 366 the test report. 368 Test Configuration Parameters: 370 1. Controller name and version 371 2. Northbound protocols and versions 372 3. Southbound protocols and versions 373 4. Controller redundancy mode (Standalone or Cluster Mode) 374 5. Connection setup (Unencrypted or Encrypted) 375 6. Network Device Type (Physical or Virtual or Emulated) 376 7. Number of Nodes 377 8. Number of Links 378 9. Dataplane Test Traffic Type 379 10. Controller System Configuration (e.g., Physical or Virtual 380 Machine, CPU, Memory, Caches, Operating System, Interface 381 Speed, Storage) 382 11. Reference Test Setup (e.g., Section 3.1) 384 Controller Settings Parameters: 385 1. Topology re-discovery timeout 386 2. Controller redundancy mode (e.g., active-standby) 387 3. Controller state persistence enabled/disabled 389 To ensure the repeatability of the test, the following capabilities of 390 the test emulator SHOULD be reported: 392 1. Maximum number of Network Devices that the forwarding plane 393 emulates 394 2. Control message processing time (e.g., Topology Discovery 395 Messages) 397 One way to determine the above two values is to simulate the 398 required control sessions and messages from the control plane. 400 5. Benchmarking Tests 402 5.1. Performance 404 5.1.1. Network Topology Discovery Time 406 Objective: 408 The time taken by controller(s) to determine the complete network 409 topology, defined as the interval starting with the first discovery 410 message from the controller(s) at its Southbound interface, ending 411 with all features of the static topology determined. 413 Reference Test Setup: 415 The test SHOULD use one of the test setups described in section 3.1 416 or section 3.2 of this document in combination with Appendix A. 418 Prerequisite: 420 1. The controller MUST support network discovery. 421 2. Tester should be able to retrieve the discovered topology 422 information either through the controller's management interface 423 or northbound interface to determine if the discovery was 424 successful and complete. 425 3. Ensure that the controller's topology re-discovery timeout has 426 been set to the maximum value to avoid initiation of the re-discovery 427 process in the middle of the test. 429 Procedure: 431 1. Ensure that the controller is operational and that its network 432 applications, northbound and southbound interfaces are up and 433 running. 434 2. Establish the network connections between controller and Network 435 Devices. 436 3. Record the time for the first discovery message (Tm1) received 437 from the controller at the forwarding plane test emulator interface 438 I1. 439 4. Query the controller every 3 seconds to obtain the discovered 440 network topology information through the northbound interface or 441 the management interface and compare it with the deployed network 442 topology information. 443 5. Stop the trial when the discovered topology information matches 444 the deployed network topology, or when the discovered topology 445 information returns the same details for 3 consecutive queries. 446 6.
Record the time of the last discovery message (Tmn) sent to the controller 447 from the forwarding plane test emulator interface (I1) when the 448 trial completes successfully (e.g., the topology matches). 450 Measurement: 452 Topology Discovery Time Tr1 = Tmn-Tm1. 454 Tr1 + Tr2 + Tr3 .. Trn 455 Average Topology Discovery Time (TDm) = ----------------------- 456 Total Trials 458 SUM[SQUAREOF(Tri-TDm)] 459 Topology Discovery Time Variance (TDv) = ---------------------- 460 Total Trials -1 462 Reporting Format: 464 The Topology Discovery Time results MUST be reported in the format 465 of a table, with a row for each successful iteration. The last row 466 of the table indicates the Topology Discovery Time variance and the 467 previous row indicates the average Topology Discovery Time. 469 If this test is repeated with a varying number of nodes over the same 470 topology, the results SHOULD be reported in the form of a graph. The 471 X coordinate SHOULD be the Number of nodes (N), the Y coordinate 472 SHOULD be the average Topology Discovery Time. 474 5.1.2. Asynchronous Message Processing Time 476 Objective: 478 The time taken by controller(s) to process an asynchronous message, 479 defined as the interval starting with an asynchronous message from a 480 network device after the discovery of all the devices by the 481 controller(s), ending with a response message from the controller(s) 482 at its Southbound interface. 484 Reference Test Setup: 486 This test SHOULD use one of the test setups described in section 3.1 487 or section 3.2 of this document in combination with Appendix A. 489 Prerequisite: 491 1. The controller MUST have successfully completed the network 492 topology discovery for the connected Network Devices. 494 Procedure: 496 1. Generate asynchronous messages from every connected Network 497 Device, to the SDN controller, one at a time in series from the 498 forwarding plane test emulator for the trial duration. 499 2. Record every request transmit time (T1) and the corresponding 500 response received time (R1) at the forwarding plane test emulator 501 interface (I1) for every successful message exchange. 503 Measurement: 505 (R1-T1) + (R2-T2)..(Rn-Tn) 506 Asynchronous Message Processing Time Tr1 = ----------------------- 507 Nrx 509 Where Nrx is the total number of successful messages exchanged 511 Tr1 + Tr2 + Tr3..Trn 512 Average Asynchronous Message Processing Time = -------------------- 513 Total Trials 515 Asynchronous Message Processing Time Variance (TAMv) = 517 SUM[SQUAREOF(Tri-TAMm)] 518 ---------------------- 519 Total Trials -1 521 Where TAMm is the Average Asynchronous Message Processing Time. 523 Reporting Format: 525 The Asynchronous Message Processing Time results MUST be reported in 526 the format of a table with a row for each iteration. The last row of 527 the table indicates the Asynchronous Message Processing Time 528 variance and the previous row indicates the average Asynchronous 529 Message Processing Time. 531 The report should capture the following information in addition to 532 the configuration parameters captured in section 5. 534 - Successful messages exchanged (Nrx) 536 - Percentage of unsuccessful messages exchanged, computed using the 537 formula ((1 - Nrx/Ntx) * 100), where Ntx is the total number of 538 messages transmitted to the controller. 540 If this test is repeated with a varying number of nodes over the same 541 topology, the results SHOULD be reported in the form of a graph. The 542 X coordinate SHOULD be the Number of nodes (N), the Y coordinate 543 SHOULD be the average Asynchronous Message Processing Time.
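The per-trial averages and variances used in this and the other Section 5 measurements follow the same pattern: a mean over all trials and a sample variance over Total Trials - 1. The following Python helper (illustrative only, with made-up trial values) computes both exactly as the formulas above define them; it assumes at least two trials, and Section 4.7 recommends a minimum of 10.

   def trial_statistics(trials):
       # trials: per-trial results [Tr1, Tr2, ..., Trn], e.g., in seconds
       n = len(trials)
       average = sum(trials) / n
       # Sample variance, divided by (Total Trials - 1) as in the formulas.
       variance = sum((tr - average) ** 2 for tr in trials) / (n - 1)
       return average, variance

   # Example: Asynchronous Message Processing Times from 10 trials.
   tamm, tamv = trial_statistics([0.021, 0.019, 0.022, 0.020, 0.023,
                                  0.018, 0.021, 0.020, 0.022, 0.019])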
545 5.1.3. Asynchronous Message Processing Rate 547 Objective: 549 Measure the number of responses to asynchronous messages (such as 550 new flow arrival notification message, etc.) for which the 551 controller(s) performed processing and replied with a valid and 552 productive (non-trivial) response message. 554 This test will measure two benchmarks on Asynchronous Message 555 Processing Rate using a single procedure. The two benchmarks are 556 (see section 2.3.1.3 of [I-D.sdn-controller-benchmark-term]): 558 1. Loss-free Asynchronous Message Processing Rate 560 2. Maximum Asynchronous Message Processing Rate 562 Here, the two benchmarks are determined through a series of trials in 563 which messages are sent to the controller(s), and the 564 responses from the controller(s) are counted over the trial 565 duration. The message response rate and the message loss ratio are 566 calculated for each trial. 568 Reference Test Setup: 570 The test SHOULD use one of the test setups described in section 3.1 571 or section 3.2 of this document in combination with Appendix A. 573 Prerequisite: 575 1. The controller(s) MUST have successfully completed the network 576 topology discovery for the connected Network Devices. 577 2. Choose and record the Trial Duration (Td), the sending rate step- 578 size (STEP), the tolerance on equality for two consecutive trials 579 (P%), and the maximum possible message sending rate (Ntx1/Td). 581 Procedure: 583 1. Generate asynchronous messages continuously at the maximum 584 possible rate on the established connections from all the 585 emulated/simulated Network Devices for the given trial duration 586 (Td). 587 2. Record the total number of responses received from the controller 588 (Nrx1) as well as the number of messages sent (Ntx1) to the 589 controller within the trial duration (Td). 591 3. Calculate the Asynchronous Message Processing Rate (Tr1) and 592 the Message Loss Ratio (Lr1). Ensure that the controller(s) have 593 returned to normal operation. 594 4. Repeat the trial by reducing the asynchronous message sending rate 595 used in the last trial by the STEP size. 596 5. Continue repeating the trials and reducing the sending rate until 597 both the maximum value of Nrxn and the Nrxn corresponding to zero 598 loss ratio have been found. 599 6. The trials corresponding to the benchmark levels MUST be repeated 600 using the same asynchronous message rates until the responses 601 received from the controller are equal (+/-P%) for two consecutive 602 trials. 603 7. Record the number of responses received from the controller (Nrxn) 604 as well as the number of messages sent (Ntxn) to the controller in 605 the last trial. 607 Measurement: 609 Nrxn 610 Asynchronous Message Processing Rate Trn = ----- 611 Td 613 Maximum Asynchronous Message Processing Rate = MAX(Trn) for all n 615 Nrxn 616 Asynchronous Message Loss Ratio Lrn = 1 - ----- 617 Ntxn 619 Loss-free Asynchronous Message Processing Rate = MAX(Trn) given 620 Lrn=0
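As an illustration, the Python sketch below (field names are illustrative) derives the two benchmarks from the trials recorded by the procedure above: each trial contributes the number of messages sent (Ntxn), the number of responses received (Nrxn), and the rate Trn = Nrxn/Td; the benchmarks are the maximum rate over all trials and the maximum rate among the zero-loss trials.

   def async_rate_benchmarks(trials, td):
       # trials: list of (ntx, nrx) pairs, one per trial; td: trial duration
       rates = [nrx / td for ntx, nrx in trials]               # Trn = Nrxn/Td
       losses = [1 - nrx / ntx for ntx, nrx in trials]         # Lrn = 1 - Nrxn/Ntxn
       maximum_rate = max(rates)                               # MAX(Trn) for all n
       loss_free = [r for r, l in zip(rates, losses) if l == 0]
       loss_free_rate = max(loss_free) if loss_free else None  # MAX(Trn) given Lrn=0
       return maximum_rate, loss_free_rate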
622 Reporting Format: 624 The Asynchronous Message Processing Rate results MUST be reported in 625 the format of a table with a row for each trial. 627 The table should report the following information in addition to the 628 configuration parameters captured in section 5, with columns: 630 - Offered rate (Ntxn/Td) 632 - Asynchronous Message Processing Rate (Trn) 634 - Loss Ratio (Lr) 636 - Benchmark at this iteration (blank for none, Maximum, Loss-Free) 637 The results MAY be presented in the form of a graph. The X axis 638 SHOULD be the Offered rate, and dual Y axes would represent 639 Asynchronous Message Processing Rate and Loss Ratio, respectively. 641 If this test is repeated with a varying number of nodes over the same 642 topology, the results SHOULD be reported in the form of a graph. The 643 X axis SHOULD be the Number of nodes (N), the Y axis SHOULD be the 644 Asynchronous Message Processing Rate. Both the Maximum and the Loss- 645 Free Rates should be plotted for each N. 647 5.1.4. Reactive Path Provisioning Time 649 Objective: 651 The time taken by the controller to set up a path reactively between 652 source and destination node, defined as the interval starting with 653 the first flow provisioning request message received by the 654 controller(s) at its Southbound interface, ending with the last flow 655 provisioning response message sent from the controller(s) at its 656 Southbound interface. 658 Reference Test Setup: 660 The test SHOULD use one of the test setups described in section 3.1 661 or section 3.2 of this document in combination with Appendix A. The 662 number of Network Devices in the path is a parameter of the test 663 that may be varied from 2 to the maximum discovery size in repetitions 664 of this test. 666 Prerequisite: 668 1. The controller MUST contain the network topology information for 669 the deployed network topology. 670 2. The controller should have the knowledge about the location of 671 destination endpoint for which the path has to be provisioned. 672 This can be achieved through dynamic learning or static 673 provisioning. 674 3. Ensure that the default action for 'flow miss' in Network Device 675 is configured to 'send to controller'. 676 4. Ensure that each Network Device in a path requires the controller 677 to make the forwarding decision while paving the entire path. 679 Procedure: 681 1. Send a single traffic stream from the test traffic generator TP1 682 to test traffic generator TP2. 684 2. Record the time of the first flow provisioning request message 685 sent to the controller (Tsf1) from the Network Device at the 686 forwarding plane test emulator interface (I1). 687 3. Wait for the arrival of the first traffic frame at the Traffic 688 Endpoint TP2 or the expiry of the trial duration (Td). 689 4. Record the time of the last flow provisioning response message 690 received from the controller (Tdf1) to the Network Device at the 691 forwarding plane test emulator interface (I1). 693 Measurement: 695 Reactive Path Provisioning Time Tr1 = Tdf1-Tsf1. 697 Tr1 + Tr2 + Tr3 .. Trn 698 Average Reactive Path Provisioning Time = ----------------------- 699 Total Trials 701 SUM[SQUAREOF(Tri-TRPm)] 702 Reactive Path Provisioning Time Variance (TRPv) = --------------------- 703 Total Trials -1 705 Where TRPm is the Average Reactive Path Provisioning Time. 707 Reporting Format: 709 The Reactive Path Provisioning Time results MUST be reported in the 710 format of a table with a row for each iteration. The last row of the 711 table indicates the Reactive Path Provisioning Time variance and the 712 previous row indicates the Average Reactive Path Provisioning Time. 714 The report should capture the following information in addition to 715 the configuration parameters captured in section 5. 717 - Number of Network Devices in the path
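For one trial of this test, Tsf1 and Tdf1 can be recovered from a timestamped log of southbound messages captured at interface I1. A minimal Python sketch follows; the message kinds are hypothetical placeholders for the southbound protocol's flow setup request and response messages.

   def reactive_provisioning_time(messages):
       # messages: (timestamp, kind) tuples captured at I1, in arrival order
       requests = [t for t, kind in messages if kind == "flow_request"]
       responses = [t for t, kind in messages if kind == "flow_response"]
       tsf1 = min(requests)   # first flow provisioning request (Tsf1)
       tdf1 = max(responses)  # last flow provisioning response (Tdf1)
       return tdf1 - tsf1     # Tr1 = Tdf1 - Tsf1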
719 5.1.5. Proactive Path Provisioning Time 721 Objective: 723 The time taken by the controller to set up a path proactively between 724 source and destination node, defined as the interval starting with 725 the first proactive flow provisioned in the controller(s) at its 726 Northbound interface, ending with the last flow provisioning 727 response message sent from the controller(s) at its Southbound 728 interface. 730 Reference Test Setup: 732 The test SHOULD use one of the test setups described in section 3.1 733 or section 3.2 of this document in combination with Appendix A. 735 Prerequisite: 737 1. The controller MUST contain the network topology information for 738 the deployed network topology. 739 2. The controller should have the knowledge about the location of 740 destination endpoint for which the path has to be provisioned. 741 This can be achieved through dynamic learning or static 742 provisioning. 743 3. Ensure that the default action for flow miss in Network Device is 744 'drop'. 746 Procedure: 748 1. Send a single traffic stream from test traffic generator TP1 to 749 TP2. 750 2. Install the flow entries to reach from test traffic generator TP1 751 to the test traffic generator TP2 through the controller's northbound 752 or management interface. 753 3. Wait for the arrival of the first traffic frame at the test traffic 754 generator TP2 or the expiry of the trial duration (Td). 755 4. Record the time when the proactive flow is provisioned in the 756 Controller (Tsf1) at the management plane test emulator interface 757 I2. 758 5. Record the time of the last flow provisioning message received 759 from the controller (Tdf1) at the forwarding plane test emulator 760 interface I1. 762 Measurement: 764 Proactive Path Provisioning Time Tr1 = Tdf1-Tsf1. 766 Tr1 + Tr2 + Tr3 .. Trn 767 Average Proactive Path Provisioning Time = ----------------------- 768 Total Trials 770 SUM[SQUAREOF(Tri-TPPm)] 771 Proactive Path Provisioning Time Variance (TPPv) = -------------------- 772 Total Trials -1 774 Where TPPm is the Average Proactive Path Provisioning Time. 776 Reporting Format: 778 The Proactive Path Provisioning Time results MUST be reported in the 779 format of a table with a row for each iteration. The last row of the 780 table indicates the Proactive Path Provisioning Time variance and 781 the previous row indicates the Average Proactive Path Provisioning 782 Time. 784 The report should capture the following information in addition to 785 the configuration parameters captured in section 5. 787 - Number of Network Devices in the path 789 5.1.6. Reactive Path Provisioning Rate 791 Objective: 793 The maximum number of independent paths a controller can 794 concurrently establish per second between source and destination 795 nodes reactively, defined as the number of paths provisioned per 796 second by the controller(s) at its Southbound interface for the flow 797 provisioning requests received for path provisioning at its 798 Southbound interface between the start of the test and the expiry of 799 the given trial duration. 801 Reference Test Setup: 803 The test SHOULD use one of the test setups described in section 3.1 804 or section 3.2 of this document in combination with Appendix A. 806 Prerequisite: 808 1. The controller MUST contain the network topology information for 809 the deployed network topology. 810 2.
The controller should have the knowledge about the location of 811 destination addresses for which the paths have to be provisioned. 812 This can be achieved through dynamic learning or static 813 provisioning. 814 3. Ensure that the default action for 'flow miss' in Network Device 815 is configured to 'send to controller'. 816 4. Ensure that each Network Device in a path requires the controller 817 to make the forwarding decision while provisioning the entire 818 path. 820 Procedure: 822 1. Send traffic with unique source and destination addresses from 823 test traffic generator TP1. 824 2. Record the total number of unique traffic frames (Ndf) received at the 825 test traffic generator TP2 within the trial duration (Td). 827 Measurement: 829 Ndf 830 Reactive Path Provisioning Rate Tr1 = ------ 831 Td 833 Tr1 + Tr2 + Tr3 .. Trn 834 Average Reactive Path Provisioning Rate = ------------------------ 835 Total Trials 837 SUM[SQUAREOF(Tri-RPPm)] 838 Reactive Path Provisioning Rate Variance (RPPv) = -------------------- 839 Total Trials -1 841 Where RPPm is the Average Reactive Path Provisioning Rate. 843 Reporting Format: 845 The Reactive Path Provisioning Rate results MUST be reported in the 846 format of a table with a row for each iteration. The last row of the 847 table indicates the Reactive Path Provisioning Rate variance and the 848 previous row indicates the Average Reactive Path Provisioning Rate. 850 The report should capture the following information in addition to 851 the configuration parameters captured in section 5. 853 - Number of Network Devices in the path 855 - Offered rate 857 5.1.7. Proactive Path Provisioning Rate 859 Objective: 861 Measure the maximum number of independent paths a controller can 862 concurrently establish per second between source and destination 863 nodes proactively, defined as the number of paths provisioned per 864 second by the controller(s) at its Southbound interface for the 865 paths requested in its Northbound interface between the start of the 866 test and the expiry of the given trial duration. The measurement is 867 based on dataplane observations of successful path activation. 869 Reference Test Setup: 871 The test SHOULD use one of the test setups described in section 3.1 872 or section 3.2 of this document in combination with Appendix A. 874 Prerequisite: 876 1. The controller MUST contain the network topology information for 877 the deployed network topology. 879 2. The controller should have the knowledge about the location of 880 destination addresses for which the paths have to be provisioned. 881 This can be achieved through dynamic learning or static 882 provisioning. 884 3. Ensure that the default action for flow miss in Network Device is 885 'drop'. 887 Procedure: 889 1. Send traffic continuously with unique source and destination 890 addresses from test traffic generator TP1. 892 2. Install corresponding flow entries to reach from simulated 893 sources at the test traffic generator TP1 to the simulated 894 destinations at test traffic generator TP2 through the controller's 895 northbound or management interface. 897 3. Record the total number of unique traffic frames received (Ndf) at the 898 test traffic generator TP2 within the trial duration (Td). 900 Measurement: 902 Ndf 903 Proactive Path Provisioning Rate Tr1 = ------ 904 Td 906 Tr1 + Tr2 + Tr3 ..
Trn 907 Average Proactive Path Provisioning Rate = ----------------------- 908 Total Trials 909 SUM[SQUAREOF(Tri-PPPm)] 910 Proactive Path Provisioning Rate Variance (PPPv) = -------------------- 911 Total Trials -1 913 Where PPPm is the Average Proactive Path Provisioning Rate. 915 Reporting Format: 917 The Proactive Path Provisioning Rate results MUST be reported in the 918 format of a table with a row for each iteration. The last row of the 919 table indicates the Proactive Path Provisioning Rate variance and 920 the previous row indicates the Average Proactive Path Provisioning 921 Rate. 923 The report should capture the following information in addition to 924 the configuration parameters captured in section 5. 926 - Number of Network Devices in the path 928 - Offered rate 930 5.1.8. Network Topology Change Detection Time 932 Objective: 934 The amount of time required for the controller to detect any changes 935 in the network topology, defined as the interval starting with the 936 notification message received by the controller(s) at its Southbound 937 interface, ending with the first topology rediscovery message sent 938 from the controller(s) at its Southbound interface. 940 Reference Test Setup: 942 The test SHOULD use one of the test setups described in section 3.1 943 or section 3.2 of this document in combination with Appendix A. 945 Prerequisite: 947 1. The controller MUST have successfully discovered the network 948 topology information for the deployed network topology. 950 2. The periodic network discovery operation should be configured to 951 twice the trial duration (Td) value. 953 Procedure: 955 1. Trigger a topology change event by bringing down an active 956 Network Device in the topology. 958 2. Record the time when the first topology change notification is 959 sent to the controller (Tcn) at the forwarding plane test emulator 960 interface (I1). 962 3. Stop the trial when the controller sends the first topology re- 963 discovery message to the Network Device or the expiry of the trial 964 duration (Td). 966 4. Record the time when the first topology re-discovery message is 967 received from the controller (Tcd) at the forwarding plane test 968 emulator interface (I1). 970 Measurement: 972 Network Topology Change Detection Time Tr1 = Tcd-Tcn. 974 Tr1 + Tr2 + Tr3 .. Trn 975 Average Network Topology Change Detection Time = ------------------ 976 Total Trials 978 Network Topology Change Detection Time Variance (NTDv) = 980 SUM[SQUAREOF(Tri-NTDm)] 981 ----------------------- 982 Total Trials -1 984 Where NTDm is the Average Network Topology Change Detection Time. 986 Reporting Format: 988 The Network Topology Change Detection Time results MUST be reported 989 in the format of a table with a row for each iteration. The last row 990 of the table indicates the Network Topology Change Detection Time 991 variance and the previous row indicates the average Network Topology 992 Change Detection Time. 994 5.2. Scalability 996 5.2.1. Control Session Capacity 998 Objective: 1000 Measure the maximum number of control sessions the controller can 1001 maintain, defined as the number of sessions that the controller can 1002 accept from network devices, starting with the first control 1003 session, ending with the last control session that the controller(s) 1004 accepts at its Southbound interface. 1006 Reference Test Setup: 1008 The test SHOULD use one of the test setups described in section 3.1 1009 or section 3.2 of this document in combination with Appendix A. 1011 Procedure: 1013 1.
Establish a control connection with the controller from every Network 1014 Device emulated in the forwarding plane test emulator. 1015 2. Stop the trial when the controller starts dropping the control 1016 connections. 1017 3. Record the number of successful connections established with the 1018 controller (CCn) at the forwarding plane test emulator. 1020 Measurement: 1022 Control Sessions Capacity = CCn. 1024 Reporting Format: 1026 The Control Session Capacity results MUST be reported in addition to 1027 the configuration parameters captured in section 5. 1029 5.2.2. Network Discovery Size 1031 Objective: 1033 Measure the network size (number of nodes, links and hosts) that a 1034 controller can discover, defined as the size of a network that the 1035 controller(s) can discover, starting from a network topology given 1036 by the user for discovery, ending with the topology that the 1037 controller(s) could successfully discover. 1039 Reference Test Setup: 1041 The test SHOULD use one of the test setups described in section 3.1 1042 or section 3.2 of this document in combination with Appendix A. 1044 Prerequisite: 1046 1. The controller MUST support automatic network discovery. 1047 2. Tester should be able to retrieve the discovered topology 1048 information either through the controller's management interface or 1049 northbound interface. 1051 Procedure: 1053 1. Establish the network connections between controller and network 1054 nodes. 1055 2. Query the controller for the discovered network topology 1056 information and compare it with the deployed network topology 1057 information. 1058 3. If the comparison is successful, increase the number of nodes by 1 1059 and repeat the trial. 1060 If the comparison is unsuccessful, decrease the number of nodes by 1061 1 and repeat the trial. 1062 4. Continue the trial until the comparison of step 3 is successful. 1063 5. Record the number of nodes for the last trial (Ns) where the 1064 topology comparison was successful. 1066 Measurement: 1068 Network Discovery Size = Ns. 1070 Reporting Format: 1072 The Network Discovery Size results MUST be reported in addition to 1073 the configuration parameters captured in section 5. 1075 5.2.3. Forwarding Table Capacity 1077 Objective: 1079 Measure the maximum number of flow entries a controller can manage 1080 in its Forwarding table. 1082 Reference Test Setup: 1084 The test SHOULD use one of the test setups described in section 3.1 1085 or section 3.2 of this document in combination with Appendix A. 1087 Prerequisite: 1089 1. The controller Forwarding table should be empty. 1090 2. Flow Idle time MUST be set to a higher or infinite value. 1091 3. The controller MUST have successfully completed network topology 1092 discovery. 1093 4. Tester should be able to retrieve the forwarding table information 1094 either through the controller's management interface or northbound 1095 interface. 1097 Procedure: 1099 Reactive Flow Provisioning Mode: 1101 1. Send bi-directional traffic continuously with unique source and/or 1102 destination addresses from test traffic generators TP1 and TP2 at 1103 the asynchronous message processing rate of the controller. 1104 2. Query the controller at a regular interval (e.g., 5 seconds) for 1105 the number of learned flow entries from its northbound interface. 1106 3. Stop the trial when the retrieved value is constant for three 1107 consecutive iterations and record the value received from the last 1108 query (Nrp). 1110 Proactive Flow Provisioning Mode: 1112 1.
Install unique flows continuously through the controller's northbound 1113 or management interface until a failure response is received from 1114 the controller. 1115 2. Record the total number of successful responses (Nrp). 1117 Note: 1119 Some controller designs for proactive flow provisioning mode may 1120 require the switch to send flow setup requests in order to generate 1121 flow setup responses. In such cases, it is recommended to generate 1122 bi-directional traffic for the provisioned flows. 1124 Measurement: 1126 Proactive Flow Provisioning Mode: 1128 Max Flow Entries = Total number of flows provisioned (Nrp) 1130 Reactive Flow Provisioning Mode: 1132 Max Flow Entries = Total number of learned flow entries (Nrp) 1134 Forwarding Table Capacity = Max Flow Entries. 1136 Reporting Format: 1138 The Forwarding Table Capacity results MUST be tabulated with the 1139 following information in addition to the configuration parameters 1140 captured in section 5. 1142 - Provisioning Type (Proactive/Reactive) 1144 5.3. Security 1146 5.3.1. Exception Handling 1148 Objective: 1150 Determine the effect of handling error packets and notifications on 1151 performance tests. The impact MUST be measured for the following 1152 performance tests: 1154 a. Path Provisioning Rate 1156 b. Path Provisioning Time 1158 c. Network Topology Change Detection Time 1160 Reference Test Setup: 1162 The test SHOULD use one of the test setups described in section 3.1 1163 or section 3.2 of this document in combination with Appendix A. 1165 Prerequisite: 1167 1. This test MUST be performed after obtaining the baseline 1168 measurement results for the above performance tests. 1170 2. Ensure that the invalid messages are not dropped by the 1171 intermediate devices connecting the controller and Network 1172 Devices. 1174 Procedure: 1176 1. Perform the above listed performance tests and send 1% of messages 1177 from the Asynchronous Message Processing Rate as invalid messages 1178 from the connected Network Devices emulated at the forwarding 1179 plane test emulator. 1180 2. Perform the above listed performance tests and send 2% of messages 1181 from the Asynchronous Message Processing Rate as invalid messages 1182 from the connected Network Devices emulated at the forwarding 1183 plane test emulator. 1185 Note: 1187 Invalid messages can be frames with incorrect protocol fields or any 1188 form of failure notifications sent towards the controller. 1190 Measurement: 1192 Measurement MUST be done as per the equation defined in the 1193 corresponding performance test measurement section. 1195 Reporting Format: 1197 The Exception Handling results MUST be reported in the format of a 1198 table with a column for each of the below parameters and a row for 1199 each of the listed performance tests. 1201 - Without Exceptions 1203 - With 1% Exceptions 1205 - With 2% Exceptions 1207 5.3.2. Denial of Service Handling 1209 Objective: 1211 Determine the effect of handling DoS attacks on performance and 1212 scalability tests. The impact MUST be measured for the following 1213 tests: 1215 a. Path Provisioning Rate 1217 b. Path Provisioning Time 1219 c. Network Topology Change Detection Time 1221 d. Network Discovery Size 1223 Reference Test Setup: 1225 The test SHOULD use one of the test setups described in section 3.1 1226 or section 3.2 of this document in combination with Appendix A. 1228 Prerequisite: 1230 This test MUST be performed after obtaining the baseline measurement 1231 results for the above tests. 1233 Procedure: 1235 1.
Perform the listed tests and launch a DoS attack towards the 1236 controller while the trial is running. 1238 Note: 1240 DoS attacks can be launched on one of the following interfaces. 1242 a. Northbound (e.g., Query for flow entries continuously on 1243 northbound interface) 1244 b. Management (e.g., Ping requests to controller's management 1245 interface) 1246 c. Southbound (e.g., TCP SYN messages on southbound interface) 1248 Measurement: 1250 Measurement MUST be done as per the equation defined in the 1251 corresponding test's measurement section. 1253 Reporting Format: 1255 The DoS Attacks Handling results MUST be reported in the format of a 1256 table with a column for each of the below parameters and a row for 1257 each of the listed tests. 1259 - Without any attacks 1261 - With attacks 1263 The report should also specify the nature of attack and the 1264 interface. 1266 5.4. Reliability 1268 5.4.1. Controller Failover Time 1270 Objective: 1272 The time taken to switch from an active controller to the backup 1273 controller, when the controllers work in redundancy mode and the 1274 active controller fails, defined as the interval starting with the 1275 active controller being brought down, ending with the first re-discovery 1276 message received from the new controller at its Southbound 1277 interface. 1279 Reference Test Setup: 1281 The test SHOULD use the test setup described in section 3.2 of this 1282 document in combination with Appendix A. 1284 Prerequisite: 1286 1. Master controller election MUST be completed. 1287 2. Nodes are connected to the controller cluster as per the 1288 Redundancy Mode (RM). 1289 3. The controller cluster should have successfully completed the 1290 network topology discovery. 1291 4. The Network Device MUST send all new flows to the controller when 1292 it receives them from the test traffic generator. 1293 5. The controller should have learned the location of destination (D1) at 1294 test traffic generator TP2. 1296 Procedure: 1298 1. Send uni-directional traffic continuously with incrementing 1299 sequence numbers and source addresses from test traffic generator 1300 TP1 at the rate that the controller processes without any drops. 1301 2. Ensure that there are no packet drops observed at the test traffic 1302 generator TP2. 1303 3. Bring down the active controller. 1304 4. Stop the trial when the first frame is received on TP2 after the failover 1305 operation. 1307 5. Record the time at which the last valid frame was received (T1) at 1308 test traffic generator TP2 before the sequence error and the first 1309 valid frame was received (T2) after the sequence error at TP2. 1311 Measurement: 1313 Controller Failover Time = (T2 - T1) 1315 Packet Loss = Number of missing packet sequences. 1317 Reporting Format: 1319 The Controller Failover Time results MUST be tabulated with the 1320 following information. 1322 - Number of cluster nodes 1324 - Redundancy mode 1326 - Controller Failover Time 1328 - Packet Loss 1330 - Cluster keep-alive interval 1332 5.4.2. Network Re-Provisioning Time 1334 Objective: 1336 The time taken by the controller to re-route the traffic when there 1337 is a failure in existing traffic paths, defined as the interval 1338 starting from the first failure notification message received by the 1339 controller, ending with the last flow re-provisioning message sent 1340 by the controller at its Southbound interface.
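Both reliability tests locate a disruption by finding the gap in the received sequence numbers: T1/T2 in Section 5.4.1, and the per-direction Tlfr/Tffr pairs in Section 5.4.2. A minimal Python sketch of that analysis follows, assuming the test traffic generator exports (timestamp, sequence number) pairs in arrival order; the size of the gap gives the corresponding packet loss.

   def disruption_interval(received):
       # received: (timestamp, sequence) pairs in arrival order at TP1 or TP2
       for (t_prev, s_prev), (t_next, s_next) in zip(received, received[1:]):
           if s_next != s_prev + 1:  # first break in the sequence
               lost = s_next - s_prev - 1
               return t_prev, t_next, lost  # (T1, T2, Packet Loss)
       return None  # no gap observed during the trial

   # Controller Failover Time = T2 - T1; for Section 5.4.2, apply the same
   # function per direction to obtain Tlfr and Tffr at TP1 and at TP2.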
1342 Reference Test Setup: 1344 This test SHOULD use one of the test setups described in section 3.1 1345 or section 3.2 of this document in combination with Appendix A. 1347 Prerequisite: 1348 1. Network with the given number of nodes and redundant paths MUST be 1349 deployed. 1351 2. The controller MUST have knowledge about the location 1352 of test traffic generators TP1 and TP2. 1353 3. Ensure that the controller does not pre-provision the alternate 1354 path in the emulated Network Devices at the forwarding plane test 1355 emulator. 1357 Procedure: 1359 1. Send bi-directional traffic continuously with unique sequence 1360 numbers from TP1 and TP2. 1361 2. Bring down a link or switch in the traffic path. 1362 3. Stop the trial after receiving the first frame after network re- 1363 convergence. 1364 4. Record the time of the last received frame prior to the frame loss at 1365 TP2 (TP2-Tlfr) and the time of the first frame received after the 1366 frame loss at TP2 (TP2-Tffr). There must be a gap in the sequence 1367 numbers of these frames. 1368 5. Record the time of the last received frame prior to the frame loss at 1369 TP1 (TP1-Tlfr) and the time of the first frame received after the 1370 frame loss at TP1 (TP1-Tffr). 1372 Measurement: 1374 Forward Direction Path Re-Provisioning Time (FDRT) 1375 = (TP2-Tffr - TP2-Tlfr) 1377 Reverse Direction Path Re-Provisioning Time (RDRT) 1378 = (TP1-Tffr - TP1-Tlfr) 1380 Network Re-Provisioning Time = (FDRT+RDRT)/2 1382 Forward Direction Packet Loss = Number of missing sequence frames 1383 at TP2 1385 Reverse Direction Packet Loss = Number of missing sequence frames 1386 at TP1 1388 Reporting Format: 1390 The Network Re-Provisioning Time results MUST be tabulated with the 1391 following information. 1393 - Number of nodes in the primary path 1395 - Number of nodes in the alternate path 1396 - Network Re-Provisioning Time 1398 - Forward Direction Packet Loss 1400 - Reverse Direction Packet Loss 1402 6. References 1404 6.1. Normative References 1406 [RFC2119] S. Bradner, "Key words for use in RFCs to Indicate 1407 Requirement Levels", RFC 2119, March 1997. 1409 [RFC8174] B. Leiba, "Ambiguity of Uppercase vs Lowercase in RFC 1410 2119 Key Words", RFC 8174, May 2017. 1412 [I-D.sdn-controller-benchmark-term] Bhuvaneswaran.V, Anton Basil, 1413 Mark.T, Vishwas Manral, Sarah Banks, "Terminology for 1414 Benchmarking SDN Controller Performance", 1415 draft-ietf-bmwg-sdn-controller-benchmark-term-08 1416 (Work in progress), February 25, 2018 1418 6.2. Informative References 1420 [OpenFlow Switch Specification] ONF, "OpenFlow Switch Specification", 1421 Version 1.4.0 (Wire Protocol 0x05), October 14, 2013. 1423 7. IANA Considerations 1425 This document does not have any IANA requests. 1427 8. Security Considerations 1429 Benchmarking tests described in this document are limited to the 1430 performance characterization of controllers in a lab environment 1431 with an isolated network. 1433 The benchmarking network topology will be an independent test setup 1434 and MUST NOT be connected to devices that may forward the test 1435 traffic into a production network, or misroute traffic to the test 1436 management network. 1438 Further, benchmarking is performed on a "black-box" basis, relying 1439 solely on measurements observable external to the controller. 1441 Special capabilities SHOULD NOT exist in the controller specifically 1442 for benchmarking purposes.
Any implications for network security 1443 arising from the controller SHOULD be identical in the lab and in 1444 production networks. 1446 9. Acknowledgments 1448 The authors would like to thank the following individuals for 1449 providing their valuable comments to the earlier versions of this 1450 document: Al Morton (AT&T), Sandeep Gangadharan (HP), M. Georgescu 1451 (NAIST), Andrew McGregor (Google), Scott Bradner, Jay Karthik 1452 (Cisco), Ramakrishnan (Dell), Khasanov Boris (Huawei), Brian 1453 Castelli (Spirent) 1455 This document was prepared using 2-Word-v2.0.template.dot. 1457 Appendix A. Example Test Topology 1459 A.1. Leaf-Spine Topology 1461 +------+ +------+ 1462 | SDN | | SDN | (Spine) 1463 | Node |.. | Node | 1464 +------+ +------+ 1465 / \ / \ 1466 / \ / \ 1467 l1 / / \ ln 1468 / / \ \ 1469 +--------+ +-------+ 1470 | SDN | | SDN | 1471 | Node |.. | Node | (Leaf) 1472 +--------+ +-------+ 1474 Appendix B. Benchmarking Methodology using OpenFlow Controllers 1476 This section gives an overview of the OpenFlow protocol and provides a 1477 test methodology to benchmark SDN controllers supporting the OpenFlow 1478 southbound protocol. 1480 B.1. Protocol Overview 1482 OpenFlow is an open standard protocol defined by the Open Networking 1483 Foundation (ONF) [OpenFlow Switch Specification], used for 1484 programming the forwarding plane of network switches or routers via 1485 a centralized controller. 1487 B.2. Messages Overview 1489 The OpenFlow protocol supports three message types, namely controller- 1490 to-switch, asynchronous, and symmetric. 1492 Controller-to-switch messages are initiated by the controller and 1493 used to directly manage or inspect the state of the switch. These 1494 messages allow controllers to query/configure the switch (Features, 1495 Configuration messages), collect information from the switch (Read-State 1496 message), send packets on a specified port of the switch (Packet-out 1497 message), and modify the switch forwarding plane and state (Modify- 1498 State, Role-Request messages, etc.). 1500 Asynchronous messages are generated by the switch without a 1501 controller soliciting them. These messages allow switches to update 1502 controllers to denote the arrival of a new flow (Packet-in), a switch 1503 state change (Flow-Removed, Port-status), and an error (Error). 1505 Symmetric messages are generated in either direction without 1506 solicitation. These messages allow switches and controllers to set 1507 up a connection (Hello), verify liveness (Echo), and offer 1508 additional functionality (Experimenter). 1510 B.3. Connection Overview 1512 The OpenFlow channel is used to exchange OpenFlow messages between an 1513 OpenFlow switch and an OpenFlow controller. The OpenFlow channel 1514 connection can be set up using plain TCP or TLS. By default, a switch 1515 establishes a single connection with the SDN controller. A switch may 1516 establish multiple parallel connections to a single controller 1517 (auxiliary connections) or to multiple controllers to handle controller 1518 failures and load balancing. 1520 B.4. Performance Benchmarking Tests 1522 B.4.1.
B.4. Performance Benchmarking Tests

B.4.1. Network Topology Discovery Time

   Procedure:

   Network Devices            OpenFlow             SDN
                              Controller           Application
        |                         |                     |
        |                         |                     |
        |                         |                     |
        |                         |                     |
        |                         |                     |
        |   OFPT_HELLO Exchange   |                     |
        |<----------------------->|                     |
        |                         |                     |
        |   PACKET_OUT with LLDP  |                     |
        |   to all switches       |                     |
  (Tm1) |<------------------------|                     |
        |                         |                     |
        |    PACKET_IN with LLDP  |                     |
        |    rcvd from switch-1   |                     |
        |------------------------>|                     |
        |                         |                     |
        |    PACKET_IN with LLDP  |                     |
        |    rcvd from switch-2   |                     |
        |------------------------>|                     |
        |            .            |                     |
        |            .            |                     |
        |                         |                     |
        |    PACKET_IN with LLDP  |                     |
        |    rcvd from switch-n   |                     |
  (Tmn) |------------------------>|                     |
        |                         |                     |
        |                         |                     |
        |                         |                     |
        |                         | Query the controller for
        |                         | discovered n/w topo.(Di)
        |                         |<--------------------|
        |                         |                     |
        |                         |                     |
        |                         |                     |

   Legend:

      NB: Northbound
      SB: Southbound
      OF: OpenFlow
      Tm1: Time of reception of the first LLDP message from the
           controller
      Tmn: Time of the last LLDP message sent to the controller

   Discussion:

   The Network Topology Discovery Time can be obtained by calculating
   the time difference between the first PACKET_OUT with LLDP message
   received from the controller (Tm1) and the last PACKET_IN with
   LLDP message sent to the controller (Tmn), provided that the
   discovered topology matches the deployed network topology.

B.4.2. Asynchronous Message Processing Time

   Procedure:

   Network Devices            OpenFlow             SDN
                              Controller           Application
        |                         |                     |
        | PACKET_IN with single   |                     |
        | OFP match header        |                     |
   (T0) |------------------------>|                     |
        |                         |                     |
        | PACKET_OUT with single  |                     |
        | OFP action header       |                     |
   (R0) |<------------------------|                     |
        |            .            |                     |
        |            .            |                     |
        |            .            |                     |
        |                         |                     |
        | PACKET_IN with single   |                     |
        | OFP match header        |                     |
   (Tn) |------------------------>|                     |
        |                         |                     |
        | PACKET_OUT with single  |                     |
        | OFP action header       |                     |
   (Rn) |<------------------------|                     |
        |                         |                     |
        |                         |                     |
        |                         |                     |
        |                         |                     |

   Legend:

      T0,T1, ..Tn are PACKET_IN message transmit timestamps.
      R0,R1, ..Rn are PACKET_OUT message receive timestamps.
      Nrx: Number of successful PACKET_IN/PACKET_OUT message
           exchanges

   Discussion:

   The Asynchronous Message Processing Time is obtained as
   ((R0-T0) + (R1-T1) + .. + (Rn-Tn)) / Nrx.
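   The B.4.2 computation reduces to a mean over the matched timestamp
   pairs. A minimal Python sketch, assuming the tester records the
   transmit timestamps T and receive timestamps R for the Nrx
   successful exchanges (names are illustrative):

   def async_msg_processing_time(T, R):
       # T: PACKET_IN transmit timestamps (T0..Tn)
       # R: matching PACKET_OUT receive timestamps (R0..Rn)
       if len(T) != len(R) or not T:
           raise ValueError("T and R must pair up one-to-one")
       nrx = len(R)                                 # Nrx
       return sum(r - t for t, r in zip(T, R)) / nrx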
B.4.3. Asynchronous Message Processing Rate

   Procedure:

   Network Devices            OpenFlow             SDN
                              Controller           Application
        |                         |                     |
        | PACKET_IN with single   |                     |
        | OFP match headers       |                     |
        |------------------------>|                     |
        |                         |                     |
        | PACKET_OUT with single  |                     |
        | OFP action headers      |                     |
        |<------------------------|                     |
        |            .            |                     |
        |            .            |                     |
        |            .            |                     |
        |                         |                     |
        | PACKET_IN with single   |                     |
        | OFP match headers       |                     |
        |------------------------>|                     |
        |                         |                     |
        | PACKET_OUT with single  |                     |
        | OFP action headers      |                     |
        |<------------------------|                     |
        |                         |                     |
        |                         |                     |
        |                         |                     |
        |                         |                     |
        |                         |                     |

   Note: Ntx1 on the initial trials should be greater than Nrx1, and
   the trials should be repeated until the Nrxn values for two
   consecutive trials are equal to within (+/- P%).

   Discussion:

   This test measures two benchmarks using a single procedure. 1) The
   Maximum Asynchronous Message Processing Rate is obtained by
   calculating the maximum PACKET_OUTs (Nrxn) received from the
   controller(s) across n trials. 2) The Loss-free Asynchronous
   Message Processing Rate is obtained by calculating the maximum
   PACKET_OUTs received from the controller(s) when the Loss Ratio
   equals zero. The Loss Ratio is computed as 1 - Nrxn/Ntxn.

B.4.4. Reactive Path Provisioning Time

   Procedure:

   Test Traffic    Test Traffic     Network          OpenFlow
   Generator TP1   Generator TP2    Devices          Controller
        |               |               |                 |
        |               |G-ARP (D1)     |                 |
        |               |-------------->|                 |
        |               |               |                 |
        |               |               |PACKET_IN(D1)    |
        |               |               |---------------->|
        |               |               |                 |
        |Traffic (S1,D1)|               |                 |
  (Tsf1)|------------------------------>|                 |
        |               |               |                 |
        |               |               |                 |
        |               |               |                 |
        |               |               |PACKET_IN(S1,D1) |
        |               |               |---------------->|
        |               |               |                 |
        |               |               |  FLOW_MOD(D1)   |
        |               |               |<----------------|
        |               |               |                 |
        |               |Traffic (S1,D1)|                 |
        |        (Tdf1) |<--------------|                 |
        |               |               |                 |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of the first frame sent from TP1
      Tdf1: Time of the first frame received at TP2

   Discussion:

   The Reactive Path Provisioning Time can be obtained by finding the
   time difference between the transmit and receive times of the
   traffic (Tdf1 - Tsf1).

B.4.5. Proactive Path Provisioning Time

   Procedure:

   Test Traffic    Test Traffic    Network        OpenFlow     SDN
   Generator TP1   Generator TP2   Devices        Controller   Application
        |               |              |               |            |
        |               |G-ARP (D1)    |               |            |
        |               |------------->|               |            |
        |               |              |               |            |
        |               |              |PACKET_IN(D1)  |            |
        |               |              |-------------->|            |
        |               |              |               |            |
        |Traffic (S1,D1)|              |               |            |
  (Tsf1)|----------------------------->|               |            |
        |               |              |               |            |
        |               |              |               |            |
        |               |              |               |            |
        |               |              | FLOW_MOD(D1)  |            |
        |               |              |<--------------|            |
        |               |              |               |            |
        |               |Traffic (S1,D1)               |            |
        |        (Tdf1) |<-------------|               |            |
        |               |              |               |            |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of the first frame sent from TP1
      Tdf1: Time of the first frame received at TP2

   Discussion:

   The Proactive Path Provisioning Time can be obtained by finding
   the time difference between the transmit and receive times of the
   traffic (Tdf1 - Tsf1).
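   Both B.4.4 and B.4.5 reduce to the same subtraction. A sketch,
   assuming TP1 and TP2 share a synchronized clock (without which
   Tdf1 - Tsf1 is not meaningful); the function name is illustrative:

   def path_provisioning_time(tsf1, tdf1):
       # tsf1: time of the first frame sent from TP1
       # tdf1: time of the first frame received at TP2
       if tdf1 < tsf1:
           raise ValueError("generator clocks are not synchronized")
       return tdf1 - tsf1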
B.4.6. Reactive Path Provisioning Rate

   Procedure:

   Test Traffic    Test Traffic     Network          OpenFlow
   Generator TP1   Generator TP2    Devices          Controller
        |               |               |                 |
        |               |               |                 |
        |               |               |                 |
        |               |G-ARP (D1..Dn) |                 |
        |               |-------------->|                 |
        |               |               |                 |
        |               |               |PACKET_IN(D1..Dn)|
        |               |               |---------------->|
        |               |               |                 |
        |Traffic (S1..Sn,D1..Dn)        |                 |
        |------------------------------>|                 |
        |               |               |                 |
        |               |               |PACKET_IN(S1..Sn,|
        |               |               |         D1..Dn) |
        |               |               |---------------->|
        |               |               |                 |
        |               |               |  FLOW_MOD(S1)   |
        |               |               |<----------------|
        |               |               |                 |
        |               |               |  FLOW_MOD(D1)   |
        |               |               |<----------------|
        |               |               |                 |
        |               |               |  FLOW_MOD(S2)   |
        |               |               |<----------------|
        |               |               |                 |
        |               |               |  FLOW_MOD(D2)   |
        |               |               |<----------------|
        |               |               |        .        |
        |               |               |        .        |
        |               |               |                 |
        |               |               |  FLOW_MOD(Sn)   |
        |               |               |<----------------|
        |               |               |                 |
        |               |               |  FLOW_MOD(Dn)   |
        |               |               |<----------------|
        |               |               |                 |
        |               |Traffic (S1..Sn,                 |
        |               |        D1..Dn)|                 |
        |               |<--------------|                 |
        |               |               |                 |
        |               |               |                 |

   Legend:

      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2, ...,
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2, ..., Source
              Endpoint n

   Discussion:

   The Reactive Path Provisioning Rate can be obtained by counting
   the total number of frames received at TP2 at the end of the trial
   duration.

B.4.7. Proactive Path Provisioning Rate

   Procedure:

   Test Traffic    Test Traffic    Network        OpenFlow     SDN
   Generator TP1   Generator TP2   Devices        Controller   Application
        |               |              |               |            |
        |               |G-ARP (D1..Dn)|               |            |
        |               |------------->|               |            |
        |               |              |               |            |
        |               |              |PACKET_IN(D1..Dn)           |
        |               |              |-------------->|            |
        |               |              |               |            |
        |Traffic (S1..Sn,D1..Dn)       |               |            |
  (Tsf1)|----------------------------->|               |            |
        |               |              |               |            |
        |               |              |               |     .      |
        |               |              |               |            |
        |               |              | FLOW_MOD(S1)  |            |
        |               |              |<--------------|            |
        |               |              |               |            |
        |               |              | FLOW_MOD(D1)  |            |
        |               |              |<--------------|            |
        |               |              |       .       |            |
        |               |              | FLOW_MOD(Sn)  |            |
        |               |              |<--------------|            |
        |               |              |               |            |
        |               |              | FLOW_MOD(Dn)  |            |
        |               |              |<--------------|            |
        |               |              |               |            |
        |               |Traffic (S1..Sn,              |            |
        |               |       D1..Dn)|               |            |
        |        (Tdf1) |<-------------|               |            |
        |               |              |               |            |

   Legend:

      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2, ...,
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2, ..., Source
              Endpoint n

   Discussion:

   The Proactive Path Provisioning Rate can be obtained by counting
   the total number of frames received at TP2 at the end of the trial
   duration.

B.4.8. Network Topology Change Detection Time

   Procedure:

   Network Devices            OpenFlow             SDN
                              Controller           Application
        |                         |                     |
        |                         |                     |
        |                         |                     |
    T0  | PORT_STATUS with link   |                     |
        | down from S1            |                     |
        |------------------------>|                     |
        |                         |                     |
        | First PACKET_OUT with   |                     |
        | LLDP to OF Switch       |                     |
    T1  |<------------------------|                     |
        |                         |                     |
        |                         |                     |

   Discussion:

   The Network Topology Change Detection Time can be obtained by
   finding the difference between the time the OpenFlow switch S1
   sends the PORT_STATUS message (T0) and the time that the OpenFlow
   controller sends the first topology re-discovery message (T1) to
   the OpenFlow switches, i.e., (T1 - T0).

B.5. Scalability

B.5.1. Control Sessions Capacity

   Procedure:

   Network Devices                     OpenFlow
                                       Controller
        |                                   |
        |  OFPT_HELLO Exchange for Switch 1 |
        |<--------------------------------->|
        |                                   |
        |  OFPT_HELLO Exchange for Switch 2 |
        |<--------------------------------->|
        |                 .                 |
        |                 .                 |
        |                 .                 |
        |  OFPT_HELLO Exchange for Switch n |
        |X<------------------------------->X|
        |                                   |

   Discussion:

   The number of sessions established up to Switch n-1, i.e., the
   last switch whose OFPT_HELLO exchange succeeds before the failed
   exchange with Switch n, provides the Control Sessions Capacity.
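   The procedure above can be automated along the following lines.
   This Python sketch is illustrative only: it opens concurrent
   plain-TCP OpenFlow channels and sends a bare OFPT_HELLO on each
   until session establishment fails. A real test emulator would also
   complete the FEATURES handshake and present a distinct datapath ID
   per emulated switch; the bound max_switches is an assumption of
   the example.

   import socket
   import struct

   def control_session_capacity(host, port=6653, max_switches=10000):
       hello = struct.pack("!BBHI", 0x05, 0, 8, 1)   # OFPT_HELLO
       sessions = []
       try:
           for _ in range(max_switches):
               s = socket.create_connection((host, port), timeout=5)
               s.sendall(hello)
               if len(s.recv(8)) < 8:     # no HELLO back: failure
                   s.close()
                   break
               sessions.append(s)         # keep the session open
       except OSError:
           pass                           # switch n failed to connect
       capacity = len(sessions)           # sessions up to switch n-1
       for s in sessions:                 # clean up after the trial
           s.close()
       return capacity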
B.5.2. Network Discovery Size

   Procedure:

   Network Devices            OpenFlow             SDN
                              Controller           Application
        |                         |                     |
        |                         |                     |
        |                         |                     |
        |   OFPT_HELLO Exchange   |                     |
        |<----------------------->|                     |
        |                         |                     |
        |   PACKET_OUT with LLDP  |                     |
        |   to all switches       |                     |
        |<------------------------|                     |
        |                         |                     |
        |    PACKET_IN with LLDP  |                     |
        |    rcvd from switch-1   |                     |
        |------------------------>|                     |
        |                         |                     |
        |    PACKET_IN with LLDP  |                     |
        |    rcvd from switch-2   |                     |
        |------------------------>|                     |
        |            .            |                     |
        |            .            |                     |
        |                         |                     |
        |    PACKET_IN with LLDP  |                     |
        |    rcvd from switch-n   |                     |
        |------------------------>|                     |
        |                         |                     |
        |                         |                     |
        |                         |                     |
        |                         | Query the controller for
        |                         | discovered n/w topo.(N1)
        |                         |<--------------------|
        |                         |                     |
        |                         |                     |
        |                         |                     |

   Legend:

      n/w topo: Network Topology
      OF: OpenFlow

   Discussion:

   The value of N1 provides the Network Discovery Size value. The
   trial duration can be set to the stipulated time within which the
   user expects the controller to complete the discovery process.

B.5.3. Forwarding Table Capacity

   Procedure:

   Test Traffic      Network Devices      OpenFlow          SDN
   Generator TP1                          Controller        Application
        |                  |                  |                  |
        |                  |                  |                  |
        |G-ARP (H1..Hn)    |                  |                  |
        |----------------->|                  |                  |
        |                  |                  |                  |
        |                  |PACKET_IN(D1..Dn) |                  |
        |                  |----------------->|                  |
        |                  |                  |                  |
        |                  |                  |<Wait for 5 secs> |
        |                  |                  |                  |
        |                  |                  |  FWD table query |
        |                  |                  |<-----------------|(F1)
        |                  |                  |                  |
        |                  |                  |<Wait for 5 secs> |
        |                  |                  |                  |
        |                  |                  |  FWD table query |
        |                  |                  |<-----------------|(F2)
        |                  |                  |                  |
        |                  |                  |<Wait for 5 secs> |
        |                  |                  |                  |
        |                  |                  |  FWD table query |
        |                  |                  |<-----------------|(F3)
        |                  |                  |                  |
        |                  |                  |                  |

   Legend:

      G-ARP: Gratuitous ARP
      H1..Hn: Host 1 .. Host n
      FWD: Forwarding Table

   Discussion:

   Query the controller forwarding table entries multiple times until
   three consecutive queries return the same value. The last value
   retrieved from the controller provides the Forwarding Table
   Capacity value. The query interval is user configurable; the 5
   seconds shown in this example is for representational purpose.

B.6. Security

B.6.1. Exception Handling

   Procedure:

   Test Traffic   Test Traffic     Network       OpenFlow     SDN
   Generator TP1  Generator TP2    Devices       Controller   Application
        |               |               |               |          |
        |               |G-ARP (D1..Dn) |               |          |
        |               |-------------->|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(D1..Dn)         |
        |               |               |-------------->|          |
        |               |               |               |          |
        |Traffic (S1..Sn,D1..Dn)        |               |          |
        |------------------------------>|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(S1..Sa,         |
        |               |               |        D1..Da)|          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |               |PACKET_IN(Sa+1..Sn,       |
        |               |               |    Da+1..Dn)  |          |
        |               |               |(1% incorrect OFP         |
        |               |               | Match header) |          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(D1..Dn)          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(S1..Sa)          |
        |               |               |   OFP headers |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |Traffic (S1..Sa,               |          |
        |               |        D1..Da)|               |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        |               |               |               |          |
        |               |               |               |          |
        |               |               |               |          |
        |               |               |               |          |

   Legend:

      G-ARP: Gratuitous ARP
      PACKET_IN(Sa+1..Sn,Da+1..Dn): OpenFlow PACKET_IN with wrong
           version number
      Rn1: Total number of frames received at Test Port 2 with
           1% incorrect frames
      Rn2: Total number of frames received at Test Port 2 with
           2% incorrect frames

   Discussion:

   The traffic rate sent towards the OpenFlow switch from Test Port 1
   should be 1% higher than the Path Programming Rate. Rn1 provides
   the Path Provisioning Rate of the controller when handling 1%
   incorrect frames, and Rn2 provides the Path Provisioning Rate of
   the controller when handling 2% incorrect frames.
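   The malformed messages used above are ordinary PACKET_INs whose
   OpenFlow header carries a wrong version number, per the legend. A
   sketch of how a test emulator might mark which frames to corrupt;
   the type value 10 (OFPT_PACKET_IN) is per OpenFlow 1.4, while the
   use of 0xFF as the invalid version and the helper names are
   assumptions of the example:

   import struct

   OFPT_PACKET_IN = 10              # OpenFlow 1.4 PACKET_IN type

   def packet_in_header(xid, body_len, corrupt):
       # A corrupt frame differs only in its version byte.
       version = 0xFF if corrupt else 0x05
       return struct.pack("!BBHI", version, OFPT_PACKET_IN,
                          8 + body_len, xid)

   def corruption_marks(n, percent_bad):
       # Yield (xid, corrupt?) so that percent_bad of the n frames
       # carry the invalid version (e.g., every 100th frame for 1%).
       stride = int(100 / percent_bad)
       for xid in range(n):
           yield xid, (xid % stride == stride - 1)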
Same 2106 procedure can be adopted to determine the effects on other 2107 performance tests listed in this benchmarking tests. 2109 B.6.2. Denial of Service Handling 2111 Procedure: 2113 Test Traffic Test Traffic Network Devic OpenFlow SDN 2114 Generator TP1 Generator TP2 Controller Application 2115 | | | | | 2116 | |G-ARP (D1..Dn) | | | 2117 | |------------------>| | | 2118 | | | | | 2119 | | |PACKET_IN(D1..Dn)| | 2120 | | |---------------->| | 2121 | | | | | 2122 |Traffic (S1..Sn,D1..Dn) | | | 2123 |----------------------------->| | | 2124 | | | | | 2125 | | |PACKET_IN(S1..Sn,| | 2126 | | | D1..Dn)| | 2127 | | |---------------->| | 2128 | | | | | 2129 | | |TCP SYN Attack | | 2130 | | |from a switch | | 2131 | | |---------------->| | 2132 | | | | | 2133 | | |FLOW_MOD(D1..Dn) | | 2134 | | |<----------------| | 2135 | | | | | 2136 | | | FLOW_MOD(S1..Sn)| | 2137 | | | OFP headers| | 2138 | | |<----------------| | 2139 | | | | | 2140 | |Traffic (S1..Sn, | | | 2141 | | D1..Dn)| | | 2142 | |<------------------| | | 2143 | | | | | 2144 | | | | | 2147 | | | | | 2148 | | | | | 2151 | | | | | 2153 Legend: 2155 G-ARP: Gratuitous ARP 2157 Discussion: 2159 TCP SYN attack should be launched from one of the emulated/simulated 2160 OpenFlow Switch. Rn1 provides the Path Programming Rate of 2161 controller uponhandling denial of service attack. 2163 The procedure defined above provides test steps to determine the 2164 effect of handling denial of service on Path Programming Rate. Same 2165 procedure can be adopted to determine the effects on other 2166 performance tests listed in this benchmarking tests. 2168 B.7. Reliability 2170 B.7.1. Controller Failover Time 2172 Procedure: 2174 Test Traffic Test Traffic Network Device OpenFlow SDN 2175 Generator TP1 Generator TP2 Controller Application 2176 | | | | | 2177 | |G-ARP (D1) | | | 2178 | |------------>| | | 2179 | | | | | 2180 | | |PACKET_IN(D1) | | 2181 | | |---------------->| | 2182 | | | | | 2183 |Traffic (S1..Sn,D1) | | | 2184 |-------------------------->| | | 2185 | | | | | 2186 | | | | | 2187 | | |PACKET_IN(S1,D1) | | 2188 | | |---------------->| | 2189 | | | | | 2190 | | |FLOW_MOD(D1) | | 2191 | | |<----------------| | 2192 | | |FLOW_MOD(S1) | | 2193 | | |<----------------| | 2194 | | | | | 2195 | |Traffic (S1,D1)| | | 2196 | |<------------| | | 2197 | | | | | 2198 | | |PACKET_IN(S2,D1) | | 2199 | | |---------------->| | 2200 | | | | | 2201 | | |FLOW_MOD(S2) | | 2202 | | |<----------------| | 2203 | | | | | 2204 | | |PACKET_IN(Sn-1,D1)| | 2205 | | |---------------->| | 2206 | | | | | 2207 | | |PACKET_IN(Sn,D1) | | 2208 | | |---------------->| | 2209 | | | . | | 2210 | | | . | | 2213 | | | FLOW_MOD(Sn-1) | | 2214 | | | <-X----------| | 2215 | | | | | 2216 | | |FLOW_MOD(Sn) | | 2217 | | |<----------------| | 2218 | | | | | 2219 | |Traffic (Sn,D1)| | | 2220 | |<------------| | | 2221 | | | | | 2222 | | | | | 2227 Legend: 2229 G-ARP: Gratuitous ARP. 2231 Discussion: 2233 The time difference between the last valid frame received before the 2234 traffic loss and the first frame received after the traffic loss 2235 will provide the controller failover time. 2237 If there is no frame loss during controller failover time, the 2238 controller failover time can be deemed negligible. 2240 B.7.2. 
B.7.2. Network Re-Provisioning Time

   Procedure:

   Test Traffic   Test Traffic     Network       OpenFlow     SDN
   Generator TP1  Generator TP2    Devices       Controller   Application
        |               |               |               |          |
        |               |G-ARP (D1)     |               |          |
        |               |-------------->|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(D1)  |          |
        |               |               |-------------->|          |
        |G-ARP (S1)     |               |               |          |
        |------------------------------>|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(S1)  |          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |Traffic (S1,D1,Seq.no (1..n))  |               |          |
        |------------------------------>|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(S1,D1)          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |Traffic (D1,S1,|               |          |
        |               | Seq.no (1..n))|               |          |
        |               |-------------->|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(D1,S1)          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(D1)   |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(S1)   |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |     Seq.no(1))|               |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |     Seq.no(2))|               |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(1))     |               |          |
        |<------------------------------|               |          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(2))     |               |          |
        |<------------------------------|               |          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(x))     |               |          |
        |<------------------------------|               |          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |     Seq.no(x))|               |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        |               |               |               |          |
        |               |               |<Bring down the switch in
        |               |               | the primary path>        |
        |               |               |               |          |
        |               |               |PORT_STATUS(Sa)|          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |   Seq.no(n-1))|               |          |
        |               |  X<-----------|               |          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(n-1))   |               |          |
        | X<----------------------------|               |          |
        |               |               |               |          |
        |               |               |               |          |
        |               |               |FLOW_MOD(D1)   |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(S1)   |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(n))     |               |          |
        |<------------------------------|               |          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |     Seq.no(n))|               |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        |               |               |               |          |

   Legend:

      G-ARP: Gratuitous ARP message.
      Seq.no: Sequence number.
      Sa: Neighbor switch of the switch that was brought down.

   Discussion:

   The time difference between the last valid frame received before
   the traffic loss (packet with sequence number x) and the first
   frame received after the traffic loss (packet with sequence number
   n) provides the Network Re-Provisioning Time.

   Note that the trial is valid only when the controller provisions
   the alternate path upon network failure.

Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett-Packard
   8000 Foothills Blvd
   Roseville, CA 95747

   Email: mark.tassinari@hpe.com

   Vishwas Manral
   Nano Sec
   CA

   Email: vishwas.manral@gmail.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive
   Sunnyvale, CA

   Email: sbanks@encrypted.net