Internet-Draft                               Bhuvaneswaran Vengainathan
Network Working Group                                       Anton Basil
Intended Status: Informational                     Veryx Technologies
Expires: November 25, 2018                               Mark Tassinari
                                                        Hewlett-Packard
                                                         Vishwas Manral
                                                                Nano Sec
                                                             Sarah Banks
                                                          VSS Monitoring
                                                            May 25, 2018

        Benchmarking Methodology for SDN Controller Performance
             draft-ietf-bmwg-sdn-controller-benchmark-meth-09

Abstract

   This document defines methodologies for benchmarking the control
   plane performance of SDN controllers. An SDN controller is a core
   component of the software-defined networking architecture that
   controls the behavior of the network. SDN controllers have been
   implemented with many varying designs in order to achieve their
   intended network functionality. Hence, the authors have taken the
   approach of considering an SDN controller as a black box, defining
   the methodology in a manner that is agnostic to the protocols and
   network services supported by controllers. The intent of this
   document is to provide a method to measure the performance of all
   controller implementations.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF). Note that other groups may also distribute
   working documents as Internet-Drafts. The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on November 25, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1. Introduction...................................................4
   2. Scope..........................................................4
   3. Test Setup.....................................................4
      3.1. Test setup - Controller working in Standalone Mode........5
      3.2. Test setup - Controller working in Cluster Mode...........6
   4. Test Considerations............................................7
      4.1. Network Topology..........................................7
      4.2. Test Traffic..............................................7
      4.3. Test Emulator Requirements................................7
      4.4. Connection Setup..........................................7
      4.5. Measurement Point Specification and Recommendation........8
      4.6. Connectivity Recommendation...............................8
      4.7. Test Repeatability........................................8
      4.8. Test Reporting............................................8
   5. Benchmarking Tests.............................................9
      5.1. Performance...............................................9
           5.1.1. Network Topology Discovery Time....................9
           5.1.2. Asynchronous Message Processing Time..............11
           5.1.3. Asynchronous Message Processing Rate..............12
           5.1.4. Reactive Path Provisioning Time...................15
           5.1.5. Proactive Path Provisioning Time..................16
           5.1.6. Reactive Path Provisioning Rate...................18
           5.1.7. Proactive Path Provisioning Rate..................19
           5.1.8. Network Topology Change Detection Time............21
      5.2. Scalability..............................................22
           5.2.1. Control Session Capacity..........................22
           5.2.2. Network Discovery Size............................23
           5.2.3. Forwarding Table Capacity.........................24
      5.3. Security.................................................26
           5.3.1. Exception Handling................................26
           5.3.2. Denial of Service Handling........................27
      5.4. Reliability..............................................29
           5.4.1. Controller Failover Time..........................29
           5.4.2. Network Re-Provisioning Time......................30
   6. References....................................................32
      6.1. Normative References.....................................32
      6.2. Informative References...................................32
   7. IANA Considerations...........................................32
   8. Security Considerations.......................................32
   9. Acknowledgments...............................................33
   Appendix A Benchmarking Methodology using OpenFlow Controllers...34
      A.1. Protocol Overview........................................34
      A.2. Messages Overview........................................34
      A.3. Connection Overview......................................34
      A.4. Performance Benchmarking Tests...........................35
           A.4.1. Network Topology Discovery Time...................35
           A.4.2. Asynchronous Message Processing Time..............36
           A.4.3. Asynchronous Message Processing Rate..............37
           A.4.4. Reactive Path Provisioning Time...................38
           A.4.5. Proactive Path Provisioning Time..................39
           A.4.6. Reactive Path Provisioning Rate...................40
           A.4.7. Proactive Path Provisioning Rate..................41
           A.4.8. Network Topology Change Detection Time............42
      A.5. Scalability..............................................43
           A.5.1. Control Sessions Capacity.........................43
           A.5.2. Network Discovery Size............................43
           A.5.3. Forwarding Table Capacity.........................44
      A.6. Security.................................................46
           A.6.1. Exception Handling................................46
           A.6.2. Denial of Service Handling........................47
      A.7. Reliability..............................................49
           A.7.1. Controller Failover Time..........................49
           A.7.2. Network Re-Provisioning Time......................50
   Authors' Addresses...............................................53

1. Introduction

   This document provides generic methodologies for benchmarking SDN
   controller performance. An SDN controller may support many
   northbound and southbound protocols, implement a wide range of
   applications, and work alone, or as a group, to achieve the desired
   functionality. This document considers an SDN controller as a black
   box, regardless of design and implementation. The tests defined in
   the document can be used to benchmark an SDN controller for
   performance, scalability, reliability and security, independent of
   northbound and southbound protocols. Terminology related to
   benchmarking SDN controllers is described in the companion
   terminology document [I-D.sdn-controller-benchmark-term]. These
   tests can be performed on an SDN controller running as a virtual
   machine (VM) instance or on a bare metal server. This document is
   intended for those who want to measure SDN controller performance
   as well as to compare the performance of various SDN controllers.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

2. Scope

   This document defines a methodology to measure the networking
   metrics of SDN controllers. For the purpose of this memo, the SDN
   controller is a function that manages and controls Network Devices.
   Any SDN controller without a control capability is out of scope for
   this memo. The tests defined in this document enable benchmarking of
   SDN Controllers in two ways: as a standalone controller and as a
   cluster of homogeneous controllers. These tests are recommended for
   execution in lab environments rather than in live network
   deployments. Performance benchmarking of a federation of
   controllers (i.e., a set of SDN controllers managing different
   domains) is beyond the scope of this document.

3. Test Setup

   The tests defined in this document enable measurement of an SDN
   controller's performance in standalone mode and cluster mode. This
   section defines common reference topologies that are later referred
   to in individual tests.
3.1. Test setup - Controller working in Standalone Mode

   +-----------------------------------------------------------+
   |             Application Plane Test Emulator               |
   |                                                           |
   |      +-----------------+     +-------------+              |
   |      |   Application   |     |   Service   |              |
   |      +-----------------+     +-------------+              |
   |                                                           |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 | (Northbound interfaces)
                 +-------------------------------+
                 |       +----------------+      |
                 |       | SDN Controller |      |
                 |       +----------------+      |
                 |                               |
                 |    Device Under Test (DUT)    |
                 +-------------------------------+
                                 | (Southbound interfaces)
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                           |
   |      +-----------+     +-----------+                      |
   |      |  Network  |     |  Network  |                      |
   |      | Device 2  |--..-| Device n-1|                      |
   |      +-----------+     +-----------+                      |
   |           /    \           /    \                         |
   |          /      \         /      \                        |
   |      l0 /        X       /        \ ln                    |
   |        /        / \     /          \                      |
   |  +-----------+ /   \ +-----------+                        |
   |  |  Network  |/     \|  Network  |                        |
   |  | Device 1  |..     | Device n  |                        |
   |  +-----------+       +-----------+                        |
   |        |                   |                              |
   |  +---------------+   +---------------+                    |
   |  | Test Traffic  |   | Test Traffic  |                    |
   |  |  Generator    |   |  Generator    |                    |
   |  |    (TP1)      |   |    (TP2)      |                    |
   |  +---------------+   +---------------+                    |
   |                                                           |
   |            Forwarding Plane Test Emulator                 |
   +-----------------------------------------------------------+

                            Figure 1

3.2. Test setup - Controller working in Cluster Mode

   +-----------------------------------------------------------+
   |             Application Plane Test Emulator               |
   |                                                           |
   |      +-----------------+     +-------------+              |
   |      |   Application   |     |   Service   |              |
   |      +-----------------+     +-------------+              |
   |                                                           |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 | (Northbound interfaces)
   +---------------------------------------------------------+
   |                                                         |
   |  ------------------             ------------------      |
   | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
   |  ------------------             ------------------      |
   |                                                         |
   |               Device Under Test (DUT)                   |
   +---------------------------------------------------------+
                                 | (Southbound interfaces)
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                           |
   |      +-----------+     +-----------+                      |
   |      |  Network  |     |  Network  |                      |
   |      | Device 2  |--..-| Device n-1|                      |
   |      +-----------+     +-----------+                      |
   |           /    \           /    \                         |
   |          /      \         /      \                        |
   |      l0 /        X       /        \ ln                    |
   |        /        / \     /          \                      |
   |  +-----------+ /   \ +-----------+                        |
   |  |  Network  |/     \|  Network  |                        |
   |  | Device 1  |..     | Device n  |                        |
   |  +-----------+       +-----------+                        |
   |        |                   |                              |
   |  +---------------+   +---------------+                    |
   |  | Test Traffic  |   | Test Traffic  |                    |
   |  |  Generator    |   |  Generator    |                    |
   |  |    (TP1)      |   |    (TP2)      |                    |
   |  +---------------+   +---------------+                    |
   |                                                           |
   |            Forwarding Plane Test Emulator                 |
   +-----------------------------------------------------------+

                            Figure 2
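   The reference topologies above can be emulated with common network
   emulators. The following Python sketch builds a small leaf-spine
   fabric with one host per leaf (standing in for TP1 and TP2),
   assuming the Mininet emulator is used as the forwarding plane test
   emulator and the controller under test listens on 127.0.0.1:6653;
   these addresses, names, and sizes are illustrative assumptions, not
   requirements of this methodology.

      #!/usr/bin/env python
      # Minimal leaf-spine sketch for the Figure 1/2 forwarding plane,
      # assuming Mininet; adjust leaves/spines per Section 4.1.
      from mininet.net import Mininet
      from mininet.node import RemoteController
      from mininet.topo import Topo

      class LeafSpine(Topo):
          def build(self, leaves=2, spines=2):
              spine_sw = [self.addSwitch('s%d' % i)
                          for i in range(1, spines + 1)]
              for j in range(1, leaves + 1):
                  leaf = self.addSwitch('l%d' % j)
                  for sp in spine_sw:
                      self.addLink(leaf, sp)   # full leaf-spine mesh
                  host = self.addHost('h%d' % j)  # traffic endpoint (TPx)
                  self.addLink(host, leaf)

      if __name__ == '__main__':
          net = Mininet(topo=LeafSpine(),
                        controller=lambda name: RemoteController(
                            name, ip='127.0.0.1', port=6653))
          net.start()

   Note that a leaf-spine fabric contains redundant paths, so the loop
   prevention caveat in Section 4.1 applies to any topology built this
   way.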
4. Test Considerations

4.1. Network Topology

   The test cases SHOULD use a Leaf-Spine topology with at least 2
   Network Devices in the topology for benchmarking. The test traffic
   generators TP1 and TP2 SHOULD be connected to the leaf Network
   Device 1 and the leaf Network Device n. To achieve a complete
   performance characterization of the SDN controller, it is
   recommended that the controller be benchmarked for many network
   topologies and a varying number of Network Devices. Further, care
   should be taken to make sure that a loop prevention mechanism is
   enabled either in the SDN controller or in the network when the
   topology contains redundant network paths.

4.2. Test Traffic

   Test traffic is used to notify the controller about the asynchronous
   arrival of new flows. The test cases SHOULD use frame sizes of 128,
   512 and 1508 bytes for benchmarking. Tests using jumbo frames are
   optional.

4.3. Test Emulator Requirements

   The Test Emulator SHOULD time stamp the control messages transmitted
   to and received from the controller on the established network
   connections. The test cases use these values to compute the
   controller processing time.

4.4. Connection Setup

   There may be controller implementations that support unencrypted and
   encrypted network connections with Network Devices. Further, the
   controller may have backward compatibility with Network Devices
   running older versions of southbound protocols. It may be useful to
   measure the controller performance with one or more applicable
   connection setup methods defined below. For cases with encrypted
   communications between the controller and the switch, key management
   and key exchange MUST take place before any performance or benchmark
   measurements.

   1. Unencrypted connection with Network Devices, running the same
      protocol version.
   2. Unencrypted connection with Network Devices, running different
      protocol versions.
      Example:
      a. Controller running the current protocol version and switch
         running an older protocol version
      b. Controller running an older protocol version and switch
         running the current protocol version
   3. Encrypted connection with Network Devices, running the same
      protocol version.
   4. Encrypted connection with Network Devices, running different
      protocol versions.
      Example:
      a. Controller running the current protocol version and switch
         running an older protocol version
      b. Controller running an older protocol version and switch
         running the current protocol version

4.5. Measurement Point Specification and Recommendation

   The measurement accuracy depends on several factors, including the
   point of observation where the indications are captured. For
   example, the notification can be observed at the controller or at
   the test emulator. The test operator SHOULD make the observations/
   measurements at the interfaces of the test emulator, unless
   explicitly mentioned otherwise in the individual test. In any case,
   the locations of the measurement points MUST be reported.

4.6. Connectivity Recommendation

   The SDN controller in the test setup SHOULD be connected directly
   with the forwarding and the management plane test emulators to avoid
   any delays or failures introduced by intermediate devices during
   benchmarking tests. When the controller is implemented as a virtual
   machine, details of the physical and logical connectivity MUST be
   reported.

4.7. Test Repeatability

   To increase confidence in the measured results, it is RECOMMENDED
   that each test be repeated a minimum of 10 times.
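   The repetition recommendation above pairs naturally with the
   average and sample-variance formulas used throughout Section 5. As
   a minimal sketch, assuming run_trial() is a placeholder for any
   individual test that returns one measured value per trial:

      # Repeat a trial at least 10 times and summarize the results
      # with the same statistics the Section 5 formulas use: the mean
      # and the sample variance (denominator "Total Trials - 1").
      import statistics

      def benchmark(run_trial, trials=10):
          results = [run_trial() for _ in range(trials)]
          return {
              "trials": trials,
              "average": statistics.mean(results),
              "variance": statistics.variance(results),  # n - 1
          }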
4.8. Test Reporting

   Each test has a reporting format that contains some global and
   identical reporting components, and some individual components that
   are specific to individual tests. The following test configuration
   parameters and controller settings parameters MUST be reflected in
   the test report.

   Test Configuration Parameters:

      1. Controller name and version
      2. Northbound protocols and versions
      3. Southbound protocols and versions
      4. Controller redundancy mode (Standalone or Cluster Mode)
      5. Connection setup (Unencrypted or Encrypted)
      6. Network Device Type (Physical, Virtual or Emulated)
      7. Number of Nodes
      8. Number of Links
      9. Dataplane Test Traffic Type
      10. Controller System Configuration (e.g., Physical or Virtual
          Machine, CPU, Memory, Caches, Operating System, Interface
          Speed, Storage)
      11. Reference Test Setup (e.g., Section 3.1)

   Controller Settings Parameters:

      1. Topology re-discovery timeout
      2. Controller redundancy mode (e.g., active-standby)
      3. Controller state persistence enabled/disabled

   To ensure the repeatability of the tests, the following capabilities
   of the test emulator SHOULD be reported:

      1. Maximum number of Network Devices that the forwarding plane
         emulates
      2. Control message processing time (e.g., Topology Discovery
         Messages)

   One way to determine the above two values is to simulate the
   required control sessions and messages from the control plane.

5. Benchmarking Tests

5.1. Performance

5.1.1. Network Topology Discovery Time

   Objective:

   The time taken by the controller(s) to determine the complete
   network topology, defined as the interval starting with the first
   discovery message from the controller(s) at its Southbound
   interface, ending with all features of the static topology
   determined.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST support network discovery.
   2. The tester should be able to retrieve the discovered topology
      information either through the controller's management interface
      or its northbound interface, to determine if the discovery was
      successful and complete.
   3. Ensure that the controller's topology re-discovery timeout has
      been set to the maximum value, to avoid initiation of the
      re-discovery process in the middle of the test.

   Procedure:

   1. Ensure that the controller is operational and that its network
      applications, northbound and southbound interfaces are up and
      running.
   2. Establish the network connections between the controller and the
      Network Devices.
   3. Record the time of the first discovery message (Tm1) received
      from the controller at the forwarding plane test emulator
      interface I1.
   4. Query the controller every t seconds (RECOMMENDED value for t is
      3) to obtain the discovered network topology information through
      the northbound interface or the management interface, and
      compare it with the deployed network topology information.
   5. Stop the trial when the discovered topology information matches
      the deployed network topology, or when the discovered topology
      information returns the same details for 3 consecutive queries.
   6. Record the time of the last discovery message (Tmn) sent to the
      controller from the forwarding plane test emulator interface (I1)
      when the trial completes successfully (e.g., when the topology
      matches).

   Measurement:

   Topology Discovery Time (Tr1) = Tmn - Tm1

                                             Tr1 + Tr2 + Tr3 .. Trn
   Average Topology Discovery Time (TDm) = -------------------------
                                                  Total Trials

                                             SUM[SQUAREOF(Tri - TDm)]
   Topology Discovery Time Variance (TDv) = ------------------------
                                                 Total Trials - 1

   Reporting Format:

   The Topology Discovery Time results MUST be reported in the format
   of a table, with a row for each successful iteration. The last row
   of the table indicates the Topology Discovery Time variance, and
   the previous row indicates the average Topology Discovery Time.

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the number of nodes (N), and the
   Y coordinate SHOULD be the average Topology Discovery Time.
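   The polling loop in steps 4 and 5 of the procedure above can be
   automated. The following Python sketch assumes a hypothetical
   get_topology() helper that returns the topology currently reported
   by the controller's northbound or management interface (e.g., as a
   set of link tuples); it illustrates the stop conditions only and is
   not a normative part of the methodology.

      # Poll every t seconds until the discovered topology matches the
      # deployed one, or until 3 consecutive queries return identical
      # (but incomplete) results.
      import time

      def wait_for_discovery(get_topology, deployed, t=3, max_wait=900):
          history = []
          deadline = time.monotonic() + max_wait
          while time.monotonic() < deadline:
              discovered = get_topology()          # hypothetical hook
              history.append(discovered)
              if discovered == deployed:
                  return True, discovered          # successful trial
              if len(history) >= 3 and \
                      history[-1] == history[-2] == history[-3]:
                  return False, discovered         # converged, incomplete
              time.sleep(t)
          return False, None

   Tm1 and Tmn themselves are still taken from the control messages
   observed at interface I1, per Section 4.5.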
5.1.2. Asynchronous Message Processing Time

   Objective:

   The time taken by the controller(s) to process an asynchronous
   message, defined as the interval starting with an asynchronous
   message from a network device after the discovery of all the
   devices by the controller(s), ending with a response message from
   the controller(s) at its Southbound interface.

   Reference Test Setup:

   This test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST have successfully completed the network
      topology discovery for the connected Network Devices.

   Procedure:

   1. Generate asynchronous messages from every connected Network
      Device to the SDN controller, one at a time in series from the
      forwarding plane test emulator, for the trial duration.
   2. Record every request transmit time (T1) and the corresponding
      response receive time (R1) at the forwarding plane test emulator
      interface (I1) for every successful message exchange.

   Measurement:

                                                    SUM{Ri} - SUM{Ti}
   Asynchronous Message Processing Time (Tr1) = ---------------------
                                                         Nrx

   Where Nrx is the total number of successful messages exchanged.

                                                  Tr1 + Tr2 + Tr3 .. Trn
   Average Asynchronous Message Processing Time = ----------------------
                                                       Total Trials

   Asynchronous Message Processing Time Variance (TAMv) =

           SUM[SQUAREOF(Tri - TAMm)]
           -------------------------
               Total Trials - 1

   Where TAMm is the Average Asynchronous Message Processing Time.

   Reporting Format:

   The Asynchronous Message Processing Time results MUST be reported
   in the format of a table with a row for each iteration. The last
   row of the table indicates the Asynchronous Message Processing Time
   variance, and the previous row indicates the average Asynchronous
   Message Processing Time.

   The report SHOULD capture the following information in addition to
   the configuration parameters captured in section 4.8:

   - Successful messages exchanged (Nrx)

   - Percentage of unsuccessful messages exchanged, computed using the
     formula (1 - Nrx/Ntx) * 100, where Ntx is the total number of
     messages transmitted to the controller.

   If this test is repeated with a varying number of nodes with the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the number of nodes (N), and the
   Y coordinate SHOULD be the average Asynchronous Message Processing
   Time.
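   As a minimal sketch of the measurement above: given the transmit
   timestamps Ti of the asynchronous messages and the receive
   timestamps Ri of the matching responses (both taken at test
   emulator interface I1, per Section 4.3), Tr1 follows directly from
   the defining formula:

      # Compute Tr1 = (SUM{Ri} - SUM{Ti}) / Nrx for one trial.
      # tx_times/rx_times hold timestamps for the successful exchanges
      # only, index-aligned so rx_times[i] answers tx_times[i].
      def async_processing_time(tx_times, rx_times):
          nrx = len(rx_times)
          if nrx == 0 or nrx != len(tx_times):
              raise ValueError("need one response timestamp per request")
          return (sum(rx_times) - sum(tx_times)) / nrx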
for which 545 the controller(s) performed processing and replied with a valid and 546 productive (non-trivial) response message 548 This test will measure two benchmarks on Asynchronous Message 549 Processing Rate using a single procedure. The two benchmarks are 550 (see section 2.3.1.3 of [I-D.sdn-controller-benchmark-term]): 552 1. Loss-free Asynchronous Message Processing Rate 554 2. Maximum Asynchronous Message Processing Rate 556 Here two benchmarks are determined through a series of trials where 557 the number of messages are sent to the controller(s), and the 558 responses from the controller(s) are counted over the trial 559 duration. The message response rate and the message loss ratio are 560 calculated for each trial. 562 Reference Test Setup: 564 The test SHOULD use one of the test setups described in section 3.1 565 or section 3.2 of this document. 567 Prerequisite: 569 1. The controller(s) MUST have successfully completed the network 570 topology discovery for the connected Network Devices. 571 2. Choose and record the Trial Duration (Td), the sending rate step- 572 size (STEP), the tolerance on equality for two consecutive trials 573 (P%),and the maximum possible message sending rate (Ntx1/Td). 575 Procedure: 577 1. Generate asynchronous messages continuously at the maximum 578 possible rate on the established connections from all the 579 emulated/simulated Network Devices for the given trial Duration 580 (Td). 581 2. Record the total number of responses received from the controller 582 (Nrx1) as well as the number of messages sent (Ntx1) to the 583 controller within the trial duration (Td). 584 3. Calculate the Asynchronous Message Processing Rate (Tr1) and the 585 Message Loss Ratio (Lr1). Ensure that the controller(s) have 586 returned to normal operation. 587 4. Repeat the trial by reducing the asynchronous message sending rate 588 used in last trial by the STEP size. 589 5. Continue repeating the trials and reducing the sending rate until 590 both the maximum value of Nrxn (number of responses received from 591 the controller) and the Nrxn corresponding to zero loss ratio have 592 been found. 593 6. The trials corresponding to the benchmark levels MUST be repeated 594 using the same asynchronous message rates until the responses 595 received from the controller are equal (+/-P%) for two consecutive 596 trials. 597 7. Record the number of responses received from the controller (Nrxn) 598 as well as the number of messages sent (Ntxn) to the controller in 599 the last trial. 601 Measurement: 603 Nrxn 604 Asynchronous Message Processing Rate Trn = ----- 605 Td 607 Maximum Asynchronous Message Processing Rate = MAX(Trn) for all n 609 Nrxn 610 Asynchronous Message Loss Ratio Lrn = 1 - ----- 611 Ntxn 613 Loss-free Asynchronous Message Processing Rate = MAX(Trn) given 614 Lrn=0 616 Reporting Format: 618 The Asynchronous Message Processing Rate results MUST be reported in 619 the format of a table with a row for each trial. 621 The table should report the following information in addition to the 622 configuration parameters captured in section 4.8, with columns: 624 - Offered rate (Ntxn/Td) 626 - Asynchronous Message Processing Rate (Trn) 628 - Loss Ratio (Lr) 630 - Benchmark at this iteration (blank for none, Maximum, Loss-Free) 632 The results MAY be presented in the form of a graph. The X axis 633 SHOULD be the Offered rate, and dual Y axes would represent 634 Asynchronous Message Processing Rate and Loss Ratio, respectively. 
5.1.4. Reactive Path Provisioning Time

   Objective:

   The time taken by the controller to set up a path reactively
   between source and destination nodes, defined as the interval
   starting with the first flow provisioning request message received
   by the controller(s) at its Southbound interface, ending with the
   last flow provisioning response message sent from the controller(s)
   at its Southbound interface.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document. The number of Network Devices in
   the path is a parameter of the test that may be varied from 2 to
   the maximum discovery size in repetitions of this test.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge about the location of the
      destination endpoint for which the path has to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.
   3. Ensure that the default action for 'flow miss' in the Network
      Device is configured to 'send to controller'.
   4. Ensure that each Network Device in a path requires the controller
      to make the forwarding decision while paving the entire path.

   Procedure:

   1. Send a single traffic stream from the test traffic generator TP1
      to the test traffic generator TP2.
   2. Record the time of the first flow provisioning request message
      sent to the controller (Tsf1) from the Network Device at the
      forwarding plane test emulator interface (I1).
   3. Wait for the arrival of the first traffic frame at the Traffic
      Endpoint TP2 or the expiry of the trial duration (Td).
   4. Record the time of the last flow provisioning response message
      received from the controller (Tdf1) to the Network Device at the
      forwarding plane test emulator interface (I1).

   Measurement:

   Reactive Path Provisioning Time (Tr1) = Tdf1 - Tsf1

                                               Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Time = ------------------------
                                                    Total Trials

                                                 SUM[SQUAREOF(Tri - TRPm)]
   Reactive Path Provisioning Time Variance (TRPv) = ---------------------
                                                        Total Trials - 1

   Where TRPm is the Average Reactive Path Provisioning Time.

   Reporting Format:

   The Reactive Path Provisioning Time results MUST be reported in the
   format of a table with a row for each iteration. The last row of
   the table indicates the Reactive Path Provisioning Time variance,
   and the previous row indicates the Average Reactive Path
   Provisioning Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 4.8:

   - Number of Network Devices in the path
5.1.5. Proactive Path Provisioning Time

   Objective:

   The time taken by the controller to set up a path proactively
   between source and destination nodes, defined as the interval
   starting with the first proactive flow provisioned in the
   controller(s) at its Northbound interface, ending with the last
   flow provisioning response message sent from the controller(s) at
   its Southbound interface.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge about the location of the
      destination endpoint for which the path has to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.
   3. Ensure that the default action for flow miss in the Network
      Device is 'drop'.

   Procedure:

   1. Send a single traffic stream from test traffic generator TP1 to
      TP2.
   2. Install the flow entries to reach from test traffic generator
      TP1 to test traffic generator TP2 through the controller's
      northbound or management interface.
   3. Wait for the arrival of the first traffic frame at test traffic
      generator TP2 or the expiry of the trial duration (Td).
   4. Record the time when the proactive flow is provisioned in the
      Controller (Tsf1) at the management plane test emulator
      interface I2.
   5. Record the time of the last flow provisioning message received
      from the controller (Tdf1) at the forwarding plane test emulator
      interface I1.

   Measurement:

   Proactive Flow Provisioning Time (Tr1) = Tdf1 - Tsf1

                                                Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Time = ------------------------
                                                     Total Trials

                                                  SUM[SQUAREOF(Tri - TPPm)]
   Proactive Path Provisioning Time Variance (TPPv) = --------------------
                                                        Total Trials - 1

   Where TPPm is the Average Proactive Path Provisioning Time.

   Reporting Format:

   The Proactive Path Provisioning Time results MUST be reported in
   the format of a table with a row for each iteration. The last row
   of the table indicates the Proactive Path Provisioning Time
   variance, and the previous row indicates the Average Proactive Path
   Provisioning Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 4.8:

   - Number of Network Devices in the path
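   Step 2 of the procedure above (and of Section 5.1.7) installs flows
   through the controller's northbound interface. The following sketch
   shows one way such an installation might look; the URL, JSON body,
   and field names are assumptions made purely for illustration, since
   every controller (OpenDaylight, ONOS, etc.) exposes its own
   flow-programming API.

      # Hypothetical northbound flow installation; Tsf1 is taken at
      # interface I2 when the request is issued.
      import time
      import requests

      def install_flow(controller, device_id, src, dst, out_port,
                       auth=None):
          flow = {                       # illustrative schema only
              "deviceId": device_id,
              "match": {"ethSrc": src, "ethDst": dst},
              "action": {"output": out_port},
          }
          t_install = time.monotonic()
          resp = requests.post("https://%s/api/flows" % controller,
                               json=flow, auth=auth, timeout=5)
          resp.raise_for_status()        # count only successful responses
          return t_install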
5.1.6. Reactive Path Provisioning Rate

   Objective:

   The maximum number of independent paths a controller can
   concurrently establish per second between source and destination
   nodes reactively, defined as the number of paths provisioned per
   second by the controller(s) at its Southbound interface for the
   flow provisioning requests received for path provisioning at its
   Southbound interface between the start of the test and the expiry
   of the given trial duration.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge about the location of the
      destination addresses for which the paths have to be
      provisioned. This can be achieved through dynamic learning or
      static provisioning.
   3. Ensure that the default action for 'flow miss' in the Network
      Device is configured to 'send to controller'.
   4. Ensure that each Network Device in a path requires the controller
      to make the forwarding decision while provisioning the entire
      path.

   Procedure:

   1. Send traffic with unique source and destination addresses from
      test traffic generator TP1.
   2. Record the total number of unique traffic frames (Ndf) received
      at test traffic generator TP2 within the trial duration (Td).

   Measurement:

                                            Ndf
   Reactive Path Provisioning Rate (Tr1) = -----
                                             Td

                                               Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Rate = ------------------------
                                                    Total Trials

                                                 SUM[SQUAREOF(Tri - RPPm)]
   Reactive Path Provisioning Rate Variance (RPPv) = --------------------
                                                        Total Trials - 1

   Where RPPm is the Average Reactive Path Provisioning Rate.

   Reporting Format:

   The Reactive Path Provisioning Rate results MUST be reported in the
   format of a table with a row for each iteration. The last row of
   the table indicates the Reactive Path Provisioning Rate variance,
   and the previous row indicates the Average Reactive Path
   Provisioning Rate.

   The report should capture the following information in addition to
   the configuration parameters captured in section 4.8:

   - Number of Network Devices in the path

   - Offered rate

5.1.7. Proactive Path Provisioning Rate

   Objective:

   Measure the maximum number of independent paths a controller can
   concurrently establish per second between source and destination
   nodes proactively, defined as the number of paths provisioned per
   second by the controller(s) at its Southbound interface for the
   paths requested at its Northbound interface between the start of
   the test and the expiry of the given trial duration. The
   measurement is based on dataplane observations of successful path
   activation.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.

   2. The controller should have knowledge about the location of the
      destination addresses for which the paths have to be
      provisioned. This can be achieved through dynamic learning or
      static provisioning.

   3. Ensure that the default action for flow miss in the Network
      Device is 'drop'.

   Procedure:

   1. Send traffic continuously with unique source and destination
      addresses from test traffic generator TP1.

   2. Install corresponding flow entries to reach from the simulated
      sources at test traffic generator TP1 to the simulated
      destinations at test traffic generator TP2 through the
      controller's northbound or management interface.

   3. Record the total number of unique traffic frames received (Ndf)
      at test traffic generator TP2 within the trial duration (Td).

   Measurement:

                                             Ndf
   Proactive Path Provisioning Rate (Tr1) = -----
                                              Td

                                                Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Rate = ------------------------
                                                     Total Trials

                                                  SUM[SQUAREOF(Tri - PPPm)]
   Proactive Path Provisioning Rate Variance (PPPv) = --------------------
                                                        Total Trials - 1

   Where PPPm is the Average Proactive Path Provisioning Rate.

   Reporting Format:

   The Proactive Path Provisioning Rate results MUST be reported in
   the format of a table with a row for each iteration. The last row
   of the table indicates the Proactive Path Provisioning Rate
   variance, and the previous row indicates the Average Proactive Path
   Provisioning Rate.

   The report should capture the following information in addition to
   the configuration parameters captured in section 4.8:

   - Number of Network Devices in the path

   - Offered rate
5.1.8. Network Topology Change Detection Time

   Objective:

   The amount of time required for the controller to detect any
   changes in the network topology, defined as the interval starting
   with the notification message received by the controller(s) at its
   Southbound interface, ending with the first topology rediscovery
   message sent from the controller(s) at its Southbound interface.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST have successfully discovered the network
      topology information for the deployed network topology.

   2. The periodic network discovery operation should be configured to
      twice the Trial duration (Td) value.

   Procedure:

   1. Trigger a topology change event by bringing down an active
      Network Device in the topology.

   2. Record the time when the first topology change notification is
      sent to the controller (Tcn) at the forwarding plane test
      emulator interface (I1).

   3. Stop the trial when the controller sends the first topology
      re-discovery message to the Network Device, or upon the expiry
      of the trial duration (Td).

   4. Record the time when the first topology re-discovery message is
      received from the controller (Tcd) at the forwarding plane test
      emulator interface (I1).

   Measurement:

   Network Topology Change Detection Time (Tr1) = Tcd - Tcn

                                                   Tr1 + Tr2 + Tr3 .. Trn
   Average Network Topology Change Detection Time = ---------------------
                                                         Total Trials

   Network Topology Change Detection Time Variance (NTDv) =

           SUM[SQUAREOF(Tri - NTDm)]
           -------------------------
               Total Trials - 1

   Where NTDm is the Average Network Topology Change Detection Time.

   Reporting Format:

   The Network Topology Change Detection Time results MUST be reported
   in the format of a table with a row for each iteration. The last
   row of the table indicates the Network Topology Change Detection
   Time variance, and the previous row indicates the average Network
   Topology Change Detection Time.

5.2. Scalability

5.2.1. Control Session Capacity

   Objective:

   Measure the maximum number of control sessions the controller can
   maintain, defined as the number of sessions that the controller can
   accept from network devices, starting with the first control
   session, ending with the last control session that the
   controller(s) accepts at its Southbound interface.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Procedure:

   1. Establish control connections with the controller from every
      Network Device emulated in the forwarding plane test emulator.
   2. Stop the trial when the controller starts dropping the control
      connections.
   3. Record the number of successful connections established with the
      controller (CCn) at the forwarding plane test emulator.

   Measurement:

   Control Sessions Capacity = CCn

   Reporting Format:

   The Control Session Capacity results MUST be reported in addition
   to the configuration parameters captured in section 4.8.
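   A minimal sketch of the Control Session Capacity procedure follows:
   open control connections one at a time until an attempt fails, then
   report the count (CCn). It uses plain TCP to the conventional
   OpenFlow port 6653 as an assumption; a complete emulation would
   also perform the southbound handshake (e.g., OFPT_HELLO and
   keepalives) on each session so that the controller counts it as a
   live device.

      import socket

      def control_session_capacity(controller_ip, port=6653,
                                   limit=100000):
          sessions = []
          try:
              for _ in range(limit):
                  s = socket.create_connection((controller_ip, port),
                                               timeout=5)
                  sessions.append(s)       # keep each session open
          except OSError:
              pass                         # controller refused/dropped
          ccn = len(sessions)
          for s in sessions:
              s.close()
          return ccn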
5.2.2. Network Discovery Size

   Objective:

   Measure the network size (number of nodes, links and hosts) that a
   controller can discover, defined as the size of a network that the
   controller(s) can discover, starting from a network topology given
   by the user for discovery, ending with the topology that the
   controller(s) could successfully discover.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST support automatic network discovery.

   2. The tester should be able to retrieve the discovered topology
      information either through the controller's management interface
      or its northbound interface.

   Procedure:

   1. Establish the network connections between the controller and the
      network nodes.
   2. Query the controller every t seconds (RECOMMENDED value for t is
      30) to obtain the discovered network topology information
      through the northbound interface or the management interface.
   3. Stop the trial when the discovered network topology information
      remains the same as that of the last two query responses.
   4. Compare the obtained network topology information with the
      deployed network topology information.
   5. If the comparison is successful, increase the number of nodes by
      1 and repeat the trial.
      If the comparison is unsuccessful, decrease the number of nodes
      by 1 and repeat the trial.
   6. Continue the trial until the comparison of step 5 is successful.
   7. Record the number of nodes for the last trial run (Ns) where the
      topology comparison was successful.

   Measurement:

   Network Discovery Size = Ns

   Reporting Format:

   The Network Discovery Size results MUST be reported in addition to
   the configuration parameters captured in section 4.8.
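   The grow/shrink search in steps 5-7 above can be sketched as
   follows, assuming two hypothetical hooks: deploy(n), which deploys
   an n-node topology, and discovered_ok(n), which runs one discovery
   trial (steps 2-4) and reports whether the discovered topology
   matches the deployed one.

      # Grow the network by one node after each successful discovery,
      # shrink by one after a failure, and return the largest size
      # that was discovered correctly (Ns).
      def network_discovery_size(deploy, discovered_ok, start_n):
          n, best = start_n, None
          while n > 0:
              deploy(n)                    # hypothetical hook
              if discovered_ok(n):         # hypothetical hook
                  best = n                 # largest success so far
                  n += 1                   # step 5: add a node
              elif best is not None:
                  return best              # first failure above Ns
              else:
                  n -= 1                   # step 5: remove a node
          return best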
5.2.3. Forwarding Table Capacity

   Objective:

   Measure the maximum number of flow entries a controller can manage
   in its Forwarding table.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller's Forwarding table should be empty.
   2. Flow idle time MUST be set to a high or infinite value.
   3. The controller MUST have successfully completed network topology
      discovery.
   4. The tester should be able to retrieve the forwarding table
      information either through the controller's management interface
      or its northbound interface.

   Procedure:

   Reactive Flow Provisioning Mode:

   1. Send bi-directional traffic continuously with unique source and
      destination addresses from test traffic generators TP1 and TP2
      at the asynchronous message processing rate of the controller.
   2. Query the controller at a regular interval (e.g., 5 seconds) for
      the number of learned flow entries from its northbound
      interface.
   3. Stop the trial when the retrieved value is constant for three
      consecutive iterations, and record the value received from the
      last query (Nrp).

   Proactive Flow Provisioning Mode:

   1. Install unique flows continuously through the controller's
      northbound or management interface until a failure response is
      received from the controller.
   2. Record the total number of successful responses (Nrp).

   Note:

   Some controller designs for proactive flow provisioning mode may
   require the switch to send flow setup requests in order to generate
   flow setup responses. In such cases, it is recommended to generate
   bi-directional traffic for the provisioned flows.

   Measurement:

   Proactive Flow Provisioning Mode:

   Max Flow Entries = Total number of flows provisioned (Nrp)

   Reactive Flow Provisioning Mode:

   Max Flow Entries = Total number of learned flow entries (Nrp)

   Forwarding Table Capacity = Max Flow Entries

   Reporting Format:

   The Forwarding Table Capacity results MUST be tabulated with the
   following information in addition to the configuration parameters
   captured in section 4.8:

   - Provisioning Type (Proactive/Reactive)

5.3. Security

5.3.1. Exception Handling

   Objective:

   Determine the effect of handling error packets and notifications on
   performance tests. The impact MUST be measured for the following
   performance tests:

   a. Path Provisioning Rate

   b. Path Provisioning Time

   c. Network Topology Change Detection Time

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. This test MUST be performed after obtaining the baseline
      measurement results for the above performance tests.
   2. Ensure that the invalid messages are not dropped by the
      intermediate devices connecting the controller and the Network
      Devices.

   Procedure:

   1. Perform the above-listed performance tests, and send 1% of the
      messages from the Asynchronous Message Processing Rate as
      invalid messages from the connected Network Devices emulated at
      the forwarding plane test emulator.
   2. Perform the above-listed performance tests, and send 2% of the
      messages from the Asynchronous Message Processing Rate as
      invalid messages from the connected Network Devices emulated at
      the forwarding plane test emulator.

   Note:

   Invalid messages can be frames with incorrect protocol fields or
   any form of failure notifications sent towards the controller.

   Measurement:

   Measurement MUST be done as per the equation defined in the
   corresponding performance test's measurement section.

   Reporting Format:

   The Exception Handling results MUST be reported in the format of a
   table, with a column for each of the below parameters and a row for
   each of the listed performance tests:

   - Without Exceptions

   - With 1% Exceptions

   - With 2% Exceptions
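   One way to realize the 1%/2% mix in the procedure above is to
   interleave invalid messages into the offered message stream at a
   fixed period. In the sketch below, the messages themselves are
   opaque, and make_invalid() is a hypothetical hook that corrupts a
   protocol field or substitutes a failure notification:

      # Yield the message stream with every (100/percent)-th message
      # replaced by an invalid one (period 100 for 1%, 50 for 2%).
      def with_exceptions(messages, make_invalid, percent):
          period = int(100 / percent)
          for i, msg in enumerate(messages, start=1):
              yield make_invalid(msg) if i % period == 0 else msg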
5.3.2. Denial of Service Handling

   Objective:

   Determine the effect of handling DoS attacks on performance and
   scalability tests. The impact MUST be measured for the following
   tests:

   a. Path Provisioning Rate

   b. Path Provisioning Time

   c. Network Topology Change Detection Time

   d. Network Discovery Size

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   This test MUST be performed after obtaining the baseline
   measurement results for the above tests.

   Procedure:

   1. Perform the listed tests, and launch a DoS attack towards the
      controller while the trial is running.

   Note:

   DoS attacks can be launched on one of the following interfaces:

   a. Northbound (e.g., query for flow entries continuously on the
      northbound interface)
   b. Management (e.g., ping requests to the controller's management
      interface)
   c. Southbound (e.g., TCP SYN messages on the southbound interface)

   Measurement:

   Measurement MUST be done as per the equation defined in the
   corresponding test's measurement section.

   Reporting Format:

   The DoS Attacks Handling results MUST be reported in the format of
   a table, with a column for each of the below parameters and a row
   for each of the listed tests:

   - Without any attacks

   - With attacks

   The report should also specify the nature of the attack and the
   interface.

5.4. Reliability

5.4.1. Controller Failover Time

   Objective:

   The time taken to switch from an active controller to the backup
   controller when the controllers work in redundancy mode and the
   active controller fails, defined as the interval starting with
   bringing down the active controller, ending with the first
   re-discovery message received from the new controller at its
   Southbound interface.

   Reference Test Setup:

   The test SHOULD use the test setup described in section 3.2 of this
   document.

   Prerequisite:

   1. Master controller election MUST be completed.
   2. Nodes are connected to the controller cluster as per the
      Redundancy Mode (RM).
   3. The controller cluster should have successfully completed the
      network topology discovery.
   4. The Network Device MUST send all new flows to the controller
      when it receives traffic from the test traffic generator.
   5. The controller should have learned the location of the
      destination (D1) at TP2.

   Procedure:

   1. Send uni-directional traffic continuously with incremental
      sequence numbers and source addresses from test traffic
      generator TP1 at the rate that the controller can process
      without any drops.
   2. Ensure that there are no packet drops observed at TP2.
   3. Bring down the active controller.
   4. Stop the trial when the first frame is received on TP2 after the
      failover operation.
   5. Record the time at which the last valid frame was received (T1)
      at test traffic generator TP2 before the sequence error, and the
      time at which the first valid frame was received (T2) after the
      sequence error at TP2.

   Measurement:

   Controller Failover Time = (T2 - T1)

   Packet Loss = Number of missing packet sequences

   Reporting Format:

   The Controller Failover Time results MUST be tabulated with the
   following information:

   - Number of cluster nodes

   - Redundancy mode

   - Controller Failover Time

   - Packet Loss

   - Cluster keep-alive interval
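   The T1/T2 bookkeeping in the failover procedure above reduces to
   finding the first gap in the received sequence numbers. A minimal
   sketch, assuming the frames captured at TP2 are available as
   (timestamp, sequence_number) tuples in arrival order and that
   sequence numbers increment by one:

      # Scan for the first sequence gap; report T1 (last valid frame
      # before the gap), T2 (first valid frame after it), and the loss.
      def failover_time(frames):
          for (t1, s1), (t2, s2) in zip(frames, frames[1:]):
              if s2 != s1 + 1:                    # sequence error found
                  return {
                      "failover_time": t2 - t1,   # T2 - T1
                      "packet_loss": s2 - s1 - 1, # missing sequences
                  }
          return None                             # no gap observed

   The same gap-scan, applied per direction, also yields the TP1/TP2
   measurements of Section 5.4.2.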
5.4.2. Network Re-Provisioning Time

   Objective:

   The time taken by the Controller to re-route the traffic when there
   is a failure in the existing traffic paths, defined as the interval
   starting from the first failure notification message received by
   the controller, ending with the last flow re-provisioning message
   sent by the controller at its Southbound interface.

   Reference Test Setup:

   This test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

   Prerequisite:

   1. A network with the given number of nodes and redundant paths
      MUST be deployed.
   2. Ensure that the controller has knowledge about the location of
      test traffic generators TP1 and TP2.
   3. Ensure that the controller does not pre-provision the alternate
      path in the emulated Network Devices at the forwarding plane
      test emulator.

   Procedure:

   1. Send bi-directional traffic continuously with unique sequence
      numbers from TP1 and TP2.
   2. Bring down a link or switch in the traffic path.
   3. Stop the trial after receiving the first frame after network
      re-convergence.
   4. Record the time of the last received frame prior to the frame
      loss at TP2 (TP2-Tlfr), and the time of the first frame received
      after the frame loss at TP2 (TP2-Tffr). There MUST be a gap in
      the sequence numbers of these frames.
   5. Record the time of the last received frame prior to the frame
      loss at TP1 (TP1-Tlfr), and the time of the first frame received
      after the frame loss at TP1 (TP1-Tffr).

   Measurement:

   Forward Direction Path Re-Provisioning Time (FDRT)
                                       = (TP2-Tffr - TP2-Tlfr)

   Reverse Direction Path Re-Provisioning Time (RDRT)
                                       = (TP1-Tffr - TP1-Tlfr)

   Network Re-Provisioning Time = (FDRT + RDRT)/2

   Forward Direction Packet Loss = Number of missing sequence frames
                                   at TP1

   Reverse Direction Packet Loss = Number of missing sequence frames
                                   at TP2

   Reporting Format:

   The Network Re-Provisioning Time results MUST be tabulated with the
   following information:

   - Number of nodes in the primary path

   - Number of nodes in the alternate path

   - Network Re-Provisioning Time

   - Forward Direction Packet Loss

   - Reverse Direction Packet Loss

6. References

6.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, May 2017.

   [I-D.sdn-controller-benchmark-term]
              Vengainathan, B., Basil, A., Tassinari, M., Manral, V.,
              and S. Banks, "Terminology for Benchmarking SDN
              Controller Performance",
              draft-ietf-bmwg-sdn-controller-benchmark-term-10 (work
              in progress), May 25, 2018.

6.2. Informative References

   [OpenFlow Switch Specification]
              ONF, "OpenFlow Switch Specification" Version 1.4.0 (Wire
              Protocol 0x05), October 14, 2013.

7. IANA Considerations

   This document does not have any IANA requests.

8. Security Considerations

   The benchmarking tests described in this document are limited to
   the performance characterization of controllers in a lab
   environment with an isolated network.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the controller.

   Special capabilities SHOULD NOT exist in the controller specifically
   for benchmarking purposes. Any implications for network security
   arising from the controller SHOULD be identical in the lab and in
   production networks.

9. Acknowledgments

   The authors would like to thank the following individuals for
   providing their valuable comments to the earlier versions of this
   document: Al Morton (AT&T), Sandeep Gangadharan (HP), M. Georgescu
   (NAIST), Andrew McGregor (Google), Scott Bradner, Jay Karthik
   (Cisco), Ramakrishnan (Dell), Khasanov Boris (Huawei), and Brian
   Castelli (Spirent).

   This document was prepared using 2-Word-v2.0.template.dot.
Appendix A Benchmarking Methodology using OpenFlow Controllers

   This section gives an overview of the OpenFlow protocol and
   provides a test methodology to benchmark SDN controllers supporting
   the OpenFlow southbound protocol. The OpenFlow protocol is used as
   an example to illustrate the methodologies defined in this
   document.

A.1. Protocol Overview

   OpenFlow is an open standard protocol defined by the Open
   Networking Foundation (ONF) [OpenFlow Switch Specification], used
   for programming the forwarding plane of network switches or routers
   via a centralized controller.

A.2. Messages Overview

   The OpenFlow protocol supports three message types, namely
   controller-to-switch, asynchronous and symmetric.

   Controller-to-switch messages are initiated by the controller and
   used to directly manage or inspect the state of the switch. These
   messages allow controllers to query/configure the switch (Features
   and Configuration messages), collect information from the switch
   (Read-State message), send packets on a specified port of the
   switch (Packet-out message), and modify the switch forwarding plane
   and state (Modify-State, Role-Request messages, etc.).

   Asynchronous messages are generated by the switch without a
   controller soliciting them. These messages allow switches to update
   controllers to denote the arrival of a new flow (Packet-in), a
   switch state change (Flow-Removed, Port-status) and an error
   (Error).

   Symmetric messages are generated in either direction without
   solicitation. These messages allow switches and controllers to set
   up a connection (Hello), verify liveness (Echo) and offer
   additional functionality (Experimenter).

A.3. Connection Overview

   The OpenFlow channel is used to exchange OpenFlow messages between
   an OpenFlow switch and an OpenFlow controller. The OpenFlow channel
   connection can be set up using plain TCP or TLS. By default, a
   switch establishes a single connection with the SDN controller. A
   switch may establish multiple parallel connections to a single
   controller (auxiliary connections) or to multiple controllers to
   handle controller failures and load balancing.
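   As a minimal sketch of the connection setup just described: the
   emulated switch opens a TCP connection to the controller's OpenFlow
   port and exchanges OFPT_HELLO. The fixed 8-byte OpenFlow header is
   version(1) | type(1) | length(2) | xid(4); version 0x04 denotes
   OpenFlow 1.3, and message type 0 is OFPT_HELLO. Port 6653 is the
   IANA-assigned OpenFlow port; everything else here is illustrative.

      import socket
      import struct

      def _read_exact(sock, n):
          buf = b""
          while len(buf) < n:
              chunk = sock.recv(n - len(buf))
              if not chunk:
                  raise ConnectionError("OpenFlow channel closed")
              buf += chunk
          return buf

      def openflow_hello(controller_ip, port=6653, xid=1):
          s = socket.create_connection((controller_ip, port), timeout=5)
          s.sendall(struct.pack("!BBHI", 0x04, 0, 8, xid))   # our HELLO
          version, mtype, length, peer_xid = struct.unpack(
              "!BBHI", _read_exact(s, 8))
          assert mtype == 0, "expected OFPT_HELLO from the controller"
          return s, version   # keep the session open for later messages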
A.4. Performance Benchmarking Tests

A.4.1. Network Topology Discovery Time

   Procedure:

      Network Devices              OpenFlow           SDN
                                   Controller         Application
             |                          |                  |
             |                          |                  |
             |                          |                  |
             |   OFPT_HELLO Exchange    |                  |
             |<------------------------>|                  |
             |                          |                  |
             |     PACKET_OUT with LLDP |                  |
             |     to all switches      |                  |
        (Tm1)|<-------------------------|                  |
             |                          |                  |
             |      PACKET_IN with LLDP |                  |
             |      rcvd from switch-1  |                  |
             |------------------------->|                  |
             |                          |                  |
             |      PACKET_IN with LLDP |                  |
             |      rcvd from switch-2  |                  |
             |------------------------->|                  |
             |            .             |                  |
             |            .             |                  |
             |                          |                  |
             |      PACKET_IN with LLDP |                  |
             |      rcvd from switch-n  |                  |
        (Tmn)|------------------------->|                  |
             |                          |                  |
             |                          |<Wait for the expiry of
             |                          | the Trial Duration (Td)>
             |                          |                  |
             |                          | Query the controller for
             |                          | discovered n/w topo.(Di)
             |                          |<-----------------|
             |                          |                  |
             |                          |<Compare the discovered
             |                          | n/w topology and the
             |                          | offered n/w topology>
             |                          |                  |

   Legend:

      NB: Northbound
      SB: Southbound
      OF: OpenFlow
      Tm1: Time of reception of first LLDP message from controller
      Tmn: Time of last LLDP message sent to controller

   Discussion:

   The Network Topology Discovery Time can be obtained by calculating
   the time difference between the first PACKET_OUT with an LLDP
   message received from the controller (Tm1) and the last PACKET_IN
   with an LLDP message sent to the controller (Tmn) when the
   comparison of the discovered and offered topologies is successful.

A.4.2. Asynchronous Message Processing Time

   Procedure:

      Network Devices              OpenFlow           SDN
                                   Controller         Application
             |                          |                  |
             |PACKET_IN with single     |                  |
             |OFP match header          |                  |
         (T0)|------------------------->|                  |
             |                          |                  |
             | PACKET_OUT with single   |                  |
             | OFP action header        |                  |
         (R0)|<-------------------------|                  |
             |            .             |                  |
             |            .             |                  |
             |            .             |                  |
             |                          |                  |
             |PACKET_IN with single     |                  |
             |OFP match header          |                  |
         (Tn)|------------------------->|                  |
             |                          |                  |
             | PACKET_OUT with single   |                  |
             | OFP action header        |                  |
         (Rn)|<-------------------------|                  |
             |                          |                  |
             |                          |                  |

   Legend:

      T0,T1, ..Tn are PACKET_IN message transmit timestamps.
      R0,R1, ..Rn are PACKET_OUT message receive timestamps.
      Nrx: Number of successful PACKET_IN/PACKET_OUT message
           exchanges

   Discussion:

   The Asynchronous Message Processing Time is obtained as the sum
   ((R0-T0)+(R1-T1)+..+(Rn-Tn)) divided by Nrx.

A.4.3. Asynchronous Message Processing Rate

   Procedure:

      Network Devices              OpenFlow           SDN
                                   Controller         Application
             |                          |                  |
             |PACKET_IN with single OFP |                  |
             |match header              |                  |
             |------------------------->|                  |
             |                          |                  |
             | PACKET_OUT with single   |                  |
             | OFP action header        |                  |
             |<-------------------------|                  |
             |            .             |                  |
             |            .             |                  |
             |            .             |                  |
             |                          |                  |
             |PACKET_IN with single OFP |                  |
             |match header              |                  |
             |------------------------->|                  |
             |                          |                  |
             | PACKET_OUT with single   |                  |
             | OFP action header        |                  |
             |<-------------------------|                  |
             |                          |                  |
             |                          |                  |

   Note: Ntx1 on initial trials should be greater than Nrx1, and the
   trials should be repeated until Nrxn for two consecutive trials is
   equal within (+/-P%).

   Discussion:

   This test measures two benchmarks using a single procedure.  1) The
   Maximum Asynchronous Message Processing Rate is obtained by
   calculating the maximum PACKET_OUTs (Nrxn) received from the
   controller(s) across n trials.  2) The Loss-free Asynchronous
   Message Processing Rate is obtained by calculating the maximum
   PACKET_OUTs received from the controller(s) when the Loss Ratio
   equals zero.  The Loss Ratio is obtained as 1 - Nrxn/Ntxn.
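   The two discussions above reduce to a few lines of code.  The
   sketch below, again illustrative only, computes the mean processing
   time from matched timestamp lists (A.4.2) and implements the
   repeated-trial convergence check from the note in A.4.3;
   run_trial() is a hypothetical driver that sends PACKET_IN messages
   for one trial duration and returns the (Ntx, Nrx) counts.

      def async_processing_time(tx_times, rx_times):
          # Mean of (Ri - Ti) over the Nrx successful exchanges.
          nrx = len(rx_times)
          return sum(r - t for t, r in zip(tx_times, rx_times)) / nrx

      def async_processing_rate(run_trial, p_percent=5.0,
                                max_trials=10):
          # Repeat trials until Nrx of two consecutive trials agrees
          # within +/- P%, then report the last Nrx.
          prev = None
          for _ in range(max_trials):
              ntx, nrx = run_trial()  # Ntx should exceed Nrx at first
              if prev is not None and \
                      abs(nrx - prev) <= prev * p_percent / 100.0:
                  return nrx          # converged within +/- P%
              prev = nrx
          return prev                 # not converged; report last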
A.4.4. Reactive Path Provisioning Time

   Procedure:

   Test Traffic     Test Traffic     Network Devices    OpenFlow
   Generator TP1    Generator TP2                       Controller
        |                 |                 |                |
        |                 |G-ARP (D1)       |                |
        |                 |---------------->|                |
        |                 |                 |                |
        |                 |                 |PACKET_IN(D1)   |
        |                 |                 |--------------->|
        |                 |                 |                |
        |Traffic (S1,D1)  |                 |                |
  (Tsf1)|---------------------------------->|                |
        |                 |                 |                |
        |                 |                 |PACKET_IN(S1,D1)|
        |                 |                 |--------------->|
        |                 |                 |                |
        |                 |                 |  FLOW_MOD(D1)  |
        |                 |                 |<---------------|
        |                 |                 |                |
        |                 |Traffic (S1,D1)  |                |
        |           (Tdf1)|<----------------|                |
        |                 |                 |                |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of first frame sent from TP1
      Tdf1: Time of first frame received at TP2

   Discussion:

   The Reactive Path Provisioning Time can be obtained by finding the
   time difference between the transmit time at TP1 and the receive
   time at TP2 of the traffic (Tdf1-Tsf1).

A.4.5. Proactive Path Provisioning Time

   Procedure:

   Test Traffic   Test Traffic   Network Devices  OpenFlow    SDN
   Generator TP1  Generator TP2                   Controller  Application
        |               |              |               |           |
        |               |              |               |<Install   |
        |               |              |               | flow for D1>
        |               |G-ARP (D1)    |               |           |
        |               |------------->|               |           |
        |               |              |               |           |
        |               |              |PACKET_IN(D1)  |           |
        |               |              |-------------->|           |
        |               |              |               |           |
        |Traffic (S1,D1)|              |               |           |
  (Tsf1)|----------------------------->|               |           |
        |               |              |               |           |
        |               |              | FLOW_MOD(D1)  |           |
        |               |              |<--------------|           |
        |               |              |               |           |
        |               |Traffic (S1,D1)               |           |
        |         (Tdf1)|<-------------|               |           |
        |               |              |               |           |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of first frame sent from TP1
      Tdf1: Time of first frame received at TP2

   Discussion:

   The Proactive Path Provisioning Time can be obtained by finding the
   time difference between the transmit time at TP1 and the receive
   time at TP2 of the traffic (Tdf1-Tsf1).

A.4.6. Reactive Path Provisioning Rate

   Procedure:

   Test Traffic     Test Traffic     Network Devices     OpenFlow
   Generator TP1    Generator TP2                        Controller
        |                 |                 |                 |
        |                 |                 |                 |
        |                 |G-ARP (D1..Dn)   |                 |
        |                 |---------------->|                 |
        |                 |                 |                 |
        |                 |                 |PACKET_IN(D1..Dn)|
        |                 |                 |---------------->|
        |                 |                 |                 |
        |Traffic (S1..Sn,D1..Dn)            |                 |
        |---------------------------------->|                 |
        |                 |                 |                 |
        |                 |                 |PACKET_IN(S1.Sn, |
        |                 |                 |          D1.Dn) |
        |                 |                 |---------------->|
        |                 |                 |                 |
        |                 |                 |  FLOW_MOD(S1)   |
        |                 |                 |<----------------|
        |                 |                 |                 |
        |                 |                 |  FLOW_MOD(D1)   |
        |                 |                 |<----------------|
        |                 |                 |                 |
        |                 |                 |  FLOW_MOD(S2)   |
        |                 |                 |<----------------|
        |                 |                 |                 |
        |                 |                 |  FLOW_MOD(D2)   |
        |                 |                 |<----------------|
        |                 |                 |        .        |
        |                 |                 |        .        |
        |                 |                 |                 |
        |                 |                 |  FLOW_MOD(Sn)   |
        |                 |                 |<----------------|
        |                 |                 |                 |
        |                 |                 |  FLOW_MOD(Dn)   |
        |                 |                 |<----------------|
        |                 |                 |                 |
        |                 |Traffic (S1..Sn, |                 |
        |                 |         D1..Dn) |                 |
        |                 |<----------------|                 |
        |                 |                 |                 |

   Legend:

      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
              Endpoint n

   Discussion:

   The Reactive Path Provisioning Rate can be obtained by counting the
   total number of frames received at TP2 during the trial duration.
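   The provisioning tests above rely on gratuitous ARP (G-ARP)
   announcements from the traffic generators to make the destination
   endpoints known.  A minimal frame builder is sketched below; the
   helper name and the choice of an ARP request with sender IP equal
   to target IP are illustrative assumptions.

      import struct

      def gratuitous_arp(src_mac: bytes, ip4: bytes) -> bytes:
          # Build an Ethernet/ARP gratuitous announcement for ip4.
          bcast = b"\xff" * 6
          eth = bcast + src_mac + struct.pack("!H", 0x0806)  # ARP
          arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
          # htype 1 (Ethernet), ptype IPv4, hlen 6, plen 4,
          # opcode 1 (request)
          arp += src_mac + ip4            # sender MAC / sender IP
          arp += b"\x00" * 6 + ip4        # target IP == sender IP
          return eth + arp

   For example, gratuitous_arp(b"\x00\x11\x22\x33\x44\x55",
   socket.inet_aton("10.0.0.1")) yields a 42-octet frame ready to
   transmit on a test port.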
A.4.7. Proactive Path Provisioning Rate

   Procedure:

   Test Traffic   Test Traffic   Network Devices  OpenFlow    SDN
   Generator TP1  Generator TP2                   Controller  Application
        |               |              |                |          |
        |               |G-ARP (D1..Dn)|                |          |
        |               |------------->|                |          |
        |               |              |                |          |
        |               |              |PACKET_IN(D1.Dn)|          |
        |               |              |--------------->|          |
        |               |              |                |          |
        |Traffic (S1..Sn,D1..Dn)       |                |          |
  (Tsf1)|----------------------------->|                |          |
        |               |              |                |          |
        |               |              |                |<Install  |
        |               |              |                | flows for|
        |               |              |                | S1..Sn,  |
        |               |              |                | D1..Dn>  |
        |               |              |                |          |
        |               |              | FLOW_MOD(S1)   |          |
        |               |              |<---------------|          |
        |               |              |                |          |
        |               |              | FLOW_MOD(D1)   |          |
        |               |              |<---------------|          |
        |               |              |       .        |          |
        |               |              | FLOW_MOD(Sn)   |          |
        |               |              |<---------------|          |
        |               |              |                |          |
        |               |              | FLOW_MOD(Dn)   |          |
        |               |              |<---------------|          |
        |               |              |                |          |
        |               |Traffic (S1.Sn,|               |          |
        |               |        D1.Dn) |               |          |
        |         (Tdf1)|<--------------|               |          |
        |               |              |                |          |

   Legend:

      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
              Endpoint n

   Discussion:

   The Proactive Path Provisioning Rate can be obtained by counting
   the total number of frames received at TP2 during the trial
   duration.

A.4.8. Network Topology Change Detection Time

   Procedure:

      Network Devices              OpenFlow           SDN
                                   Controller         Application
             |                          |                  |
             |                          |                  |
             |                          |                  |
          T0 |PORT_STATUS with link down|                  |
             | from S1                  |                  |
             |------------------------->|                  |
             |                          |                  |
             |First PACKET_OUT with LLDP|                  |
             |to OF Switch              |                  |
          T1 |<-------------------------|                  |
             |                          |                  |
             |                          |                  |

   Discussion:

   The Network Topology Change Detection Time can be obtained by
   finding the difference between the time the OpenFlow switch S1
   sends the PORT_STATUS message (T0) and the time that the OpenFlow
   controller sends the first topology re-discovery message (T1) to
   the OpenFlow switches.
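   This measurement can be automated from a control-channel capture
   taken at switch S1.  The sketch below assumes a hypothetical
   time-ordered event log of (timestamp, message type, carries-LLDP)
   tuples; the log format and names are illustrative only.

      def topology_change_detection_time(events):
          # T0: PORT_STATUS sent by S1; T1: first re-discovery
          # PACKET_OUT carrying LLDP from the controller.
          t0 = t1 = None
          for ts, msg_type, is_lldp in events:
              if msg_type == "PORT_STATUS" and t0 is None:
                  t0 = ts               # link-down notification sent
              elif t0 is not None and msg_type == "PACKET_OUT" \
                      and is_lldp:
                  t1 = ts               # first re-discovery LLDP
                  break
          return t1 - t0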
A.5. Scalability

A.5.1. Control Sessions Capacity

   Procedure:

      Network Devices                    OpenFlow
                                         Controller
           |                                  |
           |   OFPT_HELLO Exchange for Switch 1   |
           |<------------------------------------>|
           |                                  |
           |   OFPT_HELLO Exchange for Switch 2   |
           |<------------------------------------>|
           |                 .                |
           |                 .                |
           |                 .                |
           |   OFPT_HELLO Exchange for Switch n   |
           |X<---------------------------------->X|
           |                                  |

   Discussion:

   The number of successfully established control sessions (n-1),
   i.e., the count of switches connected when the exchange for
   switch n fails, provides the Control Sessions Capacity.

A.5.2. Network Discovery Size

   Procedure:

      Network Devices              OpenFlow           SDN
                                   Controller         Application
             |                          |                  |
             |                          |                  |
             |                          |                  |
             |   OFPT_HELLO Exchange    |                  |
             |<------------------------>|                  |
             |                          |                  |
             |     PACKET_OUT with LLDP |                  |
             |     to all switches      |                  |
             |<-------------------------|                  |
             |                          |                  |
             |      PACKET_IN with LLDP |                  |
             |      rcvd from switch-1  |                  |
             |------------------------->|                  |
             |                          |                  |
             |      PACKET_IN with LLDP |                  |
             |      rcvd from switch-2  |                  |
             |------------------------->|                  |
             |            .             |                  |
             |            .             |                  |
             |                          |                  |
             |      PACKET_IN with LLDP |                  |
             |      rcvd from switch-n  |                  |
             |------------------------->|                  |
             |                          |                  |
             |                          |<Wait for the expiry of
             |                          | the Trial Duration>
             |                          |                  |
             |                          | Query the controller for
             |                          | discovered n/w topo.(N1)
             |                          |<-----------------|
             |                          |                  |
             |                          |<Compare the discovered
             |                          | n/w topology and the
             |                          | offered n/w topology>
             |                          |                  |

   Legend:

      n/w topo: Network Topology
      OF: OpenFlow

   Discussion:

   The value of N1 provides the Network Discovery Size value.  The
   trial duration can be set to the stipulated time within which the
   user expects the controller to complete the discovery process.

A.5.3. Forwarding Table Capacity

   Procedure:

   Test Traffic      Network Devices        OpenFlow       SDN
   Generator TP1                            Controller     Application
        |                   |                   |               |
        |                   |                   |               |
        |G-ARP (H1..Hn)     |                   |               |
        |------------------>|                   |               |
        |                   |                   |               |
        |                   |PACKET_IN(D1..Dn)  |               |
        |                   |------------------>|               |
        |                   |                   |               |
        |                   |                   |<Wait for 5 secs>
        |                   |                   |               |
        |                   |                   |<Query FWD table
        |                   |                   | entry count>  |(F1)
        |                   |                   |               |
        |                   |                   |<Wait for 5 secs>
        |                   |                   |               |
        |                   |                   |<Query FWD table
        |                   |                   | entry count>  |(F2)
        |                   |                   |               |
        |                   |                   |<Wait for 5 secs>
        |                   |                   |               |
        |                   |                   |<Query FWD table
        |                   |                   | entry count>  |(F3)
        |                   |                   |               |

   Legend:

      G-ARP: Gratuitous ARP
      H1..Hn: Host 1 .. Host n
      FWD: Forwarding Table

   Discussion:

   Query the controller for the number of learnt forwarding table
   entries multiple times, until three consecutive queries return the
   same value.  The last value retrieved from the controller provides
   the Forwarding Table Capacity value.  The query interval is user
   configurable; the 5 seconds shown in this example is for
   representational purpose.
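   The query loop in the discussion above can be sketched as follows;
   query_fwd_entries() stands in for whatever northbound call the
   controller exposes for retrieving the number of installed
   forwarding entries, and the 5-second default mirrors the
   representational interval in the figure.

      import time

      def forwarding_table_capacity(query_fwd_entries, interval=5.0):
          # Poll until three consecutive queries return the same
          # value; that stable value is the Forwarding Table Capacity.
          history = []
          while True:
              history.append(query_fwd_entries())
              if len(history) >= 3 and len(set(history[-3:])) == 1:
                  return history[-1]    # stable across 3 queries
              time.sleep(interval)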
A.6. Security

A.6.1. Exception Handling

   Procedure:

   Test Traffic   Test Traffic   Network Devices  OpenFlow    SDN
   Generator TP1  Generator TP2                   Controller  Application
        |               |               |               |          |
        |               |G-ARP (D1..Dn) |               |          |
        |               |-------------->|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(D1..Dn)         |
        |               |               |-------------->|          |
        |               |               |               |          |
        |Traffic (S1..Sn,D1..Dn)        |               |          |
        |------------------------------>|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(S1..Sa,         |
        |               |               |       D1..Da) |          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |               |PACKET_IN(Sa+1..          |
        |               |               |.Sn,Da+1..Dn)  |          |
        |               |               |(1% incorrect OFP         |
        |               |               | Match header) |          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(D1..Dn)          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(S1..Sa)          |
        |               |               |  OFP headers  |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |Traffic (S1..Sa,|              |          |
        |               |        D1..Da) |              |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        |   <Wait for the expiry of the Trial Duration> |          |
        |               |               |               |          |
        |   <Record the number of frames received at TP2 (Rn1)>    |
        |               |               |               |          |
        |   <Repeat the procedure with 2% incorrect PACKET_IN      |
        |    messages and record Rn2>   |               |          |
        |               |               |               |          |

   Legend:

      G-ARP: Gratuitous ARP
      PACKET_IN(Sa+1..Sn,Da+1..Dn): OpenFlow PACKET_IN with wrong
                                    version number
      Rn1: Total number of frames received at Test Port 2 with
           1% incorrect frames
      Rn2: Total number of frames received at Test Port 2 with
           2% incorrect frames

   Discussion:

   The traffic rate sent towards the OpenFlow switch from Test Port 1
   should be 1% higher than the Path Programming Rate.  Rn1 will
   provide the Path Provisioning Rate of the controller when handling
   1% incorrect frames, and Rn2 will provide the Path Provisioning
   Rate of the controller when handling 2% incorrect frames.

   The procedure defined above provides test steps to determine the
   effect of handling error packets on the Path Programming Rate.  The
   same procedure can be adopted to determine the effects on the other
   performance tests described in this document.

A.6.2. Denial of Service Handling

   Procedure:

   Test Traffic   Test Traffic   Network Devices  OpenFlow    SDN
   Generator TP1  Generator TP2                   Controller  Application
        |               |               |               |          |
        |               |G-ARP (D1..Dn) |               |          |
        |               |-------------->|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(D1..Dn)         |
        |               |               |-------------->|          |
        |               |               |               |          |
        |Traffic (S1..Sn,D1..Dn)        |               |          |
        |------------------------------>|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(S1..Sn,         |
        |               |               |       D1..Dn) |          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |               |TCP SYN Attack |          |
        |               |               |from a switch  |          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(D1..Dn)          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(S1..Sn)          |
        |               |               |  OFP headers  |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |Traffic (S1..Sn,|              |          |
        |               |        D1..Dn) |              |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        |   <Wait for the expiry of the Trial Duration> |          |
        |               |               |               |          |
        |   <Record the number of frames received at TP2 (Rn1)>    |
        |               |               |               |          |

   Legend:

      G-ARP: Gratuitous ARP
      Rn1: Total number of frames received at Test Port 2

   Discussion:

   The TCP SYN attack should be launched from one of the
   emulated/simulated OpenFlow switches.  Rn1 provides the Path
   Programming Rate of the controller upon handling the denial-of-
   service attack.

   The procedure defined above provides test steps to determine the
   effect of handling denial of service on the Path Programming Rate.
   The same procedure can be adopted to determine the effects on the
   other performance tests described in this document.
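   Both security tests above depend on injecting a controlled fraction
   of invalid control messages.  One simple way to approximate the
   1%/2% mix for the exception-handling test is sketched below;
   make_packet_in() is a hypothetical encoder for a well-formed
   PACKET_IN, and corrupting the version octet is just one example of
   an incorrect OFP header.

      import random

      def packet_in_stream(make_packet_in, count, bad_percent=1.0):
          # Yield `count` PACKET_IN messages, corrupting the OpenFlow
          # version octet in roughly bad_percent of them.
          for _ in range(count):
              msg = bytearray(make_packet_in())
              if random.random() < bad_percent / 100.0:
                  msg[0] = 0x7f        # invalid OpenFlow version
              yield bytes(msg)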
A.7. Reliability

A.7.1. Controller Failover Time

   Procedure:

   Test Traffic   Test Traffic   Network Device   OpenFlow    SDN
   Generator TP1  Generator TP2                   Controller  Application
        |               |              |               |          |
        |               |G-ARP (D1)    |               |          |
        |               |------------->|               |          |
        |               |              |               |          |
        |               |              |PACKET_IN(D1)  |          |
        |               |              |-------------->|          |
        |               |              |               |          |
        |Traffic (S1..Sn,D1)           |               |          |
        |----------------------------->|               |          |
        |               |              |               |          |
        |               |              |PACKET_IN(S1,D1)          |
        |               |              |-------------->|          |
        |               |              |               |          |
        |               |              |FLOW_MOD(D1)   |          |
        |               |              |<--------------|          |
        |               |              |FLOW_MOD(S1)   |          |
        |               |              |<--------------|          |
        |               |              |               |          |
        |               |Traffic (S1,D1)               |          |
        |               |<-------------|               |          |
        |               |              |               |          |
        |               |              |PACKET_IN(S2,D1)          |
        |               |              |-------------->|          |
        |               |              |               |          |
        |               |              |FLOW_MOD(S2)   |          |
        |               |              |<--------------|          |
        |               |              |               |          |
        |               |              |PACKET_IN(Sn-1,D1)        |
        |               |              |-------------->|          |
        |               |              |               |          |
        |               |              |PACKET_IN(Sn,D1)          |
        |               |              |-------------->|          |
        |               |              |       .       |          |
        |               |              |       .       |          |
        |               |              |               |<Bring down
        |               |              |               | the active
        |               |              |               | controller>
        |               |              | FLOW_MOD(Sn-1)|          |
        |               |              |    <-X--------|          |
        |               |              |               |          |
        |               |              |FLOW_MOD(Sn)   |          |
        |               |              |<--------------|          |
        |               |              |               |          |
        |               |Traffic (Sn,D1)               |          |
        |               |<-------------|               |          |
        |               |              |               |          |
        |               |              |               |          |

   Legend:

      G-ARP: Gratuitous ARP.

   Discussion:

   The time difference between the last valid frame received before
   the traffic loss and the first frame received after the traffic
   loss will provide the controller failover time.

   If there is no frame loss during the controller failover, the
   controller failover time can be deemed negligible.

A.7.2. Network Re-Provisioning Time

   Procedure:

   Test Traffic   Test Traffic   Network Devices  OpenFlow    SDN
   Generator TP1  Generator TP2                   Controller  Application
        |               |               |               |          |
        |               |G-ARP (D1)     |               |          |
        |               |-------------->|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(D1)  |          |
        |               |               |-------------->|          |
        | G-ARP (S1)    |               |               |          |
        |------------------------------>|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(S1)  |          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |Traffic (S1,D1,Seq.no (1..n))  |               |          |
        |------------------------------>|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(S1,D1)          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |Traffic (D1,S1,|               |          |
        |               | Seq.no (1..n))|               |          |
        |               |-------------->|               |          |
        |               |               |               |          |
        |               |               |PACKET_IN(D1,S1)          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(D1)   |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(S1)   |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |     Seq.no(1))|               |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |     Seq.no(2))|               |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(1))     |               |          |
        |<------------------------------|               |          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(2))     |               |          |
        |<------------------------------|               |          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(x))     |               |          |
        |<------------------------------|               |          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |     Seq.no(x))|               |          |
        |               |<--------------|               |          |
        |               |               |               |          |
        |  <Bring down a link or switch in the traffic path>       |
        |               |               |               |          |
        |               |               |PORT_STATUS(Sa)|          |
        |               |               |-------------->|          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |   Seq.no(n-1))|               |          |
        |               |  X<-----------|               |          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(n-1))   |               |          |
        | X-----------------------------|               |          |
        |               |               |               |          |
        |               |               |FLOW_MOD(D1)   |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        |               |               |FLOW_MOD(S1)   |          |
        |               |               |<--------------|          |
        |               |               |               |          |
        | Traffic (D1,S1,Seq.no(n))     |               |          |
        |<------------------------------|               |          |
        |               |               |               |          |
        |               |Traffic (S1,D1,|               |          |
        |               |     Seq.no(n))|               |          |
        |               |<--------------|               |          |
        |               |               |               |          |

   Legend:

      G-ARP: Gratuitous ARP message.
      Seq.no: Sequence number.
      Sa: Neighbor switch of the switch that was brought down.

   Discussion:

   The time difference between the last valid frame received before
   the traffic loss (packet with sequence number x) and the first
   frame received after the traffic loss (packet with sequence number
   n) will provide the network path re-provisioning time.

   Note that the trial is valid only when the controller provisions
   the alternate path upon network failure.

Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia
   PA 19113

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia
   PA 19113

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett-Packard
   8000 Foothills Blvd
   Roseville, CA 95747

   Email: mark.tassinari@hpe.com

   Vishwas Manral
   Nano Sec
   CA

   Email: vishwas.manral@gmail.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive
   Sunnyvale, CA

   Email: sbanks@encrypted.net