Internet-Draft                                Bhuvaneswaran Vengainathan
Network Working Group                                        Anton Basil
Intended Status: Informational                        Veryx Technologies
Expires: December 28, 2017                                Mark Tassinari
                                                         Hewlett-Packard
                                                          Vishwas Manral
                                                                Nano Sec
                                                             Sarah Banks
                                                          VSS Monitoring
                                                           June 28, 2017

        Benchmarking Methodology for SDN Controller Performance
             draft-ietf-bmwg-sdn-controller-benchmark-meth-04

Abstract

   This document defines the methodologies for benchmarking the control
   plane performance of SDN controllers. Terminology related to
   benchmarking SDN controllers is described in the companion
   terminology document. SDN controllers have been implemented with
   many varying designs in order to achieve their intended network
   functionality. Hence, the authors have taken the approach of
   considering an SDN controller as a black box, defining the
   methodology in a manner that is agnostic to the protocols and
   network services supported by controllers. The intent of this
   document is to provide a standard mechanism to measure the
   performance of all controller implementations.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF). Note that other groups may also distribute
   working documents as Internet-Drafts. The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 28, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction..................................................4
   2. Scope.........................................................4
   3. Test Setup....................................................4
      3.1. Test setup - Controller working in Standalone Mode.......5
      3.2. Test setup - Controller working in Cluster Mode..........6
   4. Test Considerations...........................................7
      4.1. Network Topology.........................................7
      4.2. Test Traffic.............................................7
      4.3. Test Emulator Requirements...............................7
      4.4. Connection Setup.........................................7
      4.5. Measurement Point Specification and Recommendation.......8
      4.6. Connectivity Recommendation..............................8
      4.7. Test Repeatability.......................................8
   5. Benchmarking Tests............................................9
      5.1. Performance..............................................9
         5.1.1. Network Topology Discovery Time.....................9
         5.1.2. Asynchronous Message Processing Time...............11
         5.1.3. Asynchronous Message Processing Rate...............12
         5.1.4. Reactive Path Provisioning Time....................14
         5.1.5. Proactive Path Provisioning Time...................15
         5.1.6. Reactive Path Provisioning Rate....................17
         5.1.7. Proactive Path Provisioning Rate...................18
         5.1.8. Network Topology Change Detection Time.............20
      5.2. Scalability.............................................21
         5.2.1. Control Session Capacity...........................21
         5.2.2. Network Discovery Size.............................22
         5.2.3. Forwarding Table Capacity..........................23
      5.3. Security................................................24
         5.3.1. Exception Handling.................................24
         5.3.2. Denial of Service Handling.........................26
      5.4. Reliability.............................................27
         5.4.1. Controller Failover Time...........................27
         5.4.2. Network Re-Provisioning Time.......................28
   6. References...................................................30
      6.1. Normative References....................................30
      6.2. Informative References..................................31
   7. IANA Considerations..........................................31
   8. Security Considerations......................................31
   9. Acknowledgments..............................................31
   Appendix A. Example Test Topologies.............................33
      A.1. Leaf-Spine Topology - Three Tier Network Architecture...33
      A.2. Leaf-Spine Topology - Two Tier Network Architecture.....33
   Appendix B. Benchmarking Methodology using OpenFlow Controllers.34
      B.1. Protocol Overview.......................................34
      B.2. Messages Overview.......................................34
      B.3. Connection Overview.....................................34
      B.4. Performance Benchmarking Tests..........................35
         B.4.1. Network Topology Discovery Time....................35
         B.4.2. Asynchronous Message Processing Time...............36
         B.4.3. Asynchronous Message Processing Rate...............37
         B.4.4. Reactive Path Provisioning Time....................38
         B.4.5. Proactive Path Provisioning Time...................39
         B.4.6. Reactive Path Provisioning Rate....................40
         B.4.7. Proactive Path Provisioning Rate...................41
         B.4.8. Network Topology Change Detection Time.............42
      B.5. Scalability.............................................43
         B.5.1. Control Sessions Capacity..........................43
         B.5.2. Network Discovery Size.............................43
         B.5.3. Forwarding Table Capacity..........................44
      B.6. Security................................................46
         B.6.1. Exception Handling.................................46
         B.6.2. Denial of Service Handling.........................47
      B.7. Reliability.............................................49
         B.7.1. Controller Failover Time...........................49
         B.7.2. Network Re-Provisioning Time.......................50
   Authors' Addresses..............................................53

1. Introduction

   This document provides generic methodologies for benchmarking SDN
   controller performance. An SDN controller may support many
   northbound and southbound protocols, implement a wide range of
   applications, and work alone or as a group to achieve the desired
   functionality. This document considers an SDN controller as a black
   box, regardless of design and implementation. The tests defined in
   this document can be used to benchmark an SDN controller for
   performance, scalability, reliability, and security, independent of
   northbound and southbound protocols. These tests can be performed on
   an SDN controller running as a virtual machine (VM) instance or on a
   bare metal server. This document is intended for those who want to
   measure the performance of an SDN controller as well as compare the
   performance of various SDN controllers.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

2. Scope

   This document defines a methodology to measure the networking
   metrics of SDN controllers.
   For the purpose of this memo, the SDN controller is a function that
   manages and controls Network Devices. Any SDN controller without a
   control capability is out of scope for this memo. The tests defined
   in this document enable benchmarking of SDN Controllers in two ways:
   as a standalone controller and as a cluster of homogeneous
   controllers. These tests are recommended for execution in lab
   environments rather than in live network deployments. Performance
   benchmarking of a federation of controllers is beyond the scope of
   this document.

3. Test Setup

   The tests defined in this document enable measurement of an SDN
   controller's performance in standalone mode and in cluster mode.
   This section defines common reference topologies that are later
   referred to in individual tests (additional forwarding plane
   topologies are provided in Appendix A).

3.1. Test setup - Controller working in Standalone Mode

   +-----------------------------------------------------------+
   |               Application Plane Test Emulator              |
   |                                                            |
   |        +-----------------+       +-------------+           |
   |        |   Application   |       |   Service   |           |
   |        +-----------------+       +-------------+           |
   |                                                            |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 |
                                 | (Northbound interfaces)
                 +-------------------------------+
                 |       +----------------+      |
                 |       | SDN Controller |      |
                 |       +----------------+      |
                 |                               |
                 |    Device Under Test (DUT)    |
                 +-------------------------------+
                                 | (Southbound interfaces)
                                 |
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                            |
   |   +-----------+                       +-----------+        |
   |   |  Network  |l1                 ln-1|  Network  |        |
   |   |  Device 1 |---- .... ------------ |  Device n |        |
   |   +-----------+                       +-----------+        |
   |        |l0                                  |ln            |
   |        |                                    |              |
   |        |                                    |              |
   |   +---------------+                +---------------+       |
   |   | Test Traffic  |                | Test Traffic  |       |
   |   |  Generator    |                |  Generator    |       |
   |   |    (TP1)      |                |    (TP2)      |       |
   |   +---------------+                +---------------+       |
   |                                                            |
   |              Forwarding Plane Test Emulator                |
   +-----------------------------------------------------------+

                             Figure 1

3.2. Test setup - Controller working in Cluster Mode

   +-----------------------------------------------------------+
   |               Application Plane Test Emulator              |
   |                                                            |
   |        +-----------------+       +-------------+           |
   |        |   Application   |       |   Service   |           |
   |        +-----------------+       +-------------+           |
   |                                                            |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 |
                                 | (Northbound interfaces)
   +---------------------------------------------------------+
   |                                                         |
   |  ------------------             ------------------      |
   | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
   |  ------------------             ------------------      |
   |                                                         |
   |                Device Under Test (DUT)                  |
   +---------------------------------------------------------+
                                 | (Southbound interfaces)
                                 |
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                            |
   |   +-----------+                       +-----------+        |
   |   |  Network  |l1                 ln-1|  Network  |        |
   |   |  Device 1 |---- .... ------------ |  Device n |        |
   |   +-----------+                       +-----------+        |
   |        |l0                                  |ln            |
   |        |                                    |              |
   |        |                                    |              |
   |   +---------------+                +---------------+       |
   |   | Test Traffic  |                | Test Traffic  |       |
   |   |  Generator    |                |  Generator    |       |
   |   |    (TP1)      |                |    (TP2)      |       |
   |   +---------------+                +---------------+       |
   |                                                            |
   |              Forwarding Plane Test Emulator                |
   +-----------------------------------------------------------+

                             Figure 2
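   As a non-normative illustration, the forwarding plane test emulator
   can be realized with a software emulator. The Python sketch below
   uses Mininet (one possible tool; this document does not prescribe
   any particular emulator) to build a small two-tier leaf-spine
   forwarding plane, with hosts standing in for the test traffic
   generators TP1 and TP2, connected to an external DUT controller.
   The controller address and port are placeholders.

      # Non-normative sketch: emulate a two-tier leaf-spine forwarding
      # plane (see Appendix A.2) with Mininet against an external
      # controller. The controller IP/port are placeholders for the DUT.
      from mininet.net import Mininet
      from mininet.node import RemoteController
      from mininet.topo import Topo

      class LeafSpine(Topo):
          def build(self, spines=2, leaves=4):
              spine = [self.addSwitch('s%d' % i) for i in range(1, spines + 1)]
              leaf = [self.addSwitch('e%d' % i) for i in range(1, leaves + 1)]
              for s in spine:              # full mesh between the two tiers
                  for e in leaf:
                      self.addLink(s, e)
              tp1 = self.addHost('tp1')    # stands in for traffic generator TP1
              tp2 = self.addHost('tp2')    # stands in for traffic generator TP2
              self.addLink(tp1, leaf[0])   # first leaf Network Device
              self.addLink(tp2, leaf[-1])  # last leaf Network Device

      net = Mininet(topo=LeafSpine(), controller=None)
      net.addController('c0', controller=RemoteController,
                        ip='192.0.2.10', port=6653)  # DUT southbound address
      net.start()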
4. Test Considerations

4.1. Network Topology

   The test cases SHOULD use a Leaf-Spine topology with at least one
   Network Device in the topology for benchmarking. The test traffic
   generators TP1 and TP2 SHOULD be connected to the first and the last
   leaf Network Device. If a test case uses a test topology with one
   Network Device, the test traffic generators TP1 and TP2 SHOULD be
   connected to the same node. However, to achieve a complete
   performance characterization of the SDN controller, it is
   recommended that the controller be benchmarked for many network
   topologies and a varying number of Network Devices. This document
   includes two sample test topologies, defined in Appendix A, for
   reference. Further, care should be taken to make sure that a loop
   prevention mechanism is enabled either in the SDN controller or in
   the network when the topology contains redundant network paths.

4.2. Test Traffic

   Test traffic is used to notify the controller about the asynchronous
   arrival of new flows. The test cases SHOULD use frame sizes of 128,
   512 and 1508 bytes for benchmarking. Testing using jumbo frames is
   optional.

4.3. Test Emulator Requirements

   The Test Emulator SHOULD timestamp the control messages transmitted
   to and received from the controller on the established network
   connections. The test cases use these values to compute the
   controller processing time.

4.4. Connection Setup

   There may be controller implementations that support unencrypted and
   encrypted network connections with Network Devices. Further, the
   controller may have backward compatibility with Network Devices
   running older versions of southbound protocols. It may be useful to
   measure the controller's performance with one or more of the
   applicable connection setup methods defined below.

   1. Unencrypted connection with Network Devices, running the same
      protocol version.
   2. Unencrypted connection with Network Devices, running different
      protocol versions.
      Example:
      a. Controller running current protocol version and switch
         running older protocol version
      b. Controller running older protocol version and switch
         running current protocol version
   3. Encrypted connection with Network Devices, running the same
      protocol version.
   4. Encrypted connection with Network Devices, running different
      protocol versions.
      Example:
      a. Controller running current protocol version and switch
         running older protocol version
      b. Controller running older protocol version and switch
         running current protocol version
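   As a non-normative illustration of the unencrypted and encrypted
   connection setups listed above, the following Python sketch opens a
   southbound session either as plain TCP or wrapped in TLS. The port
   shown is the IANA-assigned OpenFlow port; substitute the southbound
   protocol port and certificate handling actually in use.

      import socket, ssl

      SOUTHBOUND_PORT = 6653  # IANA-assigned OpenFlow port; adjust as needed

      def connect(host, port=SOUTHBOUND_PORT, encrypted=False, cafile=None):
          """Open a southbound connection as plain TCP or as a TLS session."""
          sock = socket.create_connection((host, port), timeout=5)
          if not encrypted:
              return sock
          ctx = ssl.create_default_context(cafile=cafile)
          # Lab setups often use self-signed certificates; relax checks
          # only in an isolated benchmarking network.
          ctx.check_hostname = False
          ctx.verify_mode = ssl.CERT_REQUIRED if cafile else ssl.CERT_NONE
          return ctx.wrap_socket(sock, server_hostname=host)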
4.5. Measurement Point Specification and Recommendation

   The measurement accuracy depends on several factors, including the
   point of observation where the indications are captured. For
   example, a notification can be observed at the controller or at the
   test emulator. The test operator SHOULD make the observations/
   measurements at the interfaces of the test emulator, unless it is
   explicitly stated otherwise in the individual test. In any case, the
   locations of the measurement points MUST be reported.

4.6. Connectivity Recommendation

   The SDN controller in the test setup SHOULD be connected directly
   with the forwarding and the management plane test emulators to avoid
   any delays or failures introduced by intermediate devices during
   benchmarking tests. When the controller is implemented as a virtual
   machine, details of the physical and logical connectivity MUST be
   reported.

4.7. Test Repeatability

   To increase confidence in the measured results, each test SHOULD be
   repeated a minimum of 10 times.

Test Reporting

   Each test has a reporting format that contains some global and
   identical reporting components, and some individual components that
   are specific to individual tests. The following test configuration
   parameters and controller settings parameters MUST be reflected in
   the test report.

   Test Configuration Parameters:

   1. Controller name and version
   2. Northbound protocols and versions
   3. Southbound protocols and versions
   4. Controller redundancy mode (Standalone or Cluster Mode)
   5. Connection setup (Unencrypted or Encrypted)
   6. Network Topology (Mesh or Tree or Linear)
   7. Network Device Type (Physical or Virtual or Emulated)
   8. Number of Nodes
   9. Number of Links
   10. Dataplane Test Traffic Type
   11. Controller System Configuration (e.g., Physical or Virtual
       Machine, CPU, Memory, Caches, Operating System, Interface
       Speed, Storage)
   12. Reference Test Setup (e.g., Section 3.1)

   Controller Settings Parameters:

   1. Topology re-discovery timeout
   2. Controller redundancy mode (e.g., active-standby)
   3. Controller state persistence enabled/disabled

   To ensure test repeatability, the following capabilities of the test
   emulator SHOULD be reported:

   1. Maximum number of Network Devices that the forwarding plane
      emulates
   2. Control message processing time (e.g., Topology Discovery
      Messages)

   One way to determine the above two values is to simulate the
   required control sessions and messages from the control plane.
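   A test harness can carry the mandatory reporting parameters
   alongside the results. The non-normative Python sketch below records
   the test configuration parameters listed above as a structured
   object; all values shown are placeholders.

      from dataclasses import dataclass, asdict
      import json

      @dataclass
      class TestReport:
          """Test configuration parameters that MUST appear in a report."""
          controller_name: str
          controller_version: str
          northbound_protocols: str
          southbound_protocols: str
          redundancy_mode: str      # "Standalone" or "Cluster"
          connection_setup: str     # "Unencrypted" or "Encrypted"
          network_topology: str     # "Mesh", "Tree" or "Linear"
          device_type: str          # "Physical", "Virtual" or "Emulated"
          num_nodes: int
          num_links: int
          dataplane_traffic_type: str
          system_configuration: str
          reference_test_setup: str # e.g., "Section 3.1"

      # Placeholder values for illustration only.
      print(json.dumps(asdict(TestReport(
          "ExampleController", "1.0", "REST", "OpenFlow 1.3", "Standalone",
          "Unencrypted", "Linear", "Emulated", 10, 9, "Ethernet/IPv4",
          "VM, 4 vCPU, 8 GB RAM", "Section 3.1")), indent=2))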
5. Benchmarking Tests

5.1. Performance

5.1.1. Network Topology Discovery Time

   Objective:

   The time taken by the controller(s) to determine the complete
   network topology, defined as the interval starting with the first
   discovery message from the controller(s) at its Southbound
   interface, ending with all features of the static topology
   determined.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. The controller MUST support network discovery.
   2. The tester should be able to retrieve the discovered topology
      information either through the controller's management interface
      or northbound interface, to determine if the discovery was
      successful and complete.
   3. Ensure that the controller's topology re-discovery timeout has
      been set to the maximum value, to avoid initiation of the re-
      discovery process in the middle of the test.

   Procedure:

   1. Ensure that the controller is operational, and that its network
      applications, northbound and southbound interfaces are up and
      running.
   2. Establish the network connections between the controller and the
      Network Devices.
   3. Record the time of the first discovery message (Tm1) received
      from the controller at the forwarding plane test emulator
      interface I1.
   4. Query the controller every 3 seconds to obtain the discovered
      network topology information through the northbound interface or
      the management interface, and compare it with the deployed
      network topology information.
   5. Stop the test when the discovered topology information matches
      the deployed network topology, or when the discovered topology
      information for 3 consecutive queries returns the same details.
   6. Record the time of the last discovery message (Tmn) sent to the
      controller from the forwarding plane test emulator interface (I1)
      when the test completes successfully (e.g., when the topology
      matches).

   Measurement:

   Topology Discovery Time Tr1 = Tmn - Tm1.

                                     Tr1 + Tr2 + Tr3 .. Trn
   Average Topology Discovery Time = -----------------------
                                      Total Test Iterations

   Reporting Format:

   The Topology Discovery Time results MUST be reported in the format
   of a table, with a row for each successful iteration. The last row
   of the table indicates the average Topology Discovery Time.

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the Number of nodes (N), and the
   Y coordinate SHOULD be the average Topology Discovery Time.

   If this test is repeated with the same number of nodes over
   different topologies, the results SHOULD be reported in the form of
   a graph. The X coordinate SHOULD be the Topology Type, and the Y
   coordinate SHOULD be the average Topology Discovery Time.
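   The polling loop of steps 4-6 and the measurement above can be
   sketched as follows (non-normative Python; get_discovered_topo is a
   hypothetical northbound query helper, and tm1 is the timestamp of
   the first discovery message captured at interface I1).

      import time

      POLL_INTERVAL = 3  # seconds, per step 4 of the procedure

      def topology_discovery_time(get_discovered_topo, deployed_topo, tm1):
          """Poll until the discovered topology matches the deployed one,
          or is identical for 3 consecutive queries, then return Tmn-Tm1.
          In practice Tmn is the I1 timestamp of the last discovery
          message, not the wall-clock time used here."""
          history = []
          while True:
              topo = get_discovered_topo()
              tmn = time.time()
              history.append(topo)
              stable = len(history) >= 3 and \
                  history[-1] == history[-2] == history[-3]
              if topo == deployed_topo or stable:
                  return tmn - tm1
              time.sleep(POLL_INTERVAL)

      def average(samples):
          return sum(samples) / len(samples)  # average over all iterations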
5.1.2. Asynchronous Message Processing Time

   Objective:

   The time taken by the controller(s) to process an asynchronous
   message, defined as the interval starting with an asynchronous
   message from a network device after the discovery of all the devices
   by the controller(s), ending with a response message from the
   controller(s) at its Southbound interface.

   Reference Test Setup:

   This test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. The controller MUST have successfully completed the network
      topology discovery for the connected Network Devices.

   Procedure:

   1. Generate asynchronous messages from every connected Network
      Device to the SDN controller, one at a time in series from the
      forwarding plane test emulator, for the test duration.
   2. Record every request transmit timestamp (T1) and the
      corresponding response receive timestamp (R1) at the forwarding
      plane test emulator interface (I1) for every successful message
      exchange.

   Measurement:

                                              (R1-T1) + (R2-T2)..(Rn-Tn)
   Asynchronous Message Processing Time Tr1 = --------------------------
                                                         Nrx

   Where Nrx is the total number of successful messages exchanged.

                                                  Tr1 + Tr2 + Tr3..Trn
   Average Asynchronous Message Processing Time = --------------------
                                                  Total Test Iterations

   Reporting Format:

   The Asynchronous Message Processing Time results MUST be reported in
   the format of a table with a row for each iteration. The last row of
   the table indicates the average Asynchronous Message Processing
   Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Successful messages exchanged (Nrx)

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the Number of nodes (N), and the
   Y coordinate SHOULD be the average Asynchronous Message Processing
   Time.

   If this test is repeated with the same number of nodes using
   different topologies, the results SHOULD be reported in the form of
   a graph. The X coordinate SHOULD be the Topology Type, and the Y
   coordinate SHOULD be the average Asynchronous Message Processing
   Time.

5.1.3. Asynchronous Message Processing Rate

   Objective:

   The maximum number of asynchronous messages (session aliveness check
   messages, new flow arrival notification messages, etc.) that the
   controller(s) can process, defined over an iterative procedure that
   starts by sending asynchronous messages to the controller(s) at the
   maximum possible rate and ends with an iteration in which the
   controller(s) processes the received asynchronous messages without
   dropping any.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. The controller MUST have successfully completed the network
      topology discovery for the connected Network Devices.

   Procedure:

   1. Generate asynchronous messages continuously at the maximum
      possible rate on the established connections from all the
      emulated/simulated Network Devices for the given Test Duration
      (Td).
   2. Record the total number of responses received from the controller
      (Nrx1) as well as the number of messages sent (Ntx1) to the
      controller within the test duration (Td).
   3. Repeat the test by generating asynchronous messages equal in
      number to the responses received from the controller in the last
      iteration, for the given test duration (Td).
   4. The test MUST be repeated until the number of generated
      asynchronous messages and the number of responses received from
      the controller are equal for two consecutive iterations.
   5. Record the number of responses received from the controller
      (Nrxn) as well as the number of messages sent (Ntxn) to the
      controller in the last test iteration.

   Measurement:

                                              Nrxn
   Asynchronous Message Processing Rate Tr1 = -----
                                               Td

                                                  Tr1 + Tr2 + Tr3..Trn
   Average Asynchronous Message Processing Rate = --------------------
                                                  Total Test Iterations

   Reporting Format:

   The Asynchronous Message Processing Rate results MUST be reported in
   the format of a table with a row for each iteration. The last row of
   the table indicates the average Asynchronous Message Processing
   Rate.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Offered rate (Ntx)

   - Loss Ratio

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the Number of nodes (N), and the
   Y coordinate SHOULD be the average Asynchronous Message Processing
   Rate.

   If this test is repeated with the same number of nodes over
   different topologies, the results SHOULD be reported in the form of
   a graph. The X coordinate SHOULD be the Topology Type, and the Y
   coordinate SHOULD be the average Asynchronous Message Processing
   Rate.
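   The iterative rate search in the 5.1.3 procedure can be sketched as
   below (non-normative Python; send_burst is a hypothetical test
   emulator hook that offers messages for Td seconds and returns the
   sent/received counts).

      def async_message_processing_rate(send_burst, td):
          """Start at the maximum offered rate, then re-offer the previous
          iteration's response count until sent == received for two
          consecutive iterations; return Nrxn / Td."""
          ntx, nrx = send_burst(None, td)     # None = maximum possible rate
          prev_equal = (ntx == nrx)
          while True:
              ntx, nrx = send_burst(nrx, td)  # offer last response count
              if ntx == nrx:
                  if prev_equal:
                      return nrx / td         # Tr = Nrxn / Td
                  prev_equal = True
              else:
                  prev_equal = False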
5.1.4. Reactive Path Provisioning Time

   Objective:

   The time taken by the controller to set up a path reactively between
   a source and a destination node, defined as the interval starting
   with the first flow provisioning request message received by the
   controller(s) at its Southbound interface, ending with the last flow
   provisioning response message sent from the controller(s) at its
   Southbound interface.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A. The
   number of Network Devices in the path is a parameter of the test
   that may be varied from 2 to the maximum discovery size in
   repetitions of this test.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge of the location of the
      destination endpoint for which the path has to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.
   3. Ensure that the default action for 'flow miss' in the Network
      Device is configured to 'send to controller'.
   4. Ensure that each Network Device in a path requires the controller
      to make the forwarding decision while provisioning the entire
      path.

   Procedure:

   1. Send a single traffic stream from the test traffic generator TP1
      to the test traffic generator TP2.
   2. Record the time of the first flow provisioning request message
      sent to the controller (Tsf1) from the Network Device at the
      forwarding plane test emulator interface (I1).
   3. Wait for the arrival of the first traffic frame at the Traffic
      Endpoint TP2 or the expiry of the test duration (Td).
   4. Record the time of the last flow provisioning response message
      received from the controller (Tdf1) to the Network Device at the
      forwarding plane test emulator interface (I1).

   Measurement:

   Reactive Path Provisioning Time Tr1 = Tdf1 - Tsf1.

                                             Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Time = -----------------------
                                              Total Test Iterations

   Reporting Format:

   The Reactive Path Provisioning Time results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Average Reactive Path Provisioning Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Number of Network Devices in the path
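   The extraction of Tsf1 and Tdf1 from timestamped control messages
   captured at interface I1 can be sketched as follows (non-normative
   Python; capture is a hypothetical iterator of tuples produced by the
   forwarding plane test emulator within the test duration).

      def reactive_path_provisioning_time(capture):
          """Return Tr = Tdf1 - Tsf1 from (timestamp, direction, msg_type)
          tuples observed at I1 while a single stream runs TP1 -> TP2."""
          tsf1 = tdf1 = None
          for ts, direction, msg_type in capture:
              if direction == 'to_controller' and msg_type == 'flow_request':
                  if tsf1 is None:
                      tsf1 = ts        # first flow provisioning request (Tsf1)
              elif direction == 'from_controller' and msg_type == 'flow_response':
                  tdf1 = ts            # keep overwriting: last response (Tdf1)
          if tsf1 is None or tdf1 is None:
              return None              # no complete exchange observed
          return tdf1 - tsf1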
5.1.5. Proactive Path Provisioning Time

   Objective:

   The time taken by the controller to set up a path proactively
   between a source and a destination node, defined as the interval
   starting with the first proactive flow provisioned in the
   controller(s) at its Northbound interface, ending with the last flow
   provisioning response message sent from the controller(s) at its
   Southbound interface.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge of the location of the
      destination endpoint for which the path has to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.
   3. Ensure that the default action for 'flow miss' in the Network
      Device is 'drop'.

   Procedure:

   1. Send a single traffic stream from test traffic generator TP1 to
      TP2.
   2. Install the flow entries to reach from test traffic generator TP1
      to the test traffic generator TP2 through the controller's
      northbound or management interface.
   3. Wait for the arrival of the first traffic frame at the test
      traffic generator TP2 or the expiry of the test duration (Td).
   4. Record the time when the proactive flow is provisioned in the
      Controller (Tsf1) at the management plane test emulator interface
      I2.
   5. Record the time of the last flow provisioning message received
      from the controller (Tdf1) at the forwarding plane test emulator
      interface I1.

   Measurement:

   Proactive Path Provisioning Time Tr1 = Tdf1 - Tsf1.

                                              Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Time = -----------------------
                                               Total Test Iterations

   Reporting Format:

   The Proactive Path Provisioning Time results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Average Proactive Path Provisioning Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Number of Network Devices in the path

5.1.6. Reactive Path Provisioning Rate

   Objective:

   The maximum number of independent paths a controller can
   concurrently establish between source and destination nodes
   reactively, defined as the number of paths provisioned by the
   controller(s) at its Southbound interface for the flow provisioning
   requests received at its Southbound interface, between the start of
   the test and the expiry of the given test duration.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge of the location of the
      destination addresses for which the paths have to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.
   3. Ensure that the default action for 'flow miss' in the Network
      Device is configured to 'send to controller'.
   4. Ensure that each Network Device in a path requires the controller
      to make the forwarding decision while provisioning the entire
      path.

   Procedure:

   1. Send traffic with unique source and destination addresses from
      test traffic generator TP1.
   2. Record the total number of unique traffic frames (Ndf) received
      at the test traffic generator TP2 within the test duration (Td).

   Measurement:

                                          Ndf
   Reactive Path Provisioning Rate Tr1 = -----
                                          Td

                                             Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Rate = ------------------------
                                              Total Test Iterations

   Reporting Format:

   The Reactive Path Provisioning Rate results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Average Reactive Path Provisioning Rate.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Number of Network Devices in the path

   - Offered rate
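   Counting the unique frames (Ndf) received at TP2 for the rate
   computation above can be sketched as follows (non-normative Python;
   received_frames is a hypothetical iterable of address pairs captured
   at the test traffic generator within Td).

      def reactive_path_provisioning_rate(received_frames, td):
          """Count unique (source, destination) address pairs received at
          TP2 within the test duration and return Ndf / Td."""
          unique = set()
          for src, dst in received_frames:
              unique.add((src, dst))
          ndf = len(unique)       # unique traffic frames received
          return ndf / td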
5.1.7. Proactive Path Provisioning Rate

   Objective:

   Measure the maximum rate of independent paths a controller can
   concurrently establish between source and destination nodes
   proactively, defined as the number of paths provisioned by the
   controller(s) at its Southbound interface for the paths requested at
   its Northbound interface, between the start of the test and the
   expiry of the given test duration. The measurement is based on
   data plane observations of successful path activation.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.

   2. The controller should have knowledge of the location of the
      destination addresses for which the paths have to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.

   3. Ensure that the default action for 'flow miss' in the Network
      Device is 'drop'.

   Procedure:

   1. Send traffic continuously with unique source and destination
      addresses from test traffic generator TP1.

   2. Install corresponding flow entries to reach from the simulated
      sources at the test traffic generator TP1 to the simulated
      destinations at test traffic generator TP2 through the
      controller's northbound or management interface.

   3. Record the total number of unique traffic frames received (Ndf)
      at the test traffic generator TP2 within the test duration (Td).

   Measurement:

                                           Ndf
   Proactive Path Provisioning Rate Tr1 = -----
                                           Td

                                              Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Rate = -----------------------
                                               Total Test Iterations

   Reporting Format:

   The Proactive Path Provisioning Rate results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Average Proactive Path Provisioning Rate.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Number of Network Devices in the path

   - Offered rate
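   Step 2 of the procedure above installs flows through the northbound
   interface. A non-normative sketch using a generic REST endpoint is
   shown below; the URL layout and payload are purely illustrative,
   since each controller (e.g., OpenDaylight) defines its own
   northbound flow-programming model.

      import requests  # third-party HTTP client

      def install_flows(nb_url, flows, auth=None):
          """Push proactive flow entries via a hypothetical northbound
          REST endpoint; count the successful responses."""
          ok = 0
          for flow in flows:
              r = requests.post(nb_url + "/flows", json=flow,
                                auth=auth, timeout=5)
              if r.status_code in (200, 201, 204):
                  ok += 1
          return ok  # pair with the TP2 unique-frame count (Ndf) for the rate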
5.1.8. Network Topology Change Detection Time

   Objective:

   The amount of time taken by the controller to detect any changes in
   the network topology, defined as the interval starting with the
   notification message received by the controller(s) at its Southbound
   interface, ending with the first topology rediscovery message sent
   from the controller(s) at its Southbound interface.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. The controller MUST have successfully discovered the network
      topology information for the deployed network topology.

   2. The periodic network discovery operation should be configured to
      twice the Test duration (Td) value.

   Procedure:

   1. Trigger a topology change event by bringing down an active
      Network Device in the topology.

   2. Record the time when the first topology change notification is
      sent to the controller (Tcn) at the forwarding plane test
      emulator interface (I1).

   3. Stop the test when the controller sends the first topology re-
      discovery message to the Network Device, or upon the expiry of
      the test interval (Td).

   4. Record the time when the first topology re-discovery message is
      received from the controller (Tcd) at the forwarding plane test
      emulator interface (I1).

   Measurement:

   Network Topology Change Detection Time Tr1 = Tcd - Tcn.

                                                    Tr1 + Tr2 + Tr3 .. Trn
   Average Network Topology Change Detection Time = ---------------------
                                                    Total Test Iterations

   Reporting Format:

   The Network Topology Change Detection Time results MUST be reported
   in the format of a table with a row for each iteration. The last
   row of the table indicates the average Network Topology Change
   Detection Time.

5.2. Scalability

5.2.1. Control Session Capacity

   Objective:

   Measure the maximum number of control sessions the controller can
   maintain, defined as the number of sessions that the controller can
   accept from network devices, starting with the first control
   session, ending with the last control session that the controller(s)
   accepts at its Southbound interface.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Procedure:

   1. Establish a control connection with the controller from every
      Network Device emulated in the forwarding plane test emulator.
   2. Stop the test when the controller starts dropping the control
      connections.
   3. Record the number of successful connections established with the
      controller (CCn) at the forwarding plane test emulator.

   Measurement:

   Control Sessions Capacity = CCn.

   Reporting Format:

   The Control Session Capacity results MUST be reported in addition to
   the configuration parameters captured in section 5.

5.2.2. Network Discovery Size

   Objective:

   Measure the network size (number of nodes, links, and hosts) that a
   controller can discover, defined as the size of a network that the
   controller(s) can discover, starting from a network topology given
   by the user for discovery, ending with the topology that the
   controller(s) could successfully discover.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. The controller MUST support automatic network discovery.
   2. The tester should be able to retrieve the discovered topology
      information either through the controller's management interface
      or northbound interface.

   Procedure:

   1. Establish the network connections between the controller and the
      network nodes.
   2. Query the controller for the discovered network topology
      information and compare it with the deployed network topology
      information.
   3a. If the comparison is successful, increase the number of nodes
       by 1 and repeat the test.
   3b. If the comparison fails, decrease the number of nodes by 1 and
       repeat the test.
   4. Continue the test until the comparison of step 3b is successful.
   5. Record the number of nodes for the last iteration (Ns) where the
      topology comparison was successful.

   Measurement:

   Network Discovery Size = Ns.

   Reporting Format:

   The Network Discovery Size results MUST be reported in addition to
   the configuration parameters captured in section 5.
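   The grow/shrink search of steps 3a-5 in the 5.2.2 procedure can be
   sketched as follows (non-normative Python; deploy_topology and
   topologies_match are hypothetical emulator and northbound helpers).

      def network_discovery_size(deploy_topology, topologies_match,
                                 start_nodes):
          """Grow the emulated network one node at a time while discovery
          succeeds; shrink on failure; return Ns, the largest node count
          for which the topology comparison succeeded."""
          n = start_nodes
          last_success = None
          while True:
              deploy_topology(n)
              if topologies_match(n):
                  last_success = n
                  n += 1                   # step 3a: comparison succeeded
              else:
                  if last_success == n - 1:
                      return last_success  # first failure just above success
                  n -= 1                   # step 3b: comparison failed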
5.2.3. Forwarding Table Capacity

   Objective:

   Measure the maximum number of flow entries a controller can manage
   in its Forwarding table.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. The controller's Forwarding table should be empty.
   2. The Flow Idle time MUST be set to a higher or infinite value.
   3. The controller MUST have successfully completed network topology
      discovery.
   4. The tester should be able to retrieve the forwarding table
      information either through the controller's management interface
      or northbound interface.

   Procedure:

   Reactive Flow Provisioning Mode:

   1. Send bi-directional traffic continuously with unique source
      and/or destination addresses from test traffic generators TP1 and
      TP2 at the asynchronous message processing rate of the
      controller.
   2. Query the controller at a regular interval (e.g., 5 seconds) for
      the number of learnt flow entries from its northbound interface.
   3. Stop the test when the retrieved value is constant for three
      consecutive iterations, and record the value received from the
      last query (Nrp).

   Proactive Flow Provisioning Mode:

   1. Install unique flows continuously through the controller's
      northbound or management interface until a failure response is
      received from the controller.
   2. Record the total number of successful responses (Nrp).

   Note:

   Some controller designs for proactive flow provisioning mode may
   require the switch to send flow setup requests in order to generate
   flow setup responses. In such cases, it is recommended to generate
   bi-directional traffic for the provisioned flows.

   Measurement:

   Proactive Flow Provisioning Mode:

   Max Flow Entries = Total number of flows provisioned (Nrp)

   Reactive Flow Provisioning Mode:

   Max Flow Entries = Total number of learnt flow entries (Nrp)

   Forwarding Table Capacity = Max Flow Entries.

   Reporting Format:

   The Forwarding Table Capacity results MUST be tabulated with the
   following information, in addition to the configuration parameters
   captured in section 5.

   - Provisioning Type (Proactive/Reactive)
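   The reactive-mode stop condition (a learnt-flow count that is
   constant for three consecutive queries) can be sketched as follows
   (non-normative Python; query_flow_count is a hypothetical northbound
   helper).

      import time

      def forwarding_table_capacity(query_flow_count, interval=5):
          """Poll the learnt-flow count until it is constant for three
          consecutive queries, then return Nrp."""
          samples = []
          while True:
              samples.append(query_flow_count())
              if len(samples) >= 3 and \
                      samples[-1] == samples[-2] == samples[-3]:
                  return samples[-1]   # Nrp, the value from the last query
              time.sleep(interval)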
5.3. Security

5.3.1. Exception Handling

   Objective:

   Determine the effect of handling error packets and notifications on
   performance tests. The impact MUST be measured for the following
   performance tests:

   a. Path Provisioning Rate

   b. Path Provisioning Time

   c. Network Topology Change Detection Time

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. This test MUST be performed after obtaining the baseline
      measurement results for the above performance tests.
   2. Ensure that the invalid messages are not dropped by the
      intermediate devices connecting the controller and the Network
      Devices.

   Procedure:

   1. Perform the above-listed performance tests, and send 1% of the
      messages from the Asynchronous Message Processing Rate as invalid
      messages from the connected Network Devices emulated at the
      forwarding plane test emulator.
   2. Perform the above-listed performance tests, and send 2% of the
      messages from the Asynchronous Message Processing Rate as invalid
      messages from the connected Network Devices emulated at the
      forwarding plane test emulator.

   Note:

   Invalid messages can be frames with incorrect protocol fields or any
   form of failure notifications sent towards the controller.

   Measurement:

   Measurement MUST be done as per the equation defined in the
   measurement section of the corresponding performance test.

   Reporting Format:

   The Exception Handling results MUST be reported in the format of a
   table with a column for each of the parameters below and a row for
   each of the listed performance tests.

   - Without Exceptions

   - With 1% Exceptions

   - With 2% Exceptions
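   The 1% or 2% invalid-message mix prescribed by the procedure above
   can be generated as below (non-normative Python; make_invalid is a
   hypothetical helper that corrupts a protocol field in an otherwise
   valid message).

      import random

      def message_stream(valid_messages, make_invalid, percent_invalid):
          """Interleave a fixed share of invalid messages into the offered
          load (1% or 2% of the Asynchronous Message Processing Rate)."""
          for msg in valid_messages:
              if random.random() < percent_invalid / 100.0:
                  yield make_invalid(msg)   # e.g., corrupt a protocol field
              else:
                  yield msg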
5.3.2. Denial of Service Handling

   Objective:

   Determine the effect of handling DoS attacks on performance and
   scalability tests. The impact MUST be measured for the following
   tests:

   a. Path Provisioning Rate

   b. Path Provisioning Time

   c. Network Topology Change Detection Time

   d. Network Discovery Size

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   This test MUST be performed after obtaining the baseline measurement
   results for the above tests.

   Procedure:

   1. Perform the listed tests and launch a DoS attack towards the
      controller while the test is running.

   Note:

   DoS attacks can be launched on one of the following interfaces.

   a. Northbound (e.g., sending a huge number of requests on the
      northbound interface)
   b. Management (e.g., ping requests to the controller's management
      interface)
   c. Southbound (e.g., TCP SYN messages on the southbound interface)

   Measurement:

   Measurement MUST be done as per the equation defined in the
   measurement section of the corresponding test.

   Reporting Format:

   The DoS Attacks Handling results MUST be reported in the format of a
   table with a column for each of the parameters below and a row for
   each of the listed tests.

   - Without any attacks

   - With attacks

   The report should also specify the nature of the attack and the
   interface.

5.4. Reliability

5.4.1. Controller Failover Time

   Objective:

   The time taken to switch from an active controller to the backup
   controller when the controllers work in redundancy mode and the
   active controller fails, defined as the interval starting with the
   active controller being brought down, ending with the first re-
   discovery message received from the new controller at its Southbound
   interface.

   Reference Test Setup:

   The test SHOULD use the test setup described in section 3.2 of this
   document in combination with Appendix A.

   Prerequisite:

   1. Master controller election MUST be completed.
   2. Nodes are connected to the controller cluster as per the
      Redundancy Mode (RM).
   3. The controller cluster should have successfully completed the
      network topology discovery.
   4. The Network Device MUST send all new flows to the controller when
      it receives traffic from the test traffic generator.
   5. The controller should have learnt the location of the destination
      (D1) at test traffic generator TP2.

   Procedure:

   1. Send uni-directional traffic continuously with incremental
      sequence numbers and source addresses from test traffic generator
      TP1 at the rate that the controller can process without any
      drops.
   2. Ensure that there are no packet drops observed at the test
      traffic generator TP2.
   3. Bring down the active controller.
   4. Stop the test when the first frame is received on TP2 after the
      failover operation.
   5. Record the time at which the last valid frame was received (T1)
      at test traffic generator TP2 before the sequence error, and the
      time at which the first valid frame was received (T2) after the
      sequence error at TP2.

   Measurement:

   Controller Failover Time = (T2 - T1)

   Packet Loss = Number of missing packet sequences.

   Reporting Format:

   The Controller Failover Time results MUST be tabulated with the
   following information.

   - Number of cluster nodes

   - Redundancy mode

   - Controller Failover Time

   - Packet Loss

   - Cluster keep-alive interval

5.4.2. Network Re-Provisioning Time

   Objective:

   The time taken by the Controller to re-route the traffic when there
   is a failure in the existing traffic paths, defined as the interval
   starting from the first failure notification message received by the
   controller, ending with the last flow re-provisioning message sent
   by the controller at its Southbound interface.

   Reference Test Setup:

   This test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

   Prerequisite:

   1. A network with the given number of nodes and redundant paths MUST
      be deployed.
   2. The controller MUST have knowledge of the location of the test
      traffic generators TP1 and TP2.
   3. Ensure that the controller does not pre-provision the alternate
      path in the emulated Network Devices at the forwarding plane test
      emulator.

   Procedure:

   1. Send bi-directional traffic continuously with unique sequence
      numbers from TP1 and TP2.
   2. Bring down a link or switch in the traffic path.
   3. Stop the test after receiving the first frame after network re-
      convergence.
   4. Record the time of the last received frame prior to the frame
      loss at TP2 (TP2-Tlfr) and the time of the first frame received
      after the frame loss at TP2 (TP2-Tffr). There must be a gap in
      the sequence numbers of these frames.
   5. Record the time of the last received frame prior to the frame
      loss at TP1 (TP1-Tlfr) and the time of the first frame received
      after the frame loss at TP1 (TP1-Tffr).

   Measurement:

   Forward Direction Path Re-Provisioning Time (FDRT)
                                             = (TP2-Tffr - TP2-Tlfr)

   Reverse Direction Path Re-Provisioning Time (RDRT)
                                             = (TP1-Tffr - TP1-Tlfr)

   Network Re-Provisioning Time = (FDRT + RDRT)/2

   Forward Direction Packet Loss = Number of missing sequence frames
                                   at TP2

   Reverse Direction Packet Loss = Number of missing sequence frames
                                   at TP1

   Reporting Format:

   The Network Re-Provisioning Time results MUST be tabulated with the
   following information.

   - Number of nodes in the primary path

   - Number of nodes in the alternate path

   - Network Re-Provisioning Time

   - Forward Direction Packet Loss

   - Reverse Direction Packet Loss
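   Both reliability measurements derive a time gap and a loss count
   from sequence-numbered frames received at a traffic endpoint; a
   non-normative Python sketch of that extraction follows.

      def gap_around_sequence_error(frames):
          """Given (timestamp, seq) tuples received at a traffic endpoint,
          find the last in-order frame before the first sequence gap (T1)
          and the first frame after it (T2). T2 - T1 is the failover or
          re-provisioning time; the missing sequence numbers are the
          packet loss."""
          prev_ts, prev_seq = frames[0]
          for ts, seq in frames[1:]:
              if seq != prev_seq + 1:          # sequence error detected
                  loss = seq - prev_seq - 1
                  return ts - prev_ts, loss    # (T2 - T1, missing frames)
              prev_ts, prev_seq = ts, seq
          return 0.0, 0                        # no gap observed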
6. References

6.1. Normative References

   [RFC2544]  S. Bradner, J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [RFC2330]  V. Paxson, G. Almes, J. Mahdavi, M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC6241]  R. Enns, M. Bjorklund, J. Schoenwaelder, A. Bierman,
              "Network Configuration Protocol (NETCONF)", RFC 6241,
              July 2011.

   [RFC6020]  M. Bjorklund, "YANG - A Data Modeling Language for
              the Network Configuration Protocol (NETCONF)", RFC 6020,
              October 2010.

   [RFC5440]  JP. Vasseur, JL. Le Roux, "Path Computation Element (PCE)
              Communication Protocol (PCEP)", RFC 5440, March 2009.

   [OpenFlow Switch Specification] ONF, "OpenFlow Switch Specification"
              Version 1.4.0 (Wire Protocol 0x05), October 14, 2013.

   [I-D.sdn-controller-benchmark-term] Bhuvaneswaran.V, Anton Basil,
              Mark.T, Vishwas Manral, Sarah Banks, "Terminology for
              Benchmarking SDN Controller Performance",
              draft-ietf-bmwg-sdn-controller-benchmark-term-04
              (Work in progress), June 28, 2017.

6.2. Informative References

   [I-D.i2rs-architecture] A. Atlas, J. Halpern, S. Hares, D. Ward,
              T. Nadeau, "An Architecture for the Interface to the
              Routing System", draft-ietf-i2rs-architecture-09
              (Work in progress), March 6, 2015.

   [OpenContrail] Ankur Singla, Bruno Rijsman, "OpenContrail
              Architecture Documentation",
              http://opencontrail.org/opencontrail-architecture-documentation

   [OpenDaylight] OpenDaylight Controller: Architectural Framework,
              https://wiki.opendaylight.org/view/OpenDaylight_Controller

7. IANA Considerations

   This document does not have any IANA requests.

8. Security Considerations

   The benchmarking tests described in this document are limited to the
   performance characterization of a controller in a lab environment
   with an isolated network.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the controller.

   Special capabilities SHOULD NOT exist in the controller specifically
   for benchmarking purposes. Any implications for network security
   arising from the controller SHOULD be identical in the lab and in
   production networks.

9. Acknowledgments

   The authors would like to thank the following individuals for
   providing their valuable comments on the earlier versions of this
   document: Al Morton (AT&T), Sandeep Gangadharan (HP), M. Georgescu
   (NAIST), Andrew McGregor (Google), Scott Bradner (Harvard
   University), Jay Karthik (Cisco), Ramakrishnan (Dell), Khasanov
   Boris (Huawei), Brian Castelli (Spirent).

   This document was prepared using 2-Word-v2.0.template.dot.

Appendix A. Example Test Topologies

A.1. Leaf-Spine Topology - Three Tier Network Architecture

                          +----------+
                          |   SDN    |
                          |   Node   |  (Core)
                          +----------+
                            /      \
                           /        \
                    +------+        +------+
                    | SDN  |        | SDN  |  (Spine)
                    | Node |..      | Node |
                    +------+        +------+
                     /    \          /    \
                    /      \        /      \
                l1 /        \      /        \ ln-1
                  /          \    /          \
             +--------+        +-------+
             |  SDN   |        |  SDN  |
             |  Node  |..      |  Node |  (Leaf)
             +--------+        +-------+

A.2. Leaf-Spine Topology - Two Tier Network Architecture

                    +------+        +------+
                    | SDN  |        | SDN  |  (Spine)
                    | Node |..      | Node |
                    +------+        +------+
                     /    \          /    \
                    /      \        /      \
                l1 /        \      /        \ ln-1
                  /          \    /          \
             +--------+        +-------+
             |  SDN   |        |  SDN  |
             |  Node  |..      |  Node |  (Leaf)
             +--------+        +-------+
Appendix B. Benchmarking Methodology using OpenFlow Controllers

   This section gives an overview of the OpenFlow protocol and provides
   a test methodology to benchmark SDN controllers supporting the
   OpenFlow southbound protocol.

B.1. Protocol Overview

   OpenFlow is an open standard protocol defined by the Open Networking
   Foundation (ONF), used for programming the forwarding plane of
   network switches or routers via a centralized controller.

B.2. Messages Overview

   The OpenFlow protocol supports three message types, namely
   controller-to-switch, asynchronous, and symmetric.

   Controller-to-switch messages are initiated by the controller and
   used to directly manage or inspect the state of the switch. These
   messages allow controllers to query/configure the switch (Features,
   Configuration messages), collect information from the switch (Read-
   State message), send packets on a specified port of the switch
   (Packet-out message), and modify the switch forwarding plane and
   state (Modify-State, Role-Request messages, etc.).

   Asynchronous messages are generated by the switch without a
   controller soliciting them. These messages allow switches to update
   controllers to denote the arrival of a new flow (Packet-in), a
   switch state change (Flow-Removed, Port-status), and an error
   (Error).

   Symmetric messages are generated in either direction without
   solicitation. These messages allow switches and controllers to set
   up a connection (Hello), verify liveness (Echo), and offer
   additional functionalities (Experimenter).

B.3. Connection Overview

   The OpenFlow channel is used to exchange OpenFlow messages between
   an OpenFlow switch and an OpenFlow controller. The OpenFlow channel
   connection can be set up using plain TCP or TLS. By default, a
   switch establishes a single connection with the SDN controller. A
   switch may establish multiple parallel connections to a single
   controller (auxiliary connections) or to multiple controllers to
   handle controller failures and load balancing.
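   As a non-normative illustration of the channel setup described
   above, the Python sketch below opens a plain-TCP OpenFlow channel
   from an emulated switch and exchanges OFPT_HELLO, using the standard
   8-byte OpenFlow header (version, type, length, xid). TLS setup and
   version negotiation are omitted, and the single 8-byte read assumes
   the whole header arrives at once.

      import socket, struct

      OFP_VERSION_1_3 = 0x04
      OFPT_HELLO = 0

      def openflow_hello(host, port=6653):
          """Connect to the controller and exchange OFPT_HELLO messages."""
          s = socket.create_connection((host, port))
          # OpenFlow header: version (u8), type (u8), length (u16), xid (u32)
          s.sendall(struct.pack('!BBHI', OFP_VERSION_1_3, OFPT_HELLO, 8, 1))
          version, mtype, length, xid = struct.unpack('!BBHI', s.recv(8))
          assert mtype == OFPT_HELLO, "expected OFPT_HELLO from the peer"
          return s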
B.4.2. Asynchronous Message Processing Time

   Procedure:

   Network Devices              OpenFlow              SDN
                                Controller            Application
        |                           |                      |
        |PACKET_IN with single      |                      |
        |OFP match header           |                      |
    (T0)|-------------------------->|                      |
        |                           |                      |
        |  PACKET_OUT with single   |                      |
        |  OFP action header        |                      |
    (R0)|<--------------------------|                      |
        |             .             |                      |
        |             .             |                      |
        |             .             |                      |
        |                           |                      |
        |PACKET_IN with single      |                      |
        |OFP match header           |                      |
    (Tn)|-------------------------->|                      |
        |                           |                      |
        |  PACKET_OUT with single   |                      |
        |  OFP action header        |                      |
    (Rn)|<--------------------------|                      |
        |                           |                      |
        |                           |                      |

   Legend:

      T0,T1,..Tn: Transmit timestamps of the PACKET_IN messages
      R0,R1,..Rn: Receive timestamps of the PACKET_OUT messages
      Nrx: Number of successful PACKET_IN/PACKET_OUT message
           exchanges

   Discussion:

   The Asynchronous Message Processing Time is obtained as the
   average latency over the successful exchanges:

      ((R0-T0) + (R1-T1) + .. + (Rn-Tn)) / Nrx

B.4.3. Asynchronous Message Processing Rate

   Procedure:

   Network Devices              OpenFlow              SDN
                                Controller            Application
        |                           |                      |
        |PACKET_IN with multiple OFP|                      |
        |match headers              |                      |
        |-------------------------->|                      |
        |                           |                      |
        | PACKET_OUT with multiple  |                      |
        |   OFP action headers      |                      |
        |<--------------------------|                      |
        |                           |                      |
        |PACKET_IN with multiple OFP|                      |
        |match headers              |                      |
        |-------------------------->|                      |
        |                           |                      |
        | PACKET_OUT with multiple  |                      |
        |   OFP action headers      |                      |
        |<--------------------------|                      |
        |             .             |                      |
        |             .             |                      |
        |             .             |                      |
        |                           |                      |
        |PACKET_IN with multiple OFP|                      |
        |match headers              |                      |
        |-------------------------->|                      |
        |                           |                      |
        | PACKET_OUT with multiple  |                      |
        |   OFP action headers      |                      |
        |<--------------------------|                      |
        |                           |                      |
        |                           |                      |

   Discussion:

   The Asynchronous Message Processing Rate is obtained by counting
   the number of OFP action headers received in all PACKET_OUT
   messages during the test duration and dividing that count by the
   test duration.

B.4.4. Reactive Path Provisioning Time

   Procedure:

   Test Traffic   Test Traffic    Network Devices       OpenFlow
   Generator TP1  Generator TP2                         Controller
        |              |                 |                   |
        |              |G-ARP (D1)       |                   |
        |              |---------------->|                   |
        |              |                 |                   |
        |              |                 |PACKET_IN(D1)      |
        |              |                 |------------------>|
        |              |                 |                   |
        |Traffic (S1,D1)                 |                   |
  (Tsf1)|------------------------------->|                   |
        |              |                 |                   |
        |              |                 |PACKET_IN(S1,D1)   |
        |              |                 |------------------>|
        |              |                 |                   |
        |              |                 |   FLOW_MOD(D1)    |
        |              |                 |<------------------|
        |              |                 |                   |
        |              |Traffic (S1,D1)  |                   |
        |        (Tdf1)|<----------------|                   |
        |              |                 |                   |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of the first frame sent from TP1
      Tdf1: Time of the first frame received at TP2

   Discussion:

   The Reactive Path Provisioning Time can be obtained by finding the
   time difference between the transmit and receive times of the
   traffic (Tdf1-Tsf1).
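   The two asynchronous-message metrics above reduce to simple
   arithmetic once the timestamps and message counts are available.
   The following minimal Python sketch shows that arithmetic; the
   capture layer that produces the inputs is assumed.

      # Sketch: metrics per B.4.2 and B.4.3.
      def processing_time(tx, rx):
          # Mean of (Ri - Ti) over the Nrx successful exchanges.
          assert len(tx) == len(rx) and rx
          return sum(r - t for t, r in zip(tx, rx)) / len(rx)

      def processing_rate(num_action_headers, duration_s):
          # OFP action headers received in PACKET_OUT messages,
          # per second of test duration.
          return num_action_headers / duration_s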
B.4.5. Proactive Path Provisioning Time

   Procedure:

   Test Traffic   Test Traffic  Network Devices  OpenFlow    SDN
   Generator TP1  Generator TP2                  Controller  Application
        |              |              |              |            |
        |              |G-ARP (D1)    |              |            |
        |              |------------->|              |            |
        |              |              |              |            |
        |              |              |PACKET_IN(D1) |            |
        |              |              |------------->|            |
        |              |              |              |            |
        |Traffic (S1,D1)              |              |            |
  (Tsf1)|---------------------------->|              |            |
        |              |              |              |            |
        |              |              |              |<Install flow
        |              |              |              | for S1,D1> |
        |              |              |              |            |
        |              |              | FLOW_MOD(D1) |            |
        |              |              |<-------------|            |
        |              |              |              |            |
        |              |Traffic (S1,D1)              |            |
        |        (Tdf1)|<-------------|              |            |
        |              |              |              |            |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of the first frame sent from TP1
      Tdf1: Time of the first frame received at TP2

   Discussion:

   The Proactive Path Provisioning Time can be obtained by finding
   the time difference between the transmit and receive times of the
   traffic (Tdf1-Tsf1).

B.4.6. Reactive Path Provisioning Rate

   Procedure:

   Test Traffic   Test Traffic    Network Devices       OpenFlow
   Generator TP1  Generator TP2                         Controller
        |              |                 |                   |
        |              |G-ARP (D1..Dn)   |                   |
        |              |---------------->|                   |
        |              |                 |                   |
        |              |                 |PACKET_IN(D1..Dn)  |
        |              |                 |------------------>|
        |              |                 |                   |
        |Traffic (S1..Sn,D1..Dn)         |                   |
        |------------------------------->|                   |
        |              |                 |                   |
        |              |                 |PACKET_IN(S1..Sn,  |
        |              |                 |          D1..Dn)  |
        |              |                 |------------------>|
        |              |                 |                   |
        |              |                 |    FLOW_MOD(S1)   |
        |              |                 |<------------------|
        |              |                 |                   |
        |              |                 |    FLOW_MOD(D1)   |
        |              |                 |<------------------|
        |              |                 |                   |
        |              |                 |    FLOW_MOD(S2)   |
        |              |                 |<------------------|
        |              |                 |                   |
        |              |                 |    FLOW_MOD(D2)   |
        |              |                 |<------------------|
        |              |                 |         .         |
        |              |                 |         .         |
        |              |                 |                   |
        |              |                 |    FLOW_MOD(Sn)   |
        |              |                 |<------------------|
        |              |                 |                   |
        |              |                 |    FLOW_MOD(Dn)   |
        |              |                 |<------------------|
        |              |                 |                   |
        |              |Traffic (S1..Sn, |                   |
        |              |         D1..Dn) |                   |
        |              |<----------------|                   |
        |              |                 |                   |

   Legend:

      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ..
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 ..
              Source Endpoint n

   Discussion:

   The Reactive Path Provisioning Rate can be obtained by counting
   the total number of frames received at TP2 over the test duration
   and dividing that count by the test duration.
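   The path provisioning metrics of B.4.4 through B.4.7 share the
   same arithmetic, shown in the minimal Python sketch below. The
   traffic-generator interface that supplies the inputs is assumed,
   and the names are illustrative.

      # Sketch: provisioning time and rate per B.4.4-B.4.7.
      def provisioning_time(tsf1, tdf1):
          # Tdf1 - Tsf1: first frame received at TP2 minus first
          # frame sent from TP1.
          return tdf1 - tsf1

      def provisioning_rate(frames_rx_tp2, duration_s):
          # Frames successfully delivered at TP2, per second of
          # test duration.
          return frames_rx_tp2 / duration_s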
B.4.7. Proactive Path Provisioning Rate

   Procedure:

   Test Traffic   Test Traffic  Network Devices  OpenFlow    SDN
   Generator TP1  Generator TP2                  Controller  Application
        |              |              |              |            |
        |              |G-ARP (D1..Dn)|              |            |
        |              |------------->|              |            |
        |              |              |              |            |
        |              |              |PACKET_IN(D1..Dn)          |
        |              |              |------------->|            |
        |              |              |              |            |
        |Traffic (S1..Sn,D1..Dn)      |              |            |
  (Tsf1)|---------------------------->|              |            |
        |              |              |              |            |
        |              |              |              |<Install flow
        |              |              |              | for S1..Sn,|
        |              |              |              |   D1..Dn>  |
        |              |              |              |            |
        |              |              | FLOW_MOD(S1) |            |
        |              |              |<-------------|            |
        |              |              |              |            |
        |              |              | FLOW_MOD(D1) |            |
        |              |              |<-------------|            |
        |              |              |      .       |            |
        |              |              | FLOW_MOD(Sn) |            |
        |              |              |<-------------|            |
        |              |              |              |            |
        |              |              | FLOW_MOD(Dn) |            |
        |              |              |<-------------|            |
        |              |              |              |            |
        |              |Traffic (S1..Sn,             |            |
        |              |       D1..Dn)|              |            |
        |        (Tdf1)|<-------------|              |            |
        |              |              |              |            |

   Legend:

      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ..
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 ..
              Source Endpoint n

   Discussion:

   The Proactive Path Provisioning Rate can be obtained by counting
   the total number of frames received at TP2 over the test duration
   and dividing that count by the test duration.

B.4.8. Network Topology Change Detection Time

   Procedure:

   Network Devices              OpenFlow              SDN
                                Controller            Application
        |                           |                      |
        |                           |                      |
        |                           |                      |
     T0 |PORT_STATUS with link down |                      |
        | from S1                   |                      |
        |-------------------------->|                      |
        |                           |                      |
        |First PACKET_OUT with LLDP |                      |
        |to OF Switch               |                      |
     T1 |<--------------------------|                      |
        |                           |                      |

   Discussion:

   The Network Topology Change Detection Time can be obtained by
   finding the difference between the time the OpenFlow switch S1
   sends the PORT_STATUS message (T0) and the time the OpenFlow
   controller sends the first topology re-discovery message (T1) to
   the OpenFlow switches.

B.5. Scalability

B.5.1. Control Sessions Capacity

   Procedure:

   Network Devices                          OpenFlow
                                            Controller
        |                                       |
        |   OFPT_HELLO Exchange for Switch 1    |
        |<------------------------------------->|
        |                                       |
        |   OFPT_HELLO Exchange for Switch 2    |
        |<------------------------------------->|
        |                   .                   |
        |                   .                   |
        |                   .                   |
        |   OFPT_HELLO Exchange for Switch n    |
        |X<----------------------------------->X|
        |                                       |

   Discussion:

   If the session attempt for switch n is the first to fail, the
   number of sessions successfully established (switches 1 through
   n-1) provides the Control Sessions Capacity.

B.5.2. Network Discovery Size

   Procedure:

   Network Devices              OpenFlow              SDN
                                Controller            Application
        |                           |                      |
        |                           |                      |
        | OFPT_HELLO Exchange       |                      |
        |<------------------------->|                      |
        |                           |                      |
        |     PACKET_OUT with LLDP  |                      |
        |     to all switches       |                      |
        |<--------------------------|                      |
        |                           |                      |
        |     PACKET_IN with LLDP   |                      |
        |     rcvd from switch-1    |                      |
        |-------------------------->|                      |
        |                           |                      |
        |     PACKET_IN with LLDP   |                      |
        |     rcvd from switch-2    |                      |
        |-------------------------->|                      |
        |             .             |                      |
        |             .             |                      |
        |                           |                      |
        |     PACKET_IN with LLDP   |                      |
        |     rcvd from switch-n    |                      |
        |-------------------------->|                      |
        |                           |                      |
        |                           | Query the controller for
        |                           | discovered n/w topo.(N1)
        |                           |<---------------------|
        |                           |                      |
        |                           |                      |

   Legend:

      n/w topo: Network Topology
      OF: OpenFlow

   Discussion:

   The value of N1 provides the Network Discovery Size value. The
   test duration can be set to the stipulated time within which the
   user expects the controller to complete the discovery process.

B.5.3. Forwarding Table Capacity

   Procedure:

   Test Traffic    Network Devices     OpenFlow          SDN
   Generator TP1                       Controller        Application
        |                |                 |                  |
        |                |                 |                  |
        |G-ARP (H1..Hn)  |                 |                  |
        |--------------->|                 |                  |
        |                |                 |                  |
        |                |PACKET_IN(D1..Dn)|                  |
        |                |---------------->|                  |
        |                |                 |                  |
        |                |                 |<Wait for 5 secs> |
        |                |                 |                  |
        |                |                 |<Query FWD table> |(F1)
        |                |                 |                  |
        |                |                 |<Wait for 5 secs> |
        |                |                 |                  |
        |                |                 |<Query FWD table> |(F2)
        |                |                 |                  |
        |                |                 |<Wait for 5 secs> |
        |                |                 |                  |
        |                |                 |<Query FWD table> |(F3)
        |                |                 |                  |
        |                |                 |                  |

   Legend:

      G-ARP: Gratuitous ARP
      H1..Hn: Host 1 .. Host n
      FWD: Forwarding Table

   Discussion:

   Query the controller forwarding table entries multiple times until
   three consecutive queries return the same value. The last value
   retrieved from the controller provides the Forwarding Table
   Capacity value. The query interval is user configurable; the 5
   seconds shown in this example is for representational purposes.
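   The stop condition above (three consecutive identical query
   results) can be automated as in the following minimal Python
   sketch. The northbound query helper query_fwd_entry_count() is a
   hypothetical name standing in for whatever interface the
   controller exposes.

      # Sketch: Forwarding Table Capacity polling per B.5.3.
      import time

      def forwarding_table_capacity(query_fwd_entry_count,
                                    interval_s=5.0):
          history = []
          while True:
              history.append(query_fwd_entry_count())
              if len(history) >= 3 and len(set(history[-3:])) == 1:
                  # Last (stable) value retrieved is the capacity.
                  return history[-1]
              time.sleep(interval_s)  # query interval is configurable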
B.6. Security

B.6.1. Exception Handling

   Procedure:

   Test Traffic   Test Traffic   Network Devices  OpenFlow   SDN
   Generator TP1  Generator TP2                   Controller Application
        |              |               |               |          |
        |              |G-ARP (D1..Dn) |               |          |
        |              |-------------->|               |          |
        |              |               |               |          |
        |              |               |PACKET_IN(D1..Dn)         |
        |              |               |-------------->|          |
        |              |               |               |          |
        |Traffic (S1..Sn,D1..Dn)       |               |          |
        |----------------------------->|               |          |
        |              |               |               |          |
        |              |               |PACKET_IN(S1..Sa,         |
        |              |               |        D1..Da)|          |
        |              |               |-------------->|          |
        |              |               |               |          |
        |              |               |PACKET_IN(Sa+1..Sn,       |
        |              |               |     Da+1..Dn) |          |
        |              |               |(1% incorrect OFP         |
        |              |               | Match header) |          |
        |              |               |-------------->|          |
        |              |               |               |          |
        |              |               |FLOW_MOD(D1..Dn)          |
        |              |               |<--------------|          |
        |              |               |               |          |
        |              |               |FLOW_MOD(S1..Sa)          |
        |              |               |   OFP headers |          |
        |              |               |<--------------|          |
        |              |               |               |          |
        |              |Traffic (S1..Sa,               |          |
        |              |        D1..Da)|               |          |
        |              |<--------------|               |          |
        |              |               |               |          |
        |  <Count the frames received at TP2 (Rn1)>    |          |
        |              |               |               |          |
        |  <Repeat the test with 2% incorrect frames   |          |
        |   and count the frames received at TP2 (Rn2)>|          |
        |              |               |               |          |

   Legend:

      G-ARP: Gratuitous ARP
      PACKET_IN(Sa+1..Sn,Da+1..Dn): OpenFlow PACKET_IN with wrong
              version number
      Rn1: Total number of frames received at Test Port 2 with
           1% incorrect frames
      Rn2: Total number of frames received at Test Port 2 with
           2% incorrect frames

   Discussion:

   The traffic rate sent towards the OpenFlow switch from Test Port 1
   should be 1% higher than the Path Programming Rate. Rn1 provides
   the Path Provisioning Rate of the controller when handling 1%
   incorrect frames, and Rn2 provides the Path Provisioning Rate of
   the controller when handling 2% incorrect frames.

   The procedure defined above provides the test steps to determine
   the effect of handling error packets on the Path Programming Rate.
   The same procedure can be adopted to determine the effect on the
   other performance tests listed in this document.

B.6.2. Denial of Service Handling

   Procedure:

   Test Traffic   Test Traffic   Network Devices  OpenFlow   SDN
   Generator TP1  Generator TP2                   Controller Application
        |              |               |               |          |
        |              |G-ARP (D1..Dn) |               |          |
        |              |-------------->|               |          |
        |              |               |               |          |
        |              |               |PACKET_IN(D1..Dn)         |
        |              |               |-------------->|          |
        |              |               |               |          |
        |Traffic (S1..Sn,D1..Dn)       |               |          |
        |----------------------------->|               |          |
        |              |               |               |          |
        |              |               |PACKET_IN(S1..Sn,         |
        |              |               |        D1..Dn)|          |
        |              |               |-------------->|          |
        |              |               |               |          |
        |              |               |TCP SYN attack |          |
        |              |               |from a switch  |          |
        |              |               |-------------->|          |
        |              |               |               |          |
        |              |               |FLOW_MOD(D1..Dn)          |
        |              |               |<--------------|          |
        |              |               |               |          |
        |              |               |FLOW_MOD(S1..Sn)          |
        |              |               |   OFP headers |          |
        |              |               |<--------------|          |
        |              |               |               |          |
        |              |Traffic (S1..Sn,               |          |
        |              |       D1..Dn) |               |          |
        |              |<--------------|               |          |
        |              |               |               |          |
        |  <Count the frames received at TP2 (Rn1)>    |          |
        |              |               |               |          |

   Legend:

      G-ARP: Gratuitous ARP

   Discussion:

   The TCP SYN attack should be launched from one of the
   emulated/simulated OpenFlow switches. Rn1 provides the Path
   Programming Rate of the controller upon handling the denial of
   service attack.

   The procedure defined above provides the test steps to determine
   the effect of handling denial of service on the Path Programming
   Rate. The same procedure can be adopted to determine the effect on
   the other performance tests listed in this document.
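   Reporting for these security tests typically compares Rn1 and Rn2
   against a clean-run baseline. The minimal Python sketch below
   shows one such comparison; all counts and durations are
   illustrative values, not measured results.

      # Sketch: degradation of Path Provisioning Rate under
      # malformed-message load (B.6.1). Inputs are illustrative.
      def rate(frames_rx_tp2, duration_s):
          return frames_rx_tp2 / duration_s

      baseline = rate(100_000, 10.0)  # clean run
      rn1 = rate(97_500, 10.0)        # 1% incorrect frames
      rn2 = rate(94_000, 10.0)        # 2% incorrect frames
      for label, r in (("1%", rn1), ("2%", rn2)):
          drop = 100 * (baseline - r) / baseline
          print(f"{label} errors: {r:.0f} fps ({drop:.1f}% drop)")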
B.7. Reliability

B.7.1. Controller Failover Time

   Procedure:

   Test Traffic   Test Traffic  Network Device   OpenFlow    SDN
   Generator TP1  Generator TP2                  Controller  Application
        |              |              |               |           |
        |              |G-ARP (D1)    |               |           |
        |              |------------->|               |           |
        |              |              |               |           |
        |              |              |PACKET_IN(D1)  |           |
        |              |              |-------------->|           |
        |              |              |               |           |
        |Traffic (S1..Sn,D1)          |               |           |
        |---------------------------->|               |           |
        |              |              |               |           |
        |              |              |PACKET_IN(S1,D1)           |
        |              |              |-------------->|           |
        |              |              |               |           |
        |              |              |FLOW_MOD(D1)   |           |
        |              |              |<--------------|           |
        |              |              |FLOW_MOD(S1)   |           |
        |              |              |<--------------|           |
        |              |              |               |           |
        |              |Traffic (S1,D1)               |           |
        |              |<-------------|               |           |
        |              |              |               |           |
        |              |              |PACKET_IN(S2,D1)           |
        |              |              |-------------->|           |
        |              |              |               |           |
        |              |              |FLOW_MOD(S2)   |           |
        |              |              |<--------------|           |
        |              |              |               |           |
        |              |              |PACKET_IN(Sn-1,D1)         |
        |              |              |-------------->|           |
        |              |              |               |           |
        |              |              |PACKET_IN(Sn,D1)           |
        |              |              |-------------->|           |
        |              |              |       .       |           |
        |              |              |       .       |           |
        |              |  <Bring down the active controller>      |
        |              |              |               |           |
        |              |              |FLOW_MOD(Sn-1) |           |
        |              |              |    <-X--------|           |
        |              |              |               |           |
        |              |              |FLOW_MOD(Sn)   |           |
        |              |              |<--------------|           |
        |              |              |               |           |
        |              |Traffic (Sn,D1)               |           |
        |              |<-------------|               |           |
        |              |              |               |           |

   Legend:

      G-ARP: Gratuitous ARP.

   Discussion:

   The time difference between the last valid frame received before
   the traffic loss and the first frame received after the traffic
   loss provides the controller failover time.

   If there is no frame loss during the controller failover, the
   failover time can be deemed negligible.
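   The failover time, like the re-provisioning time in the next test,
   can be derived from the receive timestamps at TP2 as the largest
   gap in the frame timeline. A minimal Python sketch follows; it
   assumes steady-rate test traffic, so the only large gap is the
   loss interval.

      # Sketch: failover / re-provisioning time per B.7.1 and B.7.2.
      def failover_time(rx_timestamps):
          ts = sorted(rx_timestamps)
          gaps = [b - a for a, b in zip(ts, ts[1:])]
          # First frame after the loss minus last frame before it;
          # deemed negligible when no loss is observed.
          return max(gaps) if gaps else 0.0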
B.7.2. Network Re-Provisioning Time

   Procedure:

   Test Traffic   Test Traffic   Network Devices  OpenFlow   SDN
   Generator TP1  Generator TP2                   Controller Application
        |              |               |               |          |
        |              |G-ARP (D1)     |               |          |
        |              |-------------->|               |          |
        |              |               |               |          |
        |              |               |PACKET_IN(D1)  |          |
        |              |               |-------------->|          |
        |   G-ARP (S1) |               |               |          |
        |----------------------------->|               |          |
        |              |               |               |          |
        |              |               |PACKET_IN(S1)  |          |
        |              |               |-------------->|          |
        |              |               |               |          |
        |Traffic (S1,D1,Seq.no (1..n)) |               |          |
        |----------------------------->|               |          |
        |              |               |               |          |
        |              |               |PACKET_IN(S1,D1)          |
        |              |               |-------------->|          |
        |              |               |               |          |
        |              |Traffic (D1,S1,|               |          |
        |              | Seq.no (1..n))|               |          |
        |              |-------------->|               |          |
        |              |               |               |          |
        |              |               |PACKET_IN(D1,S1)          |
        |              |               |-------------->|          |
        |              |               |               |          |
        |              |               |FLOW_MOD(D1)   |          |
        |              |               |<--------------|          |
        |              |               |               |          |
        |              |               |FLOW_MOD(S1)   |          |
        |              |               |<--------------|          |
        |              |               |               |          |
        |              |Traffic (S1,D1,|               |          |
        |              |     Seq.no(1))|               |          |
        |              |<--------------|               |          |
        |              |               |               |          |
        |              |Traffic (S1,D1,|               |          |
        |              |     Seq.no(2))|               |          |
        |              |<--------------|               |          |
        |              |               |               |          |
        |  Traffic (D1,S1,Seq.no(1))   |               |          |
        |<-----------------------------|               |          |
        |              |               |               |          |
        |  Traffic (D1,S1,Seq.no(2))   |               |          |
        |<-----------------------------|               |          |
        |              |               |               |          |
        |  Traffic (D1,S1,Seq.no(x))   |               |          |
        |<-----------------------------|               |          |
        |              |               |               |          |
        |              |Traffic (S1,D1,|               |          |
        |              |     Seq.no(x))|               |          |
        |              |<--------------|               |          |
        |              |               |               |          |
        |  <Bring down a switch in the traffic path>   |          |
        |              |               |               |          |
        |              |               |PORT_STATUS(Sa)|          |
        |              |               |-------------->|          |
        |              |               |               |          |
        |              |Traffic (S1,D1,|               |          |
        |              |   Seq.no(n-1))|               |          |
        |              |   X<----------|               |          |
        |              |               |               |          |
        |  Traffic (D1,S1,Seq.no(n-1)) |               |          |
        |  X---------------------------|               |          |
        |              |               |               |          |
        |              |               |FLOW_MOD(D1)   |          |
        |              |               |<--------------|          |
        |              |               |               |          |
        |              |               |FLOW_MOD(S1)   |          |
        |              |               |<--------------|          |
        |              |               |               |          |
        |  Traffic (D1,S1,Seq.no(n))   |               |          |
        |<-----------------------------|               |          |
        |              |               |               |          |
        |              |Traffic (S1,D1,|               |          |
        |              |     Seq.no(n))|               |          |
        |              |<--------------|               |          |
        |              |               |               |          |

   Legend:

      G-ARP: Gratuitous ARP message.
      Seq.no: Sequence number.
      Sa: Neighbour switch of the switch that was brought down.

   Discussion:

   The time difference between the last valid frame received before
   the traffic loss (the packet with sequence number x) and the first
   frame received after the traffic loss (the packet with sequence
   number n) provides the network path re-provisioning time.

   Note that the test is valid only when the controller provisions
   the alternate path upon network failure.

Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia
   PA 19113

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia
   PA 19113

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett-Packard,
   8000 Foothills Blvd,
   Roseville, CA 95747

   Email: mark.tassinari@hpe.com

   Vishwas Manral
   Nano Sec,
   CA

   Email: vishwas.manral@gmail.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive,
   Sunnyvale, CA

   Email: sbanks@encrypted.net