Internet-Draft                               Bhuvaneswaran Vengainathan
Network Working Group                                       Anton Basil
Intended Status: Informational                       Veryx Technologies
Expires: September 19, 2016                              Mark Tassinari
                                                        Hewlett-Packard
                                                         Vishwas Manral
                                                                Nano Sec
                                                            Sarah Banks
                                                          VSS Monitoring
                                                          March 21, 2016

        Benchmarking Methodology for SDN Controller Performance
           draft-ietf-bmwg-sdn-controller-benchmark-meth-01

Abstract

   This document defines methodologies for benchmarking the control
   plane performance of SDN controllers. Terminology related to
   benchmarking SDN controllers is described in the companion
   terminology document. SDN controllers have been implemented with
   many varying designs in order to achieve their intended network
   functionality. Hence, the authors have taken the approach of
   considering an SDN controller as a black box, defining the
   methodology in a manner that is agnostic to the protocols and
   network services supported by controllers. The intent of this
   document is to provide a standard mechanism to measure the
   performance of all controller implementations.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF). Note that other groups may also distribute
   working documents as Internet-Drafts. The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 19, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Scope
   3. Test Setup
      3.1. Test setup - Controller working in Standalone Mode
      3.2. Test setup - Controller working in Cluster Mode
   4. Test Considerations
      4.1. Network Topology
      4.2. Test Traffic
      4.3. Test Emulator Requirements
      4.4. Connection Setup
      4.5. Measurement Point Specification and Recommendation
      4.6. Connectivity Recommendation
      4.7. Test Repeatability
      4.8. Test Reporting
   5. Benchmarking Tests
      5.1. Performance
         5.1.1. Network Topology Discovery Time
         5.1.2. Asynchronous Message Processing Time
         5.1.3. Asynchronous Message Processing Rate
         5.1.4. Reactive Path Provisioning Time
         5.1.5. Proactive Path Provisioning Time
         5.1.6. Reactive Path Provisioning Rate
         5.1.7. Proactive Path Provisioning Rate
         5.1.8. Network Topology Change Detection Time
      5.2. Scalability
         5.2.1. Control Session Capacity
         5.2.2. Network Discovery Size
         5.2.3. Forwarding Table Capacity
      5.3. Security
         5.3.1. Exception Handling
         5.3.2. Denial of Service Handling
      5.4. Reliability
         5.4.1. Controller Failover Time
         5.4.2. Network Re-Provisioning Time
   6. References
      6.1. Normative References
      6.2. Informative References
   7. IANA Considerations
   8. Security Considerations
   9. Acknowledgments
   Appendix A. Example Test Topologies
      A.1. Leaf-Spine Topology - Three Tier Network Architecture
      A.2. Leaf-Spine Topology - Two Tier Network Architecture
   Appendix B. Benchmarking Methodology using OpenFlow Controllers
      B.1. Protocol Overview
      B.2. Messages Overview
      B.3. Connection Overview
      B.4. Performance Benchmarking Tests
         B.4.1. Network Topology Discovery Time
         B.4.2. Asynchronous Message Processing Time
         B.4.3. Asynchronous Message Processing Rate
         B.4.4. Reactive Path Provisioning Time
         B.4.5. Proactive Path Provisioning Time
         B.4.6. Reactive Path Provisioning Rate
         B.4.7. Proactive Path Provisioning Rate
         B.4.8. Network Topology Change Detection Time
      B.5. Scalability
         B.5.1. Control Sessions Capacity
         B.5.2. Network Discovery Size
         B.5.3. Forwarding Table Capacity
      B.6. Security
         B.6.1. Exception Handling
         B.6.2. Denial of Service Handling
      B.7. Reliability
         B.7.1. Controller Failover Time
         B.7.2. Network Re-Provisioning Time
   Authors' Addresses

1. Introduction

   This document provides generic methodologies for benchmarking SDN
   controller performance. An SDN controller may support many
   northbound and southbound protocols, implement a wide range of
   applications, and work alone or as part of a group to achieve the
   desired functionality. This document considers an SDN controller as
   a black box, regardless of design and implementation. The tests
   defined in this document can be used to benchmark an SDN controller
   for performance, scalability, reliability, and security,
   independently of northbound and southbound protocols. These tests
   can be performed on an SDN controller running as a virtual machine
   (VM) instance or on a bare metal server. This document is intended
   for those who want to measure the performance of an SDN controller
   as well as to compare the performance of various SDN controllers.
Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119.

2. Scope

   This document defines a methodology for measuring the networking
   metrics of SDN controllers. For the purpose of this memo, the SDN
   controller is a function that manages and controls Network Devices.
   Any SDN controller without a control capability is out of scope for
   this memo. The tests defined in this document enable benchmarking
   of SDN controllers in two ways: as a standalone controller and as a
   cluster of homogeneous controllers. These tests are recommended for
   execution in lab environments rather than in live network
   deployments. Performance benchmarking of a federation of
   controllers is beyond the scope of this document.

3. Test Setup

   The tests defined in this document enable measurement of an SDN
   controller's performance in standalone mode and cluster mode. This
   section defines common reference topologies that are later referred
   to in individual tests.

3.1. Test setup - Controller working in Standalone Mode

       +-----------------------------------------------------------+
       |             Application Plane Test Emulator               |
       |                                                           |
       |     +-----------------+          +-------------+          |
       |     |   Application   |          |   Service   |          |
       |     +-----------------+          +-------------+          |
       |                                                           |
       +-----------------------------+(I2)-------------------------+
                                     |
                                     |
                                     | (Northbound interface)
                     +-------------------------------+
                     |       +----------------+      |
                     |       | SDN Controller |      |
                     |       +----------------+      |
                     |                               |
                     |    Device Under Test (DUT)    |
                     +-------------------------------+
                                     | (Southbound interface)
                                     |
                                     |
       +-----------------------------+(I1)-------------------------+
       |                                                           |
       |     +-----------+                      +-----------+      |
       |     |  Network  |l1                ln-1|  Network  |      |
       |     |  Device 1 |----      ....    ----|  Device n |      |
       |     +-----------+                      +-----------+      |
       |           |l0                                |ln          |
       |           |                                  |            |
       |           |                                  |            |
       |     +---------------+              +---------------+      |
       |     | Test Traffic  |              | Test Traffic  |      |
       |     |  Generator    |              |  Generator    |      |
       |     |    (TP1)      |              |    (TP2)      |      |
       |     +---------------+              +---------------+      |
       |                                                           |
       |             Forwarding Plane Test Emulator                |
       +-----------------------------------------------------------+

                                Figure 1

3.2. Test setup - Controller working in Cluster Mode

       +-----------------------------------------------------------+
       |             Application Plane Test Emulator               |
       |                                                           |
       |     +-----------------+          +-------------+          |
       |     |   Application   |          |   Service   |          |
       |     +-----------------+          +-------------+          |
       |                                                           |
       +-----------------------------+(I2)-------------------------+
                                     |
                                     |
                                     | (Northbound interface)
       +---------------------------------------------------------+
       |                                                         |
       |  ------------------            ------------------       |
       | | SDN Controller 1 | <--E/W--> | SDN Controller n |      |
       |  ------------------            ------------------       |
       |                                                         |
       |                Device Under Test (DUT)                  |
       +---------------------------------------------------------+
                                     | (Southbound interface)
                                     |
                                     |
       +-----------------------------+(I1)-------------------------+
       |                                                           |
       |     +-----------+                      +-----------+      |
       |     |  Network  |l1                ln-1|  Network  |      |
       |     |  Device 1 |----      ....    ----|  Device n |      |
       |     +-----------+                      +-----------+      |
       |           |l0                                |ln          |
       |           |                                  |            |
       |           |                                  |            |
       |     +---------------+              +---------------+      |
       |     | Test Traffic  |              | Test Traffic  |      |
       |     |  Generator    |              |  Generator    |      |
       |     |    (TP1)      |              |    (TP2)      |      |
       |     +---------------+              +---------------+      |
       |                                                           |
       |             Forwarding Plane Test Emulator                |
       +-----------------------------------------------------------+

                                Figure 2
4. Test Considerations

4.1. Network Topology

   The test cases SHOULD use a Leaf-Spine topology with at least one
   Network Device in the topology for benchmarking. The test traffic
   generators TP1 and TP2 SHOULD be connected to the first and the
   last leaf Network Device. If a test case uses a test topology with
   a single Network Device, the test traffic generators TP1 and TP2
   SHOULD be connected to the same node. However, to achieve a
   complete performance characterization of the SDN controller, it is
   recommended that the controller be benchmarked for many network
   topologies and a varying number of Network Devices. This document
   includes a few sample test topologies, defined in Appendix A, for
   reference. Further, care should be taken to make sure that a loop
   prevention mechanism is enabled either in the SDN controller or in
   the network when the topology contains redundant network paths.

4.2. Test Traffic

   Test traffic is used to notify the controller about the arrival of
   new flows. The test cases SHOULD use multiple frame sizes as
   recommended in RFC2544 for benchmarking.

4.3. Test Emulator Requirements

   The Test Emulator SHOULD time stamp the control messages
   transmitted to and received from the controller on the established
   network connections. The test cases use these values to compute
   the controller processing time.

4.4. Connection Setup

   There may be controller implementations that support unencrypted
   and encrypted network connections with Network Devices. Further,
   the controller may have backward compatibility with Network Devices
   running older versions of southbound protocols. It is recommended
   that the controller performance be measured with one or more of the
   applicable connection setup methods defined below (see the sketch
   after this list).

   1. Unencrypted connection with Network Devices, running the same
      protocol version.
   2. Unencrypted connection with Network Devices, running different
      protocol versions.
      Example:
      a. Controller running the current protocol version and switch
         running an older protocol version
      b. Controller running an older protocol version and switch
         running the current protocol version
   3. Encrypted connection with Network Devices, running the same
      protocol version.
   4. Encrypted connection with Network Devices, running different
      protocol versions.
      Example:
      a. Controller running the current protocol version and switch
         running an older protocol version
      b. Controller running an older protocol version and switch
         running the current protocol version
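   The following is a minimal sketch of the unencrypted and encrypted
   connection setup methods listed above, as seen from an emulated
   Network Device. It assumes Python's standard socket and ssl
   modules; the controller address, the port 6653, and the CA
   certificate path are illustrative values, not requirements of this
   document.

      # Sketch: open the southbound control connection either in the
      # clear or over TLS. Values below are illustrative assumptions.
      import socket
      import ssl

      def connect_southbound(host: str, port: int = 6653,
                             encrypted: bool = False,
                             ca_cert: str = "ca.pem") -> socket.socket:
          """Return a connected (optionally TLS-wrapped) socket."""
          sock = socket.create_connection((host, port), timeout=5)
          if encrypted:
              ctx = ssl.create_default_context(cafile=ca_cert)
              sock = ctx.wrap_socket(sock, server_hostname=host)
          return sock

      # Example: one unencrypted and one encrypted session to the DUT.
      # plain  = connect_southbound("192.0.2.1")
      # secure = connect_southbound("192.0.2.1", encrypted=True)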
4.5. Measurement Point Specification and Recommendation

   The measurement accuracy depends on several factors, including the
   point of observation where the indications are captured. For
   example, the notification can be observed at the controller or at
   the test emulator. The test operator SHOULD make the observations/
   measurements at the interfaces of the test emulator, unless
   explicitly mentioned otherwise in an individual test.

4.6. Connectivity Recommendation

   The SDN controller in the test setup SHOULD be connected directly
   with the forwarding and the management plane test emulators to
   avoid any delays or failures introduced by intermediate devices
   during the benchmarking tests.

4.7. Test Repeatability

   To increase the confidence in the measured results, each test
   SHOULD be repeated a minimum of 10 times.

4.8. Test Reporting

   Each test has a reporting format that contains some global and
   identical reporting components, and some individual components that
   are specific to individual tests. The following test configuration
   parameters and controller settings parameters MUST be reflected in
   the test report.

   Test Configuration Parameters:

   1. Controller name and version
   2. Northbound protocols and versions
   3. Southbound protocols and versions
   4. Controller redundancy mode (Standalone or Cluster Mode)
   5. Connection setup (Unencrypted or Encrypted)
   6. Network Topology (Mesh, Tree, or Linear)
   7. Network Device Type (Physical, Virtual, or Emulated)
   8. Number of Nodes
   9. Number of Links
   10. Test Traffic Type
   11. Controller System Configuration (e.g., CPU, Memory, Operating
       System, Interface Speed, etc.)
   12. Reference Test Setup (e.g., the setup of Section 3.1)

   Controller Settings Parameters:

   1. Topology re-discovery timeout
   2. Controller redundancy mode (e.g., active-standby)

   To ensure the repeatability of a test, the following capabilities
   of the test emulator SHOULD be reported:

   1. Maximum number of Network Devices that the forwarding plane
      emulates
   2. Control message processing time (e.g., Topology Discovery
      Messages)

   One way to determine the above two values is to simulate the
   required control sessions and messages from the control plane.

5. Benchmarking Tests

5.1. Performance

5.1.1. Network Topology Discovery Time

   Objective:

   Measure the time taken by the SDN controller to discover the
   network topology (nodes and links), expressed in milliseconds.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST support network discovery.
   2. The tester should be able to retrieve the discovered topology
      information through either the controller's management interface
      or its northbound interface, to determine whether the discovery
      was successful and complete.
   3. Ensure that the controller's topology re-discovery timeout has
      been set to the maximum value, to avoid initiation of the
      re-discovery process in the middle of the test.

   Procedure:

   1. Ensure that the controller is operational and that its network
      applications and its northbound and southbound interfaces are up
      and running.
   2. Establish the network connections between the controller and the
      Network Devices.
   3. Record the time of the first discovery message (Tm1) received
      from the controller at forwarding plane test emulator interface
      I1.
   4. Query the controller every 3 seconds to obtain the discovered
      network topology information through the northbound interface or
      the management interface, and compare it with the deployed
      network topology information.
   5. Stop the test when the discovered topology information matches
      the deployed network topology, or when the discovered topology
      information for 3 consecutive queries returns the same details.
   6. Record the time of the last discovery message (Tmn) sent to the
      controller from the forwarding plane test emulator interface
      (I1) when the test completes successfully (i.e., when the
      topology matches).
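   A minimal sketch of the polling loop in procedure steps 4 and 5
   follows. The query_topology callable stands in for a northbound or
   management interface query and is an assumption for illustration;
   real controllers expose different APIs.

      # Sketch: poll the controller every 3 seconds and stop when the
      # discovered topology matches the deployed one, or when three
      # consecutive queries return identical details.
      import time

      def wait_for_discovery(query_topology, deployed,
                             interval=3.0, timeout=300.0):
          """Return the completion time, or None on timeout."""
          history = []
          deadline = time.time() + timeout
          while time.time() < deadline:
              topo = query_topology()        # e.g., a REST GET
              history.append(topo)
              stable = (len(history) >= 3 and
                        history[-1] == history[-2] == history[-3])
              if topo == deployed or stable:
                  return time.time()
              time.sleep(interval)
          return None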
   Measurement:

   Topology Discovery Time Tr1 = Tmn - Tm1.

                                      Tr1 + Tr2 + Tr3 .. Trn
   Average Topology Discovery Time = -----------------------
                                      Total Test Iterations

   Reporting Format:

   The Topology Discovery Time results MUST be reported in the format
   of a table, with a row for each successful iteration. The last row
   of the table indicates the average Topology Discovery Time.

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the number of nodes (N), and the
   Y coordinate SHOULD be the average Topology Discovery Time.

   If this test is repeated with the same number of nodes over
   different topologies, the results SHOULD be reported in the form of
   a graph. The X coordinate SHOULD be the topology type, and the Y
   coordinate SHOULD be the average Topology Discovery Time.

5.1.2. Asynchronous Message Processing Time

   Objective:

   Measure the time taken by the SDN controller to process an
   asynchronous message, expressed in milliseconds.

   Reference Test Setup:

   This test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST have completed the network topology
      discovery for the connected Network Devices.

   Procedure:

   1. Generate asynchronous messages from every connected Network
      Device to the SDN controller, one at a time in series from the
      forwarding plane test emulator, for the test duration.
   2. Record the transmit timestamp (T1) of every request and the
      receive timestamp (R1) of the corresponding response at the
      forwarding plane test emulator interface (I1) for every
      successful message exchange.

   Measurement:

                                              (R1-T1) + (R2-T2) .. (Rn-Tn)
   Asynchronous Message Processing Time Tr1 = ----------------------------
                                                          Nrx

   Where Nrx is the total number of successful messages exchanged.

                                                  Tr1 + Tr2 + Tr3 .. Trn
   Average Asynchronous Message Processing Time = -----------------------
                                                   Total Test Iterations

   Reporting Format:

   The Asynchronous Message Processing Time results MUST be reported
   in the format of a table, with a row for each iteration. The last
   row of the table indicates the average Asynchronous Message
   Processing Time.

   The report should capture the following information, in addition to
   the configuration parameters captured in section 4.8:

   - Successful messages exchanged (Nrx)

   If this test is repeated with a varying number of nodes with the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the number of nodes (N), and the
   Y coordinate SHOULD be the average Asynchronous Message Processing
   Time.

   If this test is repeated with the same number of nodes using
   different topologies, the results SHOULD be reported in the form of
   a graph. The X coordinate SHOULD be the topology type, and the Y
   coordinate SHOULD be the average Asynchronous Message Processing
   Time.
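   As an illustration of the measurement above, the following sketch
   computes the per-iteration Asynchronous Message Processing Time
   from the timestamps captured at interface I1, and the average over
   all iterations. Timestamps are assumed to be in seconds; results
   are reported in milliseconds.

      # Sketch: Tr = sum(Ri - Ti) / Nrx, over successful exchanges.
      def processing_time_ms(tx, rx):
          """Per-iteration processing time in milliseconds."""
          nrx = len(rx)                   # successful exchanges only
          return sum(r - t for t, r in zip(tx, rx)) / nrx * 1000.0

      def average(iterations):
          """Average of Tr1..Trn over the total test iterations."""
          return sum(iterations) / len(iterations)

      # Example: three exchanges in one iteration.
      # tr1 = processing_time_ms([0.000, 0.010, 0.020],
      #                          [0.004, 0.013, 0.027])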
5.1.3. Asynchronous Message Processing Rate

   Objective:

   Measure the maximum rate of asynchronous messages (e.g., session
   keep-alive messages, new flow arrival notification messages) that a
   controller can process within the test duration, expressed in
   messages processed per second.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST have completed the network topology
      discovery for the connected Network Devices.

   Procedure:

   1. Generate asynchronous messages continuously at the maximum
      possible rate on the established connections from all the
      connected Network Devices in the forwarding plane test emulator
      for the Test Duration (Td).
   2. Record the total number of responses received from the
      controller (Nrx), as well as the number of messages sent (Ntx)
      to the controller, within the test duration (Td) at the
      forwarding plane test emulator interface (I1).

   Measurement:

                                                Nrx
   Asynchronous Message Processing Rate Tr1 = -----
                                                Td

                                                  Tr1 + Tr2 + Tr3 .. Trn
   Average Asynchronous Message Processing Rate = -----------------------
                                                   Total Test Iterations

   Loss Ratio = (Ntx - Nrx) / Ntx.

   Reporting Format:

   The Asynchronous Message Processing Rate results MUST be reported
   in the format of a table, with a row for each iteration. The last
   row of the table indicates the average Asynchronous Message
   Processing Rate.

   The report should capture the following information, in addition to
   the configuration parameters captured in section 4.8:

   - Offered rate (Ntx)

   - Loss Ratio

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the number of nodes (N), and the
   Y coordinate SHOULD be the average Asynchronous Message Processing
   Rate.

   If this test is repeated with the same number of nodes over
   different topologies, the results SHOULD be reported in the form of
   a graph. The X coordinate SHOULD be the topology type, and the Y
   coordinate SHOULD be the average Asynchronous Message Processing
   Rate.
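   The following sketch applies the measurement above: the processing
   rate is the number of responses received (Nrx) over the test
   duration (Td), and the loss ratio compares transmitted and received
   counts. The numeric values in the example are illustrative.

      # Sketch of the 5.1.3 measurement.
      def processing_rate(nrx: int, td: float) -> float:
          """Messages processed per second for one iteration."""
          return nrx / td

      def loss_ratio(ntx: int, nrx: int) -> float:
          """Fraction of offered messages left unanswered."""
          return (ntx - nrx) / ntx

      # Example: 180,000 responses to 200,000 requests in a 10 s run.
      # rate = processing_rate(180_000, 10.0)   # 18,000 messages/s
      # loss = loss_ratio(200_000, 180_000)     # 0.10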
5.1.4. Reactive Path Provisioning Time

   Objective:

   Measure the time taken by the controller to set up a path
   reactively between a source and a destination node, expressed in
   milliseconds.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge of the location of the
      destination endpoint for which the path has to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.
   3. Ensure that the default action for 'flow miss' in the Network
      Device is configured to 'send to controller'.
   4. Ensure that each Network Device in the path requires the
      controller to make the forwarding decision while paving the
      entire path.

   Procedure:

   1. Send a single traffic stream from test traffic generator TP1 to
      test traffic generator TP2.
   2. Record the time of the first flow provisioning request message
      sent to the controller (Tsf1) from the Network Device at the
      forwarding plane test emulator interface (I1).
   3. Wait for the arrival of the first traffic frame at the traffic
      endpoint TP2 or the expiry of the test duration (Td).
   4. Record the time of the last flow provisioning response message
      received from the controller (Tdf1) at the Network Device at the
      forwarding plane test emulator interface (I1).

   Measurement:

   Reactive Path Provisioning Time Tr1 = Tdf1 - Tsf1.

                                              Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Time = -----------------------
                                               Total Test Iterations

   Reporting Format:

   The Reactive Path Provisioning Time results MUST be reported in the
   format of a table, with a row for each iteration. The last row of
   the table indicates the Average Reactive Path Provisioning Time.

   The report should capture the following information, in addition to
   the configuration parameters captured in section 4.8:

   - Number of Network Devices in the path

5.1.5. Proactive Path Provisioning Time

   Objective:

   Measure the time taken by the controller to set up a path
   proactively between a source and a destination node, expressed in
   milliseconds.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge of the location of the
      destination endpoint for which the path has to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.
   3. Ensure that the default action for 'flow miss' in the Network
      Device is 'drop'.

   Procedure:

   1. Send a single traffic stream from test traffic generator TP1 to
      TP2.
   2. Install the flow entries to reach from test traffic generator
      TP1 to test traffic generator TP2 through the controller's
      northbound or management interface.
   3. Wait for the arrival of the first traffic frame at test traffic
      generator TP2 or the expiry of the test duration (Td).
   4. Record the time when the proactive flow is provisioned in the
      controller (Tsf1) at the management plane test emulator
      interface I2.
   5. Record the time of the last flow provisioning message received
      from the controller (Tdf1) at the forwarding plane test emulator
      interface I1.

   Measurement:

   Proactive Path Provisioning Time Tr1 = Tdf1 - Tsf1.

                                               Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Time = -----------------------
                                                Total Test Iterations

   Reporting Format:

   The Proactive Path Provisioning Time results MUST be reported in
   the format of a table, with a row for each iteration. The last row
   of the table indicates the Average Proactive Path Provisioning
   Time.

   The report should capture the following information, in addition to
   the configuration parameters captured in section 4.8:

   - Number of Network Devices in the path
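   Sections 5.1.4 and 5.1.5 share the same arithmetic: the result is
   simply the difference between the start event (the first flow
   provisioning request, or the proactive install time at I2) and the
   end event (the last provisioning message observed at I1). A minimal
   sketch, assuming timestamps in seconds:

      # Sketch: Tr = Tdf1 - Tsf1, converted to milliseconds.
      def provisioning_time_ms(tsf1: float, tdf1: float) -> float:
          return (tdf1 - tsf1) * 1000.0

      # Example: request seen at t=12.000 s, last FLOW_MOD-equivalent
      # response at t=12.045 s.
      # tr1 = provisioning_time_ms(12.000, 12.045)   # 45.0 ms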
5.1.6. Reactive Path Provisioning Rate

   Objective:

   Measure the maximum number of independent paths a controller can
   concurrently establish between source and destination nodes
   reactively within the test duration, expressed in paths per second.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge of the location of the
      destination addresses for which the paths have to be
      provisioned. This can be achieved through dynamic learning or
      static provisioning.
   3. Ensure that the default action for 'flow miss' in the Network
      Device is configured to 'send to controller'.
   4. Ensure that each Network Device in the path requires the
      controller to make the forwarding decision while provisioning
      the entire path.

   Procedure:

   1. Send traffic with unique source and destination addresses from
      test traffic generator TP1.
   2. Record the total number of unique traffic frames (Ndf) received
      at test traffic generator TP2 within the test duration (Td).

   Measurement:

                                            Ndf
   Reactive Path Provisioning Rate Tr1 = ------
                                            Td

                                              Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Rate = ------------------------
                                               Total Test Iterations

   Reporting Format:

   The Reactive Path Provisioning Rate results MUST be reported in the
   format of a table, with a row for each iteration. The last row of
   the table indicates the Average Reactive Path Provisioning Rate.

   The report should capture the following information, in addition to
   the configuration parameters captured in section 4.8:

   - Number of Network Devices in the path

   - Offered rate

5.1.7. Proactive Path Provisioning Rate

   Objective:

   Measure the maximum number of independent paths a controller can
   concurrently establish between source and destination nodes
   proactively within the test duration, expressed in paths per
   second.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.
   2. The controller should have knowledge of the location of the
      destination addresses for which the paths have to be
      provisioned. This can be achieved through dynamic learning or
      static provisioning.
   3. Ensure that the default action for 'flow miss' in the Network
      Device is 'drop'.

   Procedure:

   1. Send traffic continuously with unique source and destination
      addresses from test traffic generator TP1.
   2. Install the corresponding flow entries, to reach from the
      simulated sources at test traffic generator TP1 to the simulated
      destinations at test traffic generator TP2, through the
      controller's northbound or management interface.
   3. Record the total number of unique traffic frames received (Ndf)
      at test traffic generator TP2 within the test duration (Td).

   Measurement:

                                             Ndf
   Proactive Path Provisioning Rate Tr1 = ------
                                             Td

                                               Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Rate = -----------------------
                                                Total Test Iterations

   Reporting Format:

   The Proactive Path Provisioning Rate results MUST be reported in
   the format of a table, with a row for each iteration. The last row
   of the table indicates the Average Proactive Path Provisioning
   Rate.

   The report should capture the following information, in addition to
   the configuration parameters captured in section 4.8:

   - Number of Network Devices in the path

   - Offered rate
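   For both provisioning rate tests, Ndf counts unique provisioned
   paths rather than raw frames. The following sketch derives the rate
   from a capture at TP2; the frame representation (a dict with "src"
   and "dst" keys) is an assumption for illustration only.

      # Sketch: Ndf / Td, where Ndf counts unique (src, dst) pairs.
      def provisioning_rate(frames, td: float) -> float:
          ndf = len({(f["src"], f["dst"]) for f in frames})
          return ndf / td

      # Example: 5,000 unique flows delivered during a 10 s test
      # yields a rate of 500 paths per second.
      # rate = provisioning_rate(received_frames, 10.0)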
5.1.8. Network Topology Change Detection Time

   Objective:

   Measure the time taken by the controller to detect any changes in
   the network topology, expressed in milliseconds.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST have discovered the network topology
      information for the deployed network topology.
   2. The periodic network discovery operation should be configured to
      twice the Test Duration (Td) value.

   Procedure:

   1. Trigger a topology change event by bringing down an active
      Network Device in the topology.
   2. Record the time when the first topology change notification is
      sent to the controller (Tcn) at the forwarding plane test
      emulator interface (I1).
   3. Stop the test when the controller sends the first topology
      re-discovery message to the Network Device, or upon the expiry
      of the test interval (Td).
   4. Record the time when the first topology re-discovery message is
      received from the controller (Tcd) at the forwarding plane test
      emulator interface (I1).

   Measurement:

   Network Topology Change Detection Time Tr1 = Tcd - Tcn.

                                                    Tr1 + Tr2 + Tr3 .. Trn
   Average Network Topology Change Detection Time = ----------------------
                                                     Total Test Iterations

   Reporting Format:

   The Network Topology Change Detection Time results MUST be reported
   in the format of a table, with a row for each iteration. The last
   row of the table indicates the average Network Topology Change
   Detection Time.

5.2. Scalability

5.2.1. Control Session Capacity

   Objective:

   Measure the maximum number of control sessions that the controller
   can maintain.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Procedure:

   1. Establish a control connection with the controller from every
      Network Device emulated in the forwarding plane test emulator.
   2. Stop the test when the controller starts dropping the control
      connections.
   3. Record the number of successful connections established with the
      controller (CCn) at the forwarding plane test emulator.

   Measurement:

   Control Sessions Capacity = CCn.

   Reporting Format:

   The Control Session Capacity results MUST be reported in addition
   to the configuration parameters captured in section 4.8.

5.2.2. Network Discovery Size

   Objective:

   Measure the network size (number of nodes, links, and hosts) that a
   controller can discover.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller MUST support automatic network discovery.
   2. The tester should be able to retrieve the discovered topology
      information through either the controller's management interface
      or its northbound interface.

   Procedure (a sketch of this search loop follows the reporting
   format):

   1. Establish the network connections between the controller and the
      network nodes.
   2. Query the controller for the discovered network topology
      information and compare it with the deployed network topology
      information.
   3. Increase the number of nodes by 1 when the comparison is
      successful, and repeat the test.
   4. Decrease the number of nodes by 1 when the comparison fails, and
      repeat the test.
   5. Continue the test until the comparison of step 4 is successful.
   6. Record the number of nodes for the last iteration (Ns) in which
      the topology comparison was successful.

   Measurement:

   Network Discovery Size = Ns.

   Reporting Format:

   The Network Discovery Size results MUST be reported in addition to
   the configuration parameters captured in section 4.8.
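   The following is a simplified sketch of the search loop in the
   5.2.2 procedure: grow the emulated topology by one node while
   discovery succeeds, shrink it by one when discovery fails, and
   report the last size that was discovered completely. The deploy and
   discovered_matches helpers are assumptions standing in for the test
   emulator and the northbound topology comparison.

      # Sketch: +/-1 search for the largest fully discovered topology.
      def network_discovery_size(deploy, discovered_matches, start=1):
          n = start
          last_good = 0
          while True:
              deploy(n)                      # emulate n nodes
              if discovered_matches(n):      # comparison successful
                  last_good = n
                  n += 1                     # step 3: increase by 1
              else:
                  if last_good == n - 1:     # step 5: previous size held
                      return last_good       # Ns
                  n -= 1                     # step 4: decrease by 1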
5.2.3. Forwarding Table Capacity

   Objective:

   Measure the maximum number of flow entries a controller can manage
   in its Forwarding table.

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. The controller's Forwarding table should be empty.
   2. The flow idle time MUST be set to a high or infinite value.
   3. The controller MUST have completed network topology discovery.
   4. The tester should be able to retrieve the forwarding table
      information through either the controller's management interface
      or its northbound interface.

   Procedure:

   Reactive Flow Provisioning Mode:

   1. Send bi-directional traffic continuously with unique source
      and/or destination addresses from test traffic generators TP1
      and TP2 at the asynchronous message processing rate of the
      controller.
   2. Query the controller at a regular interval (e.g., every 5
      seconds) for the number of learnt flow entries from its
      northbound interface.
   3. Stop the test when the retrieved value is constant for three
      consecutive iterations, and record the value received from the
      last query (Nrp).

   Proactive Flow Provisioning Mode:

   1. Install unique flows continuously through the controller's
      northbound or management interface until a failure response is
      received from the controller.
   2. Record the total number of successful responses (Nrp).

   Note:

   Some controller designs for proactive flow provisioning mode may
   require the switch to send flow setup requests in order to generate
   flow setup responses. In such cases, it is recommended to generate
   bi-directional traffic for the provisioned flows.

   Measurement:

   Proactive Flow Provisioning Mode:

   Max Flow Entries = Total number of flows provisioned (Nrp)

   Reactive Flow Provisioning Mode:

   Max Flow Entries = Total number of learnt flow entries (Nrp)

   Forwarding Table Capacity = Max Flow Entries.

   Reporting Format:

   The Forwarding Table Capacity results MUST be tabulated with the
   following information, in addition to the configuration parameters
   captured in section 4.8:

   - Provisioning Type (Proactive/Reactive)
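   A minimal sketch of the plateau detection used in reactive mode
   follows: poll the controller's flow-entry count at a fixed interval
   and stop once three consecutive queries return the same value. The
   query_flow_count callable is an assumption for the controller's
   northbound interface.

      # Sketch: stop when the learnt-flow count is stable for three
      # consecutive queries; the result is Nrp.
      import time

      def forwarding_table_capacity(query_flow_count, interval=5.0):
          samples = []
          while True:
              samples.append(query_flow_count())
              if (len(samples) >= 3 and
                      samples[-1] == samples[-2] == samples[-3]):
                  return samples[-1]        # Nrp
              time.sleep(interval)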
5.3. Security

5.3.1. Exception Handling

   Objective:

   Determine the effect of handling error packets and notifications on
   performance tests. The impact MUST be measured for the following
   performance tests:

   a. Path Provisioning Rate

   b. Path Provisioning Time

   c. Network Topology Change Detection Time

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   1. This test MUST be performed after obtaining the baseline
      measurement results for the above performance tests.
   2. Ensure that the invalid messages are not dropped by the
      intermediate devices connecting the controller and the Network
      Devices.

   Procedure:

   1. Perform the above-listed performance tests, and send 1% of the
      messages from the Asynchronous Message Processing Rate as
      invalid messages from the connected Network Devices emulated at
      the forwarding plane test emulator.
   2. Perform the above-listed performance tests, and send 2% of the
      messages from the Asynchronous Message Processing Rate as
      invalid messages from the connected Network Devices emulated at
      the forwarding plane test emulator.

   Note:

   Invalid messages can be frames with incorrect protocol fields or
   any form of failure notifications sent towards the controller.

   Measurement:

   Measurement MUST be done as per the equation defined in the
   Measurement section of the corresponding performance test.

   Reporting Format:

   The Exception Handling results MUST be reported in the format of a
   table, with a column for each of the below parameters and a row for
   each of the above-listed performance tests:

   - Without Exceptions

   - With 1% Exceptions

   - With 2% Exceptions

5.3.2. Denial of Service Handling

   Objective:

   Determine the effect of handling DoS attacks on performance and
   scalability tests. The impact MUST be measured for the following
   tests:

   a. Path Provisioning Rate

   b. Path Provisioning Time

   c. Network Topology Change Detection Time

   d. Network Discovery Size

   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document.

   Prerequisite:

   This test MUST be performed after obtaining the baseline
   measurement results for the above tests.

   Procedure:

   1. Perform the above-listed tests, and launch a DoS attack towards
      the controller while the test is running.

   Note:

   DoS attacks can be launched on one of the following interfaces:

   a. Northbound (e.g., sending a huge number of requests on the
      northbound interface)
   b. Management (e.g., ping requests to the controller's management
      interface)
   c. Southbound (e.g., TCP SYN messages on the southbound interface)

   Measurement:

   Measurement MUST be done as per the equation defined in the
   Measurement section of the corresponding test.

   Reporting Format:

   The DoS Attacks Handling results MUST be reported in the format of
   a table, with a column for each of the below parameters and a row
   for each of the above-listed tests:

   - Without any attacks

   - With attacks

   The report should also specify the nature of the attack and the
   interface on which it was launched.
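   As an illustration of the 5.3.1 procedure, the following sketch
   interleaves a fixed percentage of invalid messages into the
   asynchronous message stream. The message constructors are
   placeholders; an "invalid" message could be a frame with corrupted
   protocol fields, as noted above.

      # Sketch: emit roughly invalid_pct percent invalid messages.
      import random

      def message_stream(total: int, invalid_pct: float,
                         make_valid, make_invalid):
          for _ in range(total):
              if random.random() < invalid_pct / 100.0:
                  yield make_invalid()     # corrupted-field frame
              else:
                  yield make_valid()

      # Example: 1% invalid messages for the first run, 2% for the
      # second run.
      # for msg in message_stream(100_000, 1.0, valid_fn, invalid_fn):
      #     send(msg)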
5.4. Reliability

5.4.1. Controller Failover Time

   Objective:

   Measure the time taken to switch from an active controller to the
   backup controller when the controllers work in redundancy mode and
   the active controller fails.

   Reference Test Setup:

   The test SHOULD use the test setup described in section 3.2 of this
   document.

   Prerequisite:

   1. Master controller election MUST be completed.
   2. Nodes are connected to the controller cluster as per the
      Redundancy Mode (RM).
   3. The controller cluster should have completed the network
      topology discovery.
   4. The Network Device MUST send all new flows to the controller
      when it receives them from the test traffic generator.
   5. The controller should have learnt the location of the
      destination (D1) at test traffic generator TP2.

   Procedure:

   1. Send uni-directional traffic continuously with incremental
      sequence numbers and source addresses from test traffic
      generator TP1, at a rate that the controller can process without
      any drops.
   2. Ensure that there are no packet drops observed at test traffic
      generator TP2.
   3. Bring down the active controller.
   4. Stop the test when the first frame is received on TP2 after the
      failover operation.
   5. Record the time at which the last valid frame was received (T1)
      at test traffic generator TP2 before the sequence error, and the
      time at which the first valid frame was received (T2) after the
      sequence error at TP2.

   Measurement:

   Controller Failover Time = (T2 - T1)

   Packet Loss = Number of missing packet sequences.

   Reporting Format:

   The Controller Failover Time results MUST be tabulated with the
   following information:

   - Number of cluster nodes

   - Redundancy mode

   - Controller Failover Time

   - Packet Loss

   - Cluster keep-alive interval

5.4.2. Network Re-Provisioning Time

   Objective:

   Compute the time taken by the controller to re-route the traffic
   when there is a failure in the existing traffic paths.

   Reference Test Setup:

   This test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

   Prerequisite:

   1. A network with the given number of nodes and redundant paths
      MUST be deployed.
   2. The controller MUST have knowledge of the location of test
      traffic generators TP1 and TP2.
   3. Ensure that the controller does not pre-provision the alternate
      path in the emulated Network Devices at the forwarding plane
      test emulator.

   Procedure:

   1. Send bi-directional traffic continuously with unique sequence
      numbers from TP1 and TP2.
   2. Bring down a link or switch in the traffic path.
   3. Stop the test after receiving the first frame after network
      re-convergence.
   4. Record the time of the last received frame prior to the frame
      loss at TP2 (TP2-Tlfr) and the time of the first frame received
      after the frame loss at TP2 (TP2-Tffr).
   5. Record the time of the last received frame prior to the frame
      loss at TP1 (TP1-Tlfr) and the time of the first frame received
      after the frame loss at TP1 (TP1-Tffr).

   Measurement:

   Forward Direction Path Re-Provisioning Time (FDRT)
         = (TP2-Tffr - TP2-Tlfr)

   Reverse Direction Path Re-Provisioning Time (RDRT)
         = (TP1-Tffr - TP1-Tlfr)

   Network Re-Provisioning Time = (FDRT + RDRT)/2

   Forward Direction Packet Loss = Number of missing sequence frames
         at TP2

   Reverse Direction Packet Loss = Number of missing sequence frames
         at TP1

   Reporting Format:

   The Network Re-Provisioning Time results MUST be tabulated with the
   following information:

   - Number of nodes in the primary path

   - Number of nodes in the alternate path

   - Network Re-Provisioning Time

   - Forward Direction Packet Loss

   - Reverse Direction Packet Loss
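   Both 5.4.1 and (per direction) 5.4.2 reduce to scanning a capture
   for the first gap in the sequence numbers. A minimal sketch,
   assuming the capture is a time-ordered list of (timestamp,
   sequence) pairs, which is an assumption about the capture format:

      # Sketch: T1 is the last in-order frame before the gap, T2 the
      # first valid frame after it; the gap width is the packet loss.
      def failover_time(frames):
          """Return (T2 - T1, packets_lost) from a sorted capture."""
          for (t1, s1), (t2, s2) in zip(frames, frames[1:]):
              if s2 != s1 + 1:             # sequence error detected
                  return t2 - t1, s2 - s1 - 1
          return 0.0, 0

      # Example: a gap between seq 104 and seq 230 spans the failover.
      # delta, lost = failover_time([(0.00, 1), ..., (3.81, 230)])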
6. References

6.1. Normative References

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544, March 1999.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC6241]  Enns, R., Bjorklund, M., Schoenwaelder, J., and A.
              Bierman, "Network Configuration Protocol (NETCONF)",
              RFC 6241, June 2011.

   [RFC6020]  Bjorklund, M., "YANG - A Data Modeling Language for the
              Network Configuration Protocol (NETCONF)", RFC 6020,
              October 2010.

   [RFC5440]  Vasseur, JP. and JL. Le Roux, "Path Computation Element
              (PCE) Communication Protocol (PCEP)", RFC 5440,
              March 2009.

   [OpenFlow Switch Specification]
              ONF, "OpenFlow Switch Specification", Version 1.4.0
              (Wire Protocol 0x05), October 14, 2013.

   [I-D.sdn-controller-benchmark-term]
              Vengainathan, B., Basil, A., Tassinari, M., Manral, V.,
              and S. Banks, "Terminology for Benchmarking SDN
              Controller Performance",
              draft-ietf-bmwg-sdn-controller-benchmark-term-01 (work
              in progress), March 21, 2016.

6.2. Informative References

   [I-D.i2rs-architecture]
              Atlas, A., Halpern, J., Hares, S., Ward, D., and T.
              Nadeau, "An Architecture for the Interface to the
              Routing System", draft-ietf-i2rs-architecture-09 (work
              in progress), March 6, 2015.

   [OpenContrail]
              Singla, A. and B. Rijsman, "OpenContrail Architecture
              Documentation",
              http://opencontrail.org/opencontrail-architecture-documentation

   [OpenDaylight]
              "OpenDaylight Controller: Architectural Framework",
              https://wiki.opendaylight.org/view/OpenDaylight_Controller

7. IANA Considerations

   This document does not have any IANA requests.

8. Security Considerations

   The benchmarking tests described in this document are limited to
   the performance characterization of controllers in a lab
   environment with an isolated network.

9. Acknowledgments

   The authors would like to thank the following individuals for
   providing their valuable comments on the earlier versions of this
   document: Al Morton (AT&T), Sandeep Gangadharan (HP), M. Georgescu
   (NAIST), Andrew McGregor (Google), Scott Bradner (Harvard
   University), Jay Karthik (Cisco), Ramakrishnan (Dell), Khasanov
   Boris (Huawei), and Brian Castelli (Spirent).

   This document was prepared using 2-Word-v2.0.template.dot.

Appendix A. Example Test Topologies

A.1. Leaf-Spine Topology - Three Tier Network Architecture

                           +----------+
                           |   SDN    |
                           |   Node   |  (Core)
                           +----------+
                             /      \
                            /        \
                     +------+        +------+
                     | SDN  |        | SDN  |  (Spine)
                     | Node |..      | Node |
                     +------+        +------+
                      /    \          /    \
                     /      \        /      \
                 l1 /        \      /        \ ln-1
                   /          \    /          \
             +--------+        \  /        +-------+
             |  SDN   |         \/         |  SDN  |
             |  Node  |..       /\         | Node  |  (Leaf)
             +--------+                    +-------+

A.2. Leaf-Spine Topology - Two Tier Network Architecture

                     +------+        +------+
                     | SDN  |        | SDN  |  (Spine)
                     | Node |..      | Node |
                     +------+        +------+
                      /    \          /    \
                     /      \        /      \
                 l1 /        \      /        \ ln-1
                   /          \    /          \
             +--------+        \  /        +-------+
             |  SDN   |         \/         |  SDN  |
             |  Node  |..       /\         | Node  |  (Leaf)
             +--------+                    +-------+

Appendix B. Benchmarking Methodology using OpenFlow Controllers

   This section gives an overview of the OpenFlow protocol and
   provides a test methodology for benchmarking SDN controllers that
   support the OpenFlow southbound protocol.

B.1. Protocol Overview

   OpenFlow is an open standard protocol defined by the Open
   Networking Foundation (ONF), used for programming the forwarding
   plane of network switches or routers via a centralized controller.

B.2. Messages Overview

   The OpenFlow protocol supports three message types, namely
   controller-to-switch, asynchronous, and symmetric.

   Controller-to-switch messages are initiated by the controller and
   used to directly manage or inspect the state of the switch. These
   messages allow controllers to query/configure the switch (Features
   and Configuration messages), collect information from a switch
   (Read-State message), send packets on a specified port of a switch
   (Packet-out message), and modify the switch forwarding plane and
   state (Modify-State, Role-Request messages, etc.).
   Asynchronous messages are generated by the switch without a
   controller soliciting them. These messages allow switches to update
   controllers to denote the arrival of a new flow (Packet-in), a
   switch state change (Flow-Removed, Port-status), and errors
   (Error).

   Symmetric messages are generated in either direction without
   solicitation. These messages allow switches and controllers to set
   up a connection (Hello), verify liveness (Echo), and offer
   additional functionality (Experimenter).

B.3. Connection Overview

   The OpenFlow channel is used to exchange OpenFlow messages between
   an OpenFlow switch and an OpenFlow controller. The OpenFlow channel
   connection can be set up using plain TCP or TLS. By default, a
   switch establishes a single connection with the SDN controller. A
   switch may establish multiple parallel connections to a single
   controller (auxiliary connections) or to multiple controllers, to
   handle controller failures and load balancing.
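   The following sketch illustrates the channel setup described above:
   a minimal OFPT_HELLO exchange over plain TCP, emulating the switch
   side of the connection. The 8-byte OpenFlow header carries version,
   type, length, and transaction id; type 0 is OFPT_HELLO, and 0x05 is
   the OpenFlow 1.4 wire version cited in the references. Version
   negotiation and error handling are omitted for brevity, and the
   port value is the conventional OpenFlow port, used here only as an
   example.

      # Sketch: emulated switch opens the OpenFlow channel and
      # exchanges OFPT_HELLO with the controller.
      import socket
      import struct

      OFP_VERSION = 0x05   # OpenFlow 1.4 wire protocol
      OFPT_HELLO = 0

      def hello_exchange(host: str, port: int = 6653) -> int:
          """Send OFPT_HELLO; return the controller's wire version."""
          with socket.create_connection((host, port), timeout=5) as s:
              s.sendall(struct.pack("!BBHI",
                                    OFP_VERSION, OFPT_HELLO, 8, 1))
              version, msg_type, length, xid = struct.unpack(
                  "!BBHI", s.recv(8))
              assert msg_type == OFPT_HELLO
              return version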
B.4. Performance Benchmarking Tests

B.4.1. Network Topology Discovery Time

   Procedure:

   Network Devices            OpenFlow                SDN
                              Controller              Application
         |                        |                        |
         |<Initialize controller  |                        |
         | app., NB and SB        |                        |
         | interfaces>            |                        |
         |                        |                        |
         | OFPT_HELLO Exchange    |                        |
         |<---------------------->|                        |
         |                        |                        |
         | PACKET_OUT with LLDP   |                        |
         | to all switches        |                        |
   (Tm1) |<-----------------------|                        |
         |                        |                        |
         | PACKET_IN with LLDP    |                        |
         | rcvd from switch-1     |                        |
         |----------------------->|                        |
         |                        |                        |
         | PACKET_IN with LLDP    |                        |
         | rcvd from switch-2     |                        |
         |----------------------->|                        |
         |          .             |                        |
         |          .             |                        |
         |                        |                        |
         | PACKET_IN with LLDP    |                        |
         | rcvd from switch-n     |                        |
   (Tmn) |----------------------->|                        |
         |                        |                        |
         |                        | Query the controller for
         |                        | discovered n/w topo.(Di)
         |                        |<-----------------------|
         |                        |                        |
         |                        |<Compare the discovered |
         |                        | and the deployed n/w   |
         |                        | topology>              |
         |                        |                        |

   Legend:

   NB: Northbound
   SB: Southbound
   OF: OpenFlow
   Tm1: Time of reception of first LLDP message from controller
   Tmn: Time of last LLDP message sent to controller

   Discussion:

   The Network Topology Discovery Time can be obtained by calculating
   the time difference between the first PACKET_OUT with an LLDP
   message received from the controller (Tm1) and the last PACKET_IN
   with an LLDP message sent to the controller (Tmn) when the topology
   comparison is successful.

B.4.2. Asynchronous Message Processing Time

   Procedure:

   Network Devices            OpenFlow                SDN
                              Controller              Application
         |                        |                        |
         | PACKET_IN with single  |                        |
         | OFP match header       |                        |
    (T0) |----------------------->|                        |
         |                        |                        |
         | PACKET_OUT with single |                        |
         | OFP action header      |                        |
    (R0) |<-----------------------|                        |
         |          .             |                        |
         |          .             |                        |
         |          .             |                        |
         | PACKET_IN with single  |                        |
         | OFP match header       |                        |
    (Tn) |----------------------->|                        |
         |                        |                        |
         | PACKET_OUT with single |                        |
         | OFP action header      |                        |
    (Rn) |<-----------------------|                        |
         |                        |                        |
         |<Wait for the expiry of |                        |
         | the Test Duration (Td)>|                        |
         |                        |                        |
         |<Record the number of   |                        |
         | successful message     |                        |
         | exchanges (Nrx)>       |                        |
         |                        |                        |

   Legend:

   T0,T1, ..Tn are PACKET_IN message transmit timestamps.
   R0,R1, ..Rn are PACKET_OUT message receive timestamps.
   Nrx: Number of successful PACKET_IN/PACKET_OUT message exchanges

   Discussion:

   The Asynchronous Message Processing Time is obtained as
   ((R0-T0) + (R1-T1) + .. + (Rn-Tn)) / Nrx.

B.4.3. Asynchronous Message Processing Rate

   Procedure:

   Network Devices            OpenFlow                SDN
                              Controller              Application
         |                        |                        |
         | PACKET_IN with multiple|                        |
         | OFP match headers      |                        |
         |----------------------->|                        |
         |                        |                        |
         | PACKET_OUT with        |                        |
         | multiple OFP action    |                        |
         | headers                |                        |
         |<-----------------------|                        |
         |                        |                        |
         | PACKET_IN with multiple|                        |
         | OFP match headers      |                        |
         |----------------------->|                        |
         |                        |                        |
         | PACKET_OUT with        |                        |
         | multiple OFP action    |                        |
         | headers                |                        |
         |<-----------------------|                        |
         |          .             |                        |
         |          .             |                        |
         |          .             |                        |
         | PACKET_IN with multiple|                        |
         | OFP match headers      |                        |
         |----------------------->|                        |
         |                        |                        |
         | PACKET_OUT with        |                        |
         | multiple OFP action    |                        |
         | headers                |                        |
         |<-----------------------|                        |
         |                        |                        |

   Discussion:

   The Asynchronous Message Processing Rate is obtained by counting
   the number of OFP action headers received in all PACKET_OUT
   messages during the test duration.

B.4.4. Reactive Path Provisioning Time

   Procedure:

   Test Traffic   Test Traffic    Network Devices   OpenFlow
   Generator TP1  Generator TP2                     Controller
        |               |                |               |
        |               |G-ARP (D1)      |               |
        |               |--------------->|               |
        |               |                |               |
        |               |                |PACKET_IN(D1)  |
        |               |                |-------------->|
        |               |                |               |
        |Traffic (S1,D1)                 |               |
  (Tsf1)|------------------------------->|               |
        |               |                |               |
        |               |                |PACKET_IN(S1,D1)
        |               |                |-------------->|
        |               |                |               |
        |               |                | FLOW_MOD(D1)  |
        |               |                |<--------------|
        |               |                |               |
        |               |Traffic (S1,D1) |               |
        |         (Tdf1)|<---------------|               |
        |               |                |               |

   Legend:

   G-ARP: Gratuitous ARP message.
   Tsf1: Time of first frame sent from TP1
   Tdf1: Time of first frame received at TP2

   Discussion:

   The Reactive Path Provisioning Time can be obtained by finding the
   time difference between the transmit and receive times of the
   traffic (Tdf1 - Tsf1).

B.4.5. Proactive Path Provisioning Time

   Procedure:

   Test Traffic  Test Traffic   Network Devices  OpenFlow     SDN
   Generator TP1 Generator TP2                   Controller   Application
        |             |               |               |            |
        |             |G-ARP (D1)     |               |            |
        |             |-------------->|               |            |
        |             |               |               |            |
        |             |               |PACKET_IN(D1)  |            |
        |             |               |-------------->|            |
        |             |               |               |            |
        |Traffic (S1,D1)              |               |            |
  (Tsf1)|---------------------------->|               |            |
        |             |               |               |<Install    |
        |             |               |               | flow for   |
        |             |               |               | S1,D1>     |
        |             |               | FLOW_MOD(D1)  |            |
        |             |               |<--------------|            |
        |             |               |               |            |
        |             |Traffic (S1,D1)|               |            |
        |       (Tdf1)|<--------------|               |            |
        |             |               |               |            |

   Legend:

   G-ARP: Gratuitous ARP message.
   Tsf1: Time of first frame sent from TP1
   Tdf1: Time of first frame received at TP2

   Discussion:

   The Proactive Path Provisioning Time can be obtained by finding the
   time difference between the transmit and receive times of the
   traffic (Tdf1 - Tsf1).
B.4.6. Reactive Path Provisioning Rate

Procedure:

   Test Traffic   Test Traffic    Network Devices     OpenFlow
   Generator TP1  Generator TP2                       Controller
        |              |                |                  |
        |              |G-ARP (D1..Dn)  |                  |
        |              |--------------->|                  |
        |              |                |                  |
        |              |                |PACKET_IN(D1..Dn) |
        |              |                |----------------->|
        |              |                |                  |
        |Traffic (S1..Sn,D1..Dn)        |                  |
        |------------------------------>|                  |
        |              |                |                  |
        |              |                |PACKET_IN(S1..Sn, |
        |              |                |        D1..Dn)   |
        |              |                |----------------->|
        |              |                |                  |
        |              |                |   FLOW_MOD(S1)   |
        |              |                |<-----------------|
        |              |                |                  |
        |              |                |   FLOW_MOD(D1)   |
        |              |                |<-----------------|
        |              |                |                  |
        |              |                |   FLOW_MOD(S2)   |
        |              |                |<-----------------|
        |              |                |                  |
        |              |                |   FLOW_MOD(D2)   |
        |              |                |<-----------------|
        |              |                |        .         |
        |              |                |        .         |
        |              |                |                  |
        |              |                |   FLOW_MOD(Sn)   |
        |              |                |<-----------------|
        |              |                |                  |
        |              |                |   FLOW_MOD(Dn)   |
        |              |                |<-----------------|
        |              |                |                  |
        |              |Traffic (S1..Sn,|                  |
        |              |       D1..Dn)  |                  |
        |              |<---------------|                  |
        |              |                |                  |

Legend:

   G-ARP: Gratuitous ARP
   D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ...,
           Destination Endpoint n
   S1..Sn: Source Endpoint 1, Source Endpoint 2 ...,
           Source Endpoint n

Discussion:

The Reactive Path Provisioning Rate can be obtained from the total
number of frames received at TP2 over the test duration.

B.4.7. Proactive Path Provisioning Rate

Procedure:

   Test Traffic   Test Traffic    Network Devices     OpenFlow     SDN
   Generator TP1  Generator TP2                       Controller  Application
        |              |                |                  |           |
        |              |G-ARP (D1..Dn)  |                  |           |
        |              |--------------->|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(D1..Dn) |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |Traffic (S1..Sn,D1..Dn)        |                  |           |
  (Tsf1)|------------------------------>|                  |           |
        |              |                |                  |           |
        |              |                |                  |<Install   |
        |              |                |                  | flows for |
        |              |                |                  | S1..Sn,   |
        |              |                |                  | D1..Dn>   |
        |              |                |                  |<----------|
        |              |                |                  |           |
        |              |                |   FLOW_MOD(S1)   |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |                |   FLOW_MOD(D1)   |           |
        |              |                |<-----------------|           |
        |              |                |        .         |           |
        |              |                |        .         |           |
        |              |                |                  |           |
        |              |                |   FLOW_MOD(Sn)   |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |                |   FLOW_MOD(Dn)   |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |Traffic (S1..Sn,|                  |           |
        |              |       D1..Dn)  |                  |           |
        |        (Tdf1)|<---------------|                  |           |
        |              |                |                  |           |

Legend:

   G-ARP: Gratuitous ARP
   D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ...,
           Destination Endpoint n
   S1..Sn: Source Endpoint 1, Source Endpoint 2 ...,
           Source Endpoint n

Discussion:

The Proactive Path Provisioning Rate can be obtained from the total
number of frames received at TP2 over the test duration.

B.4.8. Network Topology Change Detection Time

Procedure:

   Network Devices                  OpenFlow            SDN
                                    Controller          Application
         |                             |                   |
         |PORT_STATUS with link down   |                   |
         |from S1                      |                   |
     (T0)|---------------------------->|                   |
         |                             |                   |
         |First PACKET_OUT with LLDP   |                   |
         |to OF Switch                 |                   |
     (T1)|<----------------------------|                   |
         |                             |                   |

Discussion:

The Network Topology Change Detection Time can be obtained by
finding the difference between the time the OpenFlow switch S1
sends the PORT_STATUS message (T0) and the time the OpenFlow
controller sends the first topology re-discovery message (T1) to
the OpenFlow switches, i.e., (T1 - T0).
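The rate and detection-time metrics of B.4.6 through B.4.8 can be
derived the same way. The following Python sketch is illustrative
only; the variable names and the per-second normalization of the
rate are assumptions of this example.

      # Illustrative sketch: rate metrics (B.4.6/B.4.7) and topology
      # change detection time (B.4.8).

      def path_provisioning_rate(frames_received_at_tp2, duration_s):
          """Frames received at TP2 over the test duration,
          expressed per second."""
          return frames_received_at_tp2 / duration_s

      def topology_change_detection_time(t0, t1):
          """T1 - T0: first topology re-discovery message from the
          controller (T1) minus the PORT_STATUS with link down sent
          by switch S1 (T0)."""
          return t1 - t0

      print("%.1f flows/s" % path_provisioning_rate(45000, 300))
      print("%.3f s" % topology_change_detection_time(5.000, 5.200))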
B.5. Scalability

B.5.1. Control Sessions Capacity

Procedure:

   Network Devices                        OpenFlow
                                          Controller
             |                                   |
             | OFPT_HELLO Exchange for Switch 1  |
             |<--------------------------------->|
             |                                   |
             | OFPT_HELLO Exchange for Switch 2  |
             |<--------------------------------->|
             |                .                  |
             |                .                  |
             |                .                  |
             | OFPT_HELLO Exchange for Switch n  |
             |X<------------------------------->X|
             |                                   |

Discussion:

The controller fails to establish the control session for switch n
(marked X in the diagram). The number of sessions successfully
established, i.e., n-1, provides the Control Sessions Capacity.

B.5.2. Network Discovery Size

Procedure:

   Network Devices                  OpenFlow            SDN
                                    Controller          Application
         |                             |                   |
         |      OFPT_HELLO Exchange    |                   |
         |<--------------------------->|                   |
         |                             |                   |
         |     PACKET_OUT with LLDP    |                   |
         |        to all switches      |                   |
         |<----------------------------|                   |
         |                             |                   |
         |     PACKET_IN with LLDP     |                   |
         |     rcvd from switch-1      |                   |
         |---------------------------->|                   |
         |                             |                   |
         |     PACKET_IN with LLDP     |                   |
         |     rcvd from switch-2      |                   |
         |---------------------------->|                   |
         |              .              |                   |
         |              .              |                   |
         |                             |                   |
         |     PACKET_IN with LLDP     |                   |
         |     rcvd from switch-n      |                   |
         |---------------------------->|                   |
         |                             |                   |
         |                             | Query the         |
         |                             | controller for    |
         |                             | discovered n/w    |
         |                             | topo. (N1)        |
         |                             |<------------------|
         |                             |                   |

Legend:

   n/w topo: Network Topology
   OF: OpenFlow

Discussion:

The value of N1 provides the Network Discovery Size value. The test
duration can be set to the stipulated time within which the user
expects the controller to complete the discovery process.

B.5.3. Forwarding Table Capacity

Procedure:

   Test Traffic      Network Devices     OpenFlow          SDN
   Generator TP1                         Controller        Application
        |                 |                  |                |
        |G-ARP (H1..Hn)   |                  |                |
        |---------------->|                  |                |
        |                 |                  |                |
        |                 |PACKET_IN(H1..Hn) |                |
        |                 |----------------->|                |
        |                 |                  |                |
        |                 |<Wait for 5 secs> |                |
        |                 |                  |                |
        |                 |                  |<Query for FWD  |
        |                 |                  | entries> (F1)  |
        |                 |                  |<---------------|
        |                 |                  |                |
        |                 |<Wait for 5 secs> |                |
        |                 |                  |                |
        |                 |                  |<Query for FWD  |
        |                 |                  | entries> (F2)  |
        |                 |                  |<---------------|
        |                 |                  |                |
        |                 |<Wait for 5 secs> |                |
        |                 |                  |                |
        |                 |                  |<Query for FWD  |
        |                 |                  | entries> (F3)  |
        |                 |                  |<---------------|
        |                 |                  |                |

Legend:

   G-ARP: Gratuitous ARP
   H1..Hn: Host 1 .. Host n
   FWD: Forwarding Table

Discussion:

Query the controller for the number of forwarding table entries
multiple times, until three consecutive queries return the same
value. The last value retrieved from the controller provides the
Forwarding Table Capacity. The query interval is user configurable;
the 5 seconds shown in this example is for representational
purposes.
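The stabilization rule above is easy to automate. The following
Python sketch is illustrative only; query_flow_count stands for
whatever northbound query the controller offers, and the interval
and poll-limit defaults are assumptions of this example (the query
interval remains user configurable, per the discussion above).

      # Illustrative sketch: poll the controller until three
      # consecutive queries return the same flow-entry count.
      import time

      def forwarding_table_capacity(query_flow_count, interval_s=5,
                                    needed=3, max_polls=1000):
          last, streak = None, 0
          for _ in range(max_polls):
              count = query_flow_count()   # assumed NB API query
              streak = streak + 1 if count == last else 1
              last = count
              if streak >= needed:
                  return last              # Forwarding Table Capacity
              time.sleep(interval_s)
          raise RuntimeError("flow count did not stabilize")

      # Example with a stubbed query that always returns 100000:
      print(forwarding_table_capacity(lambda: 100000, interval_s=0))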
B.6. Security

B.6.1. Exception Handling

Procedure:

   Test Traffic   Test Traffic    Network Devices     OpenFlow     SDN
   Generator TP1  Generator TP2                       Controller  Application
        |              |                |                  |           |
        |              |G-ARP (D1..Dn)  |                  |           |
        |              |--------------->|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(D1..Dn) |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |Traffic (S1..Sn,D1..Dn)        |                  |           |
        |------------------------------>|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(S1..Sa, |           |
        |              |                |        D1..Da)   |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |                |PACKET_IN(Sa+1..Sn|           |
        |              |                |    ,Da+1..Dn)    |           |
        |              |                |(1% incorrect OFP |           |
        |              |                |   Match header)  |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(D1..Dn)  |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(S1..Sa)  |           |
        |              |                |   OFP headers    |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |Traffic (S1..Sa,|                  |           |
        |              |       D1..Da)  |                  |           |
        |              |<---------------|                  |           |
        |              |                |                  |           |
        |              |<Record Rn1 at  |                  |           |
        |              | TP2>           |                  |           |
        |              |                |                  |           |
        |   <Repeat the test with 2% incorrect OFP match   |           |
        |    headers and record Rn2 at TP2>                |           |
        |              |                |                  |           |

Legend:

   G-ARP: Gratuitous ARP
   PACKET_IN(Sa+1..Sn,Da+1..Dn): OpenFlow PACKET_IN messages
          carrying incorrect OFP match headers (e.g., an incorrect
          version number)
   Rn1: Total number of frames received at Test Port 2 with
        1% incorrect frames
   Rn2: Total number of frames received at Test Port 2 with
        2% incorrect frames

Discussion:

The traffic rate sent towards the OpenFlow switch from Test Port 1
should be 1% higher than the Path Programming Rate. Rn1 provides
the controller's Path Provisioning Rate while handling 1% incorrect
frames, and Rn2 provides the controller's Path Provisioning Rate
while handling 2% incorrect frames.

The procedure defined above provides test steps to determine the
effect of handling error packets on the Path Programming Rate. The
same procedure can be adopted to determine the effect on the other
performance tests listed in this document.

B.6.2. Denial of Service Handling

Procedure:

   Test Traffic   Test Traffic    Network Devices     OpenFlow     SDN
   Generator TP1  Generator TP2                       Controller  Application
        |              |                |                  |           |
        |              |G-ARP (D1..Dn)  |                  |           |
        |              |--------------->|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(D1..Dn) |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |Traffic (S1..Sn,D1..Dn)        |                  |           |
        |------------------------------>|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(S1..Sn, |           |
        |              |                |        D1..Dn)   |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |                |TCP SYN Attack    |           |
        |              |                |from a switch     |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(D1..Dn)  |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(S1..Sn)  |           |
        |              |                |   OFP headers    |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |Traffic (S1..Sn,|                  |           |
        |              |       D1..Dn)  |                  |           |
        |              |<---------------|                  |           |
        |              |                |                  |           |
        |              |<Record Rn1 at  |                  |           |
        |              | TP2>           |                  |           |
        |              |                |                  |           |

Legend:

   G-ARP: Gratuitous ARP
   Rn1: Total number of frames received at Test Port 2

Discussion:

The TCP SYN attack should be launched from one of the
emulated/simulated OpenFlow switches. Rn1 provides the controller's
Path Programming Rate while handling the denial-of-service attack.

The procedure defined above provides test steps to determine the
effect of handling denial of service on the Path Programming Rate.
The same procedure can be adopted to determine the effect on the
other performance tests listed in this document.
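The effect of error traffic can be summarized by comparing the
degraded rates against a baseline run. The following Python sketch
is illustrative only; the frame counts, durations, and variable
names are assumptions of this example.

      # Illustrative sketch: express Rn1/Rn2 (B.6.1) or the rate
      # under a SYN attack (B.6.2) relative to the baseline Path
      # Programming Rate.

      def rate(frames_received_at_tp2, duration_s):
          return frames_received_at_tp2 / duration_s

      baseline = rate(45000, 300)   # no error traffic
      rn1 = rate(44100, 300)        # 1% incorrect frames
      rn2 = rate(43200, 300)        # 2% incorrect frames
      for label, r in (("baseline", baseline),
                       ("1% errors", rn1), ("2% errors", rn2)):
          print("%s: %.1f flows/s (%.1f%% of baseline)"
                % (label, r, 100.0 * r / baseline))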
B.7. Reliability

B.7.1. Controller Failover Time

Procedure:

   Test Traffic   Test Traffic    Network Devices     OpenFlow     SDN
   Generator TP1  Generator TP2                       Controller  Application
        |              |                |                  |           |
        |              |G-ARP (D1)      |                  |           |
        |              |--------------->|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(D1)     |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |Traffic (S1..Sn,D1)            |                  |           |
        |------------------------------>|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(S1,D1)  |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(D1)      |           |
        |              |                |<-----------------|           |
        |              |                |FLOW_MOD(S1)      |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |Traffic (S1,D1) |                  |           |
        |              |<---------------|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(S2,D1)  |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(S2)      |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |                |PACKET_IN(Sn-1,D1)|           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |                |PACKET_IN(Sn,D1)  |           |
        |              |                |----------------->|           |
        |              |                |        .         |           |
        |              |                |        .         |           |
        |              |                |                  |           |
        |              |                |<Bring down the   |           |
        |              |                | active controller|           |
        |              |                | instance>        |           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(Sn-1)    |           |
        |              |                |<-X---------------|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(Sn)      |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |Traffic (Sn,D1) |                  |           |
        |              |<---------------|                  |           |
        |              |                |                  |           |

Legend:

   G-ARP: Gratuitous ARP.

Discussion:

The time difference between the last valid frame received before
the traffic loss and the first frame received after the traffic
loss provides the Controller Failover Time.

If there is no frame loss during the controller failover, the
Controller Failover Time can be deemed negligible.

B.7.2. Network Re-Provisioning Time

Procedure:

   Test Traffic   Test Traffic    Network Devices     OpenFlow     SDN
   Generator TP1  Generator TP2                       Controller  Application
        |              |                |                  |           |
        |              |G-ARP (D1)      |                  |           |
        |              |--------------->|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(D1)     |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |G-ARP (S1)                     |                  |           |
        |------------------------------>|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(S1)     |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |Traffic (S1,D1,Seq.no (1..n))  |                  |           |
        |------------------------------>|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(S1,D1)  |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |Traffic (D1,S1, |                  |           |
        |              | Seq.no (1..n)) |                  |           |
        |              |--------------->|                  |           |
        |              |                |                  |           |
        |              |                |PACKET_IN(D1,S1)  |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(D1)      |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(S1)      |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |Traffic (S1,D1, |                  |           |
        |              |     Seq.no(1)) |                  |           |
        |              |<---------------|                  |           |
        |              |                |                  |           |
        |              |Traffic (S1,D1, |                  |           |
        |              |     Seq.no(2)) |                  |           |
        |              |<---------------|                  |           |
        |              |                |                  |           |
        |Traffic (D1,S1,Seq.no(1))      |                  |           |
        |<------------------------------|                  |           |
        |              |                |                  |           |
        |Traffic (D1,S1,Seq.no(2))      |                  |           |
        |<------------------------------|                  |           |
        |              |                |                  |           |
        |Traffic (D1,S1,Seq.no(x))      |                  |           |
        |<------------------------------|                  |           |
        |              |                |                  |           |
        |              |Traffic (S1,D1, |                  |           |
        |              |     Seq.no(x)) |                  |           |
        |              |<---------------|                  |           |
        |              |                |                  |           |
        |              |                |<Bring down a     |           |
        |              |                | switch in the    |           |
        |              |                | traffic path>    |           |
        |              |                |                  |           |
        |              |                |PORT_STATUS(Sa)   |           |
        |              |                |----------------->|           |
        |              |                |                  |           |
        |              |Traffic (S1,D1, |                  |           |
        |              |   Seq.no(n-1)) |                  |           |
        |              |  X<------------|                  |           |
        |              |                |                  |           |
        |Traffic (D1,S1,Seq.no(n-1))    |                  |           |
        |X------------------------------|                  |           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(D1)      |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |              |                |FLOW_MOD(S1)      |           |
        |              |                |<-----------------|           |
        |              |                |                  |           |
        |Traffic (D1,S1,Seq.no(n))      |                  |           |
        |<------------------------------|                  |           |
        |              |                |                  |           |
        |              |Traffic (S1,D1, |                  |           |
        |              |     Seq.no(n)) |                  |           |
        |              |<---------------|                  |           |
        |              |                |                  |           |

Legend:

   G-ARP: Gratuitous ARP message.
   Seq.no: Sequence number.
   Sa: Neighbour switch of the switch that was brought down.

Discussion:

The time difference between the last valid frame received before
the traffic loss (packet with sequence number x) and the first
frame received after the traffic loss (packet with sequence number
n) provides the Network Re-Provisioning Time.

Note that the test is valid only when the controller provisions the
alternate path upon network failure.
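Both reliability metrics reduce to finding the traffic interruption
in a frame log. One way to derive them, assuming frames are sent at
a steady rate and per-frame receive timestamps are available (both
assumptions of this example, as is the function name), is shown in
the following illustrative Python sketch:

      # Illustrative sketch: Controller Failover Time (B.7.1) or
      # Network Re-Provisioning Time (B.7.2) as the largest gap
      # between consecutive frames received at the test port.

      def interruption_time(rx_timestamps):
          """rx_timestamps: receive times of valid frames, in
          arrival order. Returns the time between the last frame
          before the loss and the first frame after it."""
          gaps = [b - a for a, b in
                  zip(rx_timestamps, rx_timestamps[1:])]
          return max(gaps) if gaps else 0.0

      # Steady 10 ms spacing with an outage from 0.050 s to 0.470 s:
      log = [0.010, 0.020, 0.030, 0.040, 0.050, 0.470, 0.480, 0.490]
      print("Interruption: %.3f s" % interruption_time(log))  # 0.420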
Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett-Packard
   8000 Foothills Blvd
   Roseville, CA 95747

   Email: mark.tassinari@hpe.com

   Vishwas Manral
   Nano Sec
   CA

   Email: vishwas.manral@gmail.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive
   Sunnyvale, CA

   Email: sbanks@encrypted.net