Internet-Draft                              Bhuvaneswaran Vengainathan
Network Working Group                                      Anton Basil
Intended Status: Informational                      Veryx Technologies
Expires: April 18, 2016                                 Mark Tassinari
                                                       Hewlett-Packard
                                                        Vishwas Manral
                                                            Ionos Corp
                                                           Sarah Banks
                                                        VSS Monitoring
                                                       October 19, 2015

        Benchmarking Methodology for SDN Controller Performance
           draft-ietf-bmwg-sdn-controller-benchmark-meth-00

Abstract

   This document defines the methodologies for benchmarking the
   performance of SDN controllers.  Terminology related to
   benchmarking SDN controllers is described in the companion
   terminology document.  SDN controllers have been implemented with
   many varying designs in order to achieve their intended network
   functionality.  Hence, the authors have taken the approach of
   considering an SDN controller as a black box, defining the
   methodology in a manner that is agnostic to protocols and network
   services supported by controllers.  The intent of this document is
   to provide a standard mechanism to measure the performance of all
   controller implementations.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on April 18, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Scope
   3. Test Setup
      3.1. Test setup - Controller working in Standalone Mode
      3.2. Test setup - Controller working in Cluster Mode
   4. Test Considerations
      4.1. Network Topology
      4.2. Test Traffic
      4.3. Connection Setup
      4.4. Measurement Point Specification and Recommendation
      4.5. Connectivity Recommendation
      4.6. Test Repeatability
   5. Benchmarking Tests
      5.1. Performance
         5.1.1. Network Topology Discovery Time
         5.1.2. Asynchronous Message Processing Time
         5.1.3. Asynchronous Message Processing Rate
         5.1.4. Reactive Path Provisioning Time
         5.1.5. Proactive Path Provisioning Time
         5.1.6. Reactive Path Provisioning Rate
         5.1.7. Proactive Path Provisioning Rate
         5.1.8. Network Topology Change Detection Time
      5.2. Scalability
         5.2.1. Control Session Capacity
         5.2.2. Network Discovery Size
         5.2.3. Forwarding Table Capacity
      5.3. Security
         5.3.1. Exception Handling
         5.3.2. Denial of Service Handling
      5.4. Reliability
         5.4.1. Controller Failover Time
         5.4.2. Network Re-Provisioning Time
   6. References
      6.1. Normative References
      6.2. Informative References
   7. IANA Considerations
   8. Security Considerations
   9. Acknowledgments
   Appendix A. Example Test Topologies
      A.1. Leaf-Spine Topology - Three Tier Network Architecture
      A.2. Leaf-Spine Topology - Two Tier Network Architecture
   Appendix B. Benchmarking Methodology using OpenFlow Controllers
      B.1. Protocol Overview
      B.2. Messages Overview
      B.3. Connection Overview
      B.4. Performance Benchmarking Tests
         B.4.1. Network Topology Discovery Time
         B.4.2. Asynchronous Message Processing Time
         B.4.3. Asynchronous Message Processing Rate
         B.4.4. Reactive Path Provisioning Time
         B.4.5. Proactive Path Provisioning Time
         B.4.6. Reactive Path Provisioning Rate
         B.4.7. Proactive Path Provisioning Rate
         B.4.8. Network Topology Change Detection Time
      B.5. Scalability
         B.5.1. Control Sessions Capacity
         B.5.2. Network Discovery Size
         B.5.3. Forwarding Table Capacity
      B.6. Security
         B.6.1. Exception Handling
         B.6.2. Denial of Service Handling
      B.7. Reliability
         B.7.1. Controller Failover Time
         B.7.2. Network Re-Provisioning Time
   Authors' Addresses

1. Introduction

   This document provides generic methodologies for benchmarking SDN
   controller performance.  An SDN controller may support many
   northbound and southbound protocols, implement a wide range of
   applications, and work alone or as a group to achieve the desired
   functionality.  This document considers an SDN controller as a
   black box, regardless of design and implementation.  The tests
   defined in this document can be used to benchmark an SDN
   controller for performance, scalability, reliability, and security
   independent of northbound and southbound protocols.  These tests
   can be performed on an SDN controller running as a virtual machine
   (VM) instance or on a bare metal server.  This document is
   intended for those who want to measure the performance of an SDN
   controller as well as compare the performance of various SDN
   controllers.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in RFC 2119
   [RFC2119].

2. Scope

   This document defines methodology to measure the networking
   metrics of SDN controllers.  For the purpose of this memo, the SDN
   controller is a function that manages and controls SDN nodes.
   Any SDN controller without a control capability is out of scope
   for this memo.  The tests defined in this document enable
   benchmarking of SDN controllers in two ways: as a standalone
   controller and as a cluster of homogeneous controllers.  These
   tests are recommended for execution in lab environments rather
   than in live network deployments.  Performance benchmarking of a
   federation of controllers is beyond the scope of this document.

3. Test Setup

   The tests defined in this document enable measurement of an SDN
   controller's performance in standalone mode and cluster mode.
   This section defines common reference topologies that are later
   referred to in individual tests.

3.1. Test setup - Controller working in Standalone Mode

   +-----------------------------------------------------------+
   |               Management Plane Test Emulator               |
   |                                                            |
   |                  --------------------                      |
   |                  | SDN Applications |                      |
   |                  --------------------                      |
   |                                                            |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 |
                                 | (Northbound interface)
                 +-------------------------------+
                 |       +----------------+      |
                 |       | SDN Controller |      |
                 |       +----------------+      |
                 |                               |
                 |    Device Under Test (DUT)    |
                 +-------------------------------+
                                 | (Southbound interface)
                                 |
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                            |
   |      +---------+                       +---------+         |
   |      |   SDN   |l1                ln-1 |   SDN   |         |
   |      | Node 1  |----- .... ------------| Node n  |         |
   |      +---------+                       +---------+         |
   |           |l0                               |ln            |
   |           |                                 |              |
   |           |                                 |              |
   |   +---------------+                 +---------------+      |
   |   | Test Traffic  |                 | Test Traffic  |      |
   |   |  Generator    |                 |  Generator    |      |
   |   |    (TP1)      |                 |    (TP2)      |      |
   |   +---------------+                 +---------------+      |
   |                                                            |
   |              Forwarding Plane Test Emulator                |
   +-----------------------------------------------------------+

                            Figure 1

3.2. Test setup - Controller working in Cluster Mode

   +-----------------------------------------------------------+
   |               Management Plane Test Emulator               |
   |                                                            |
   |                  --------------------                      |
   |                  | SDN Applications |                      |
   |                  --------------------                      |
   |                                                            |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 |
                                 | (Northbound interface)
   +---------------------------------------------------------+
   |                                                         |
   |  ------------------            ------------------       |
   |  | SDN Controller 1 | <--E/W--> | SDN Controller n |    |
   |  ------------------            ------------------       |
   |                                                         |
   |                Device Under Test (DUT)                  |
   +---------------------------------------------------------+
                                 | (Southbound interface)
                                 |
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                            |
   |      +---------+                       +---------+         |
   |      |   SDN   |l1                ln-1 |   SDN   |         |
   |      | Node 1  |----- .... ------------| Node n  |         |
   |      +---------+                       +---------+         |
   |           |l0                               |ln            |
   |           |                                 |              |
   |           |                                 |              |
   |   +---------------+                 +---------------+      |
   |   | Test Traffic  |                 | Test Traffic  |      |
   |   |  Generator    |                 |  Generator    |      |
   |   |    (TP1)      |                 |    (TP2)      |      |
   |   +---------------+                 +---------------+      |
   |                                                            |
   |              Forwarding Plane Test Emulator                |
   +-----------------------------------------------------------+

                            Figure 2

4. Test Considerations

4.1. Network Topology

   The test cases SHOULD use Leaf-Spine topology with at least 1 SDN
   node in the topology for benchmarking.  The test traffic
   generators TP1 and TP2 SHOULD be connected to the first and the
   last SDN leaf node.
   If a test case uses a test topology with 1 SDN node, the test
   traffic generators TP1 and TP2 SHOULD be connected to the same
   node.  However, to achieve a complete performance characterization
   of the SDN controller, it is recommended that the controller be
   benchmarked for many network topologies and a varying number of
   SDN nodes.  This document includes a few sample test topologies,
   defined in Appendix A, for reference.  Further, care should be
   taken to make sure that a loop prevention mechanism is enabled
   either in the SDN controller or in the network when the topology
   contains redundant network paths.

4.2. Test Traffic

   Test traffic is used to notify the controller about the arrival of
   new flows.  The test cases SHOULD use multiple frame sizes as
   recommended in RFC 2544 [RFC2544] for benchmarking.

4.3. Connection Setup

   There may be controller implementations that support unencrypted
   and encrypted network connections with SDN nodes.  Further, the
   controller may have backward compatibility with SDN nodes running
   older versions of southbound protocols.  It is recommended that
   the controller performance be measured with one or more applicable
   connection setup methods defined below.

   1. Unencrypted connection with SDN nodes, running the same
      protocol version.
   2. Unencrypted connection with SDN nodes, running different
      protocol versions.
      Example:
      a. Controller running current protocol version and switch
         running older protocol version
      b. Controller running older protocol version and switch
         running current protocol version
   3. Encrypted connection with SDN nodes, running the same protocol
      version.
   4. Encrypted connection with SDN nodes, running different protocol
      versions.
      Example:
      a. Controller running current protocol version and switch
         running older protocol version
      b. Controller running older protocol version and switch
         running current protocol version

4.4. Measurement Point Specification and Recommendation

   The measurement accuracy depends on several factors, including the
   point of observation where the indications are captured.  For
   example, the notification can be observed at the controller or at
   the test emulator.  The test operator SHOULD make the
   observations/measurements at the interfaces of the test emulator
   unless explicitly mentioned otherwise in the individual test.

4.5. Connectivity Recommendation

   The SDN controller in the test setup SHOULD be connected directly
   with the forwarding and the management plane test emulators to
   avoid any delays or failures introduced by intermediate devices
   during benchmarking tests.

4.6. Test Repeatability

   To increase confidence in the measured results, each test SHOULD
   be repeated a minimum of 10 times.

Test Reporting

   Each test has a reporting format that contains some global and
   identical reporting components, and some individual components
   that are specific to individual tests.  The following test
   configuration parameters and controller settings parameters MUST
   be reflected in the test report.

   Test Configuration Parameters:

   1. Controller name and version
   2. Northbound protocols and versions
   3. Southbound protocols and versions
   4. Controller redundancy mode (Standalone or Cluster Mode)
   5. Connection setup (Unencrypted or Encrypted)
   6. Network Topology (Mesh or Tree or Linear)
   7. SDN Node Type (Physical or Virtual or Emulated)
   8. Number of Nodes
   9. Number of Links
   10. Test Traffic Type
   11. Controller System Configuration (e.g., CPU, Memory, Operating
       System, Interface Speed, etc.)
   12. Reference Test Setup (e.g., Section 3.1, etc.)

   Controller Settings Parameters:

   1. Topology re-discovery timeout
   2. Controller redundancy mode (e.g., active-standby, etc.)
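   The parameter lists above map directly onto a machine-readable
   report record.  The following sketch shows one possible shape for
   such a record in Python; the field names and structure are the
   editors' own illustration and are not mandated by this document.

      # Minimal sketch of a test report record carrying the
      # configuration and controller settings parameters listed
      # above (illustrative only; field names are not normative).
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class SDNControllerTestReport:
          controller_name: str              # 1. name
          controller_version: str           # 1. version
          northbound_protocols: List[str]   # 2. e.g., ["REST"]
          southbound_protocols: List[str]   # 3. e.g., ["OpenFlow 1.4"]
          redundancy_mode: str              # 4. Standalone or Cluster
          connection_setup: str             # 5. Unencrypted/Encrypted
          network_topology: str             # 6. Mesh, Tree, or Linear
          node_type: str                    # 7. Physical/Virtual/Emulated
          number_of_nodes: int              # 8.
          number_of_links: int              # 9.
          test_traffic_type: str            # 10.
          system_configuration: str         # 11. CPU, memory, OS, NIC
          reference_test_setup: str         # 12. e.g., "Section 3.1"
          rediscovery_timeout_s: float      # controller setting 1
          controller_redundancy: str        # controller setting 2
          iteration_results: List[float] = field(default_factory=list)

   A per-test report would then add the test-specific fields called
   out in each test's "Reporting Format" section below.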
5. Benchmarking Tests

5.1. Performance

5.1.1. Network Topology Discovery Time

Objective:

   Measure the time taken by the SDN controller to discover the
   network topology (nodes and links), expressed in milliseconds.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller MUST support network discovery.
   2. The tester should be able to retrieve the discovered topology
      information either through the controller's management
      interface or northbound interface to determine if the
      discovery was successful and complete.
   3. Ensure that the controller's topology re-discovery timeout has
      been set to the maximum value to avoid initiation of the
      re-discovery process in the middle of the test.

Procedure:

   1. Ensure that the controller is operational and that its network
      applications and northbound and southbound interfaces are up
      and running.
   2. Establish the network connections between the controller and
      the SDN nodes.
   3. Record the time of the first discovery message (Tm1) received
      from the controller at the forwarding plane test emulator
      interface I1.
   4. Query the controller every 3 seconds to obtain the discovered
      network topology information through the northbound interface
      or the management interface and compare it with the deployed
      network topology information.
   5. Stop the test when the discovered topology information matches
      the deployed network topology, or when the discovered topology
      information returns the same details for 3 consecutive
      queries.
   6. Record the time of the last discovery message (Tmn) sent to
      the controller from the forwarding plane test emulator
      interface (I1) when the test completed successfully (e.g.,
      when the topology matches).

Measurement:

   Topology Discovery Time Tr1 = Tmn - Tm1.

                                     Tr1 + Tr2 + Tr3 .. Trn
   Average Topology Discovery Time = ----------------------
                                      Total Test Iterations

Reporting Format:

   The Topology Discovery Time results MUST be reported in the
   format of a table, with a row for each successful iteration.  The
   last row of the table indicates the average Topology Discovery
   Time.

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph.  The X coordinate SHOULD be the number of nodes (N), and
   the Y coordinate SHOULD be the average Topology Discovery Time.

   If this test is repeated with the same number of nodes over
   different topologies, the results SHOULD be reported in the form
   of a graph.  The X coordinate SHOULD be the topology type, and
   the Y coordinate SHOULD be the average Topology Discovery Time.
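   As a worked illustration of the measurement above (not part of
   the formal methodology), the following sketch computes the
   per-iteration Topology Discovery Time and the reported average
   from captured (Tm1, Tmn) timestamp pairs; the timestamp values
   are invented for the example.

      # Topology Discovery Time per iteration (Tr = Tmn - Tm1) and
      # the average over all successful iterations.  Timestamps are
      # in seconds; results are reported in milliseconds.
      iterations = [
          (1000.000, 1002.450),  # (Tm1, Tmn), iteration 1
          (2000.000, 2002.310),  # iteration 2
          (3000.000, 3002.620),  # iteration 3
      ]

      times_ms = [(tmn - tm1) * 1000.0 for tm1, tmn in iterations]
      for i, tr in enumerate(times_ms, 1):
          print(f"Iteration {i}: Topology Discovery Time = {tr:.1f} ms")
      print(f"Average Topology Discovery Time = "
            f"{sum(times_ms) / len(times_ms):.1f} ms")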
5.1.2. Asynchronous Message Processing Time

Objective:

   Measure the time taken by the SDN controller to process an
   asynchronous message, expressed in milliseconds.

Reference Test Setup:

   This test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller MUST have completed the network topology
      discovery for the connected SDN nodes.

Procedure:

   1. Generate asynchronous messages from every connected SDN node
      to the SDN controller, one at a time in series from the
      forwarding plane test emulator for the test duration.
   2. Record every request transmit timestamp (T1) and the
      corresponding response receive timestamp (R1) at the
      forwarding plane test emulator interface (I1) for every
      successful message exchange.

Measurement:

                                              (R1-T1) + (R2-T2) .. (Rn-Tn)
   Asynchronous Message Processing Time Tr1 = ----------------------------
                                                          Nrx

   Where Nrx is the total number of successful messages exchanged.

                                                  Tr1 + Tr2 + Tr3 .. Trn
   Average Asynchronous Message Processing Time = ----------------------
                                                   Total Test Iterations

Reporting Format:

   The Asynchronous Message Processing Time results MUST be reported
   in the format of a table with a row for each iteration.  The last
   row of the table indicates the average Asynchronous Message
   Processing Time.

   The report should capture the following information in addition
   to the configuration parameters captured in section 5:

   - Successful messages exchanged (Nrx)

   If this test is repeated with a varying number of nodes with the
   same topology, the results SHOULD be reported in the form of a
   graph.  The X coordinate SHOULD be the number of nodes (N), and
   the Y coordinate SHOULD be the average Asynchronous Message
   Processing Time.

   If this test is repeated with the same number of nodes using
   different topologies, the results SHOULD be reported in the form
   of a graph.  The X coordinate SHOULD be the topology type, and
   the Y coordinate SHOULD be the average Asynchronous Message
   Processing Time.

5.1.3. Asynchronous Message Processing Rate

Objective:

   Measure the maximum rate of asynchronous messages (session
   aliveness check messages, new flow arrival notification messages,
   etc.) a controller can process within the test duration,
   expressed in messages processed per second.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller MUST have completed the network topology
      discovery for the connected SDN nodes.

Procedure:

   1. Generate asynchronous messages continuously at the maximum
      possible rate on the established connections from all the
      connected SDN nodes in the forwarding plane test emulator for
      the Test Duration (Td).
   2. Record the total number of responses received from the
      controller (Nrx) as well as the number of messages sent (Ntx)
      to the controller within the Test Duration (Td) at the
      forwarding plane test emulator interface (I1).

Measurement:

                                               Nrx
   Asynchronous Message Processing Rate Tr1 = -----
                                               Td

                                                  Tr1 + Tr2 + Tr3 .. Trn
   Average Asynchronous Message Processing Rate = ----------------------
                                                   Total Test Iterations

   Loss Ratio = (Ntx - Nrx) / Ntx.
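   To make the rate and loss computations concrete, the following
   sketch (illustrative only; the counter values are invented)
   derives both quantities from per-iteration counters recorded at
   interface I1.

      # Asynchronous Message Processing Rate (Nrx/Td) and Loss Ratio
      # ((Ntx - Nrx)/Ntx) per iteration, plus the average rate.
      iterations = [
          {"Ntx": 100000, "Nrx": 99200, "Td": 10.0},  # iteration 1
          {"Ntx": 100000, "Nrx": 98900, "Td": 10.0},  # iteration 2
          {"Ntx": 100000, "Nrx": 99050, "Td": 10.0},  # iteration 3
      ]

      rates = [it["Nrx"] / it["Td"] for it in iterations]
      for it, rate in zip(iterations, rates):
          loss = (it["Ntx"] - it["Nrx"]) / it["Ntx"]
          print(f"rate = {rate:.0f} msg/s, loss ratio = {loss:.2%}")
      print(f"Average rate = {sum(rates) / len(rates):.0f} msg/s")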
Reporting Format:

   The Asynchronous Message Processing Rate results MUST be reported
   in the format of a table with a row for each iteration.  The last
   row of the table indicates the average Asynchronous Message
   Processing Rate.

   The report should capture the following information in addition
   to the configuration parameters captured in section 5:

   - Offered rate (Ntx)

   - Loss Ratio

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph.  The X coordinate SHOULD be the number of nodes (N), and
   the Y coordinate SHOULD be the average Asynchronous Message
   Processing Rate.

   If this test is repeated with the same number of nodes over
   different topologies, the results SHOULD be reported in the form
   of a graph.  The X coordinate SHOULD be the topology type, and
   the Y coordinate SHOULD be the average Asynchronous Message
   Processing Rate.

5.1.4. Reactive Path Provisioning Time

Objective:

   Measure the time taken by the controller to set up a path
   reactively between source and destination nodes, expressed in
   milliseconds.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller MUST contain the network topology information
      for the deployed network topology.
   2. The controller should have knowledge about the location of the
      destination endpoint for which the path has to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.
   3. Ensure that the default action for 'flow miss' in the SDN node
      is configured to 'send to controller'.
   4. Ensure that each SDN node in a path requires the controller to
      make the forwarding decision while provisioning the entire
      path.

Procedure:

   1. Send a single traffic stream from the test traffic generator
      TP1 to the test traffic generator TP2.
   2. Record the time of the first flow provisioning request message
      sent to the controller (Tsf1) from the SDN node at the
      forwarding plane test emulator interface (I1).
   3. Wait for the arrival of the first traffic frame at the traffic
      endpoint TP2 or the expiry of the Test Duration (Td).
   4. Record the time of the last flow provisioning response message
      received from the controller (Tdf1) to the SDN node at the
      forwarding plane test emulator interface (I1).

Measurement:

   Reactive Path Provisioning Time Tr1 = Tdf1 - Tsf1.

                                             Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Time = ----------------------
                                              Total Test Iterations

Reporting Format:

   The Reactive Path Provisioning Time results MUST be reported in
   the format of a table with a row for each iteration.  The last
   row of the table indicates the Average Reactive Path Provisioning
   Time.

   The report should capture the following information in addition
   to the configuration parameters captured in section 5:

   - Number of SDN nodes in the path
5.1.5. Proactive Path Provisioning Time

Objective:

   Measure the time taken by the controller to set up a path
   proactively between source and destination nodes, expressed in
   milliseconds.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller MUST contain the network topology information
      for the deployed network topology.
   2. The controller should have knowledge about the location of the
      destination endpoint for which the path has to be provisioned.
      This can be achieved through dynamic learning or static
      provisioning.
   3. Ensure that the default action for 'flow miss' in the SDN node
      is 'drop'.

Procedure:

   1. Send a single traffic stream from test traffic generator TP1
      to test traffic generator TP2.
   2. Install the flow entries to reach from test traffic generator
      TP1 to test traffic generator TP2 through the controller's
      northbound or management interface.
   3. Wait for the arrival of the first traffic frame at test
      traffic generator TP2 or the expiry of the Test Duration (Td).
   4. Record the time when the proactive flow is provisioned in the
      controller (Tsf1) at the management plane test emulator
      interface I2.
   5. Record the time of the last flow provisioning message received
      from the controller (Tdf1) at the forwarding plane test
      emulator interface I1.

Measurement:

   Proactive Path Provisioning Time Tr1 = Tdf1 - Tsf1.

                                              Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Time = ----------------------
                                               Total Test Iterations

Reporting Format:

   The Proactive Path Provisioning Time results MUST be reported in
   the format of a table with a row for each iteration.  The last
   row of the table indicates the Average Proactive Path
   Provisioning Time.

   The report should capture the following information in addition
   to the configuration parameters captured in section 5:

   - Number of SDN nodes in the path

5.1.6. Reactive Path Provisioning Rate

Objective:

   Measure the maximum number of independent paths a controller can
   concurrently establish between source and destination nodes
   reactively within the test duration, expressed in paths per
   second.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller MUST contain the network topology information
      for the deployed network topology.
   2. The controller should have knowledge about the location of the
      destination addresses for which the paths have to be
      provisioned.  This can be achieved through dynamic learning or
      static provisioning.
   3. Ensure that the default action for 'flow miss' in the SDN node
      is configured to 'send to controller'.
   4. Ensure that each SDN node in a path requires the controller to
      make the forwarding decision while provisioning the entire
      path.

Procedure:

   1. Send traffic with unique source and destination addresses from
      test traffic generator TP1.
   2. Record the total number of unique traffic frames (Ndf)
      received at the test traffic generator TP2 within the Test
      Duration (Td).

Measurement:

                                          Ndf
   Reactive Path Provisioning Rate Tr1 = -----
                                          Td

                                             Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Rate = ----------------------
                                              Total Test Iterations

Reporting Format:

   The Reactive Path Provisioning Rate results MUST be reported in
   the format of a table with a row for each iteration.  The last
   row of the table indicates the Average Reactive Path Provisioning
   Rate.

   The report should capture the following information in addition
   to the configuration parameters captured in section 5:

   - Number of SDN nodes in the path

   - Offered rate
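   The rate above depends on counting unique frames received at TP2
   within the Test Duration.  The following sketch (illustrative
   only; the receive log and addresses are invented) shows one way
   to derive Ndf and the rate from a log of (timestamp, source,
   destination) tuples.

      # Derive Ndf (unique frames at TP2 within Td) and the Reactive
      # Path Provisioning Rate from a receive log of (ts, src, dst).
      TEST_START, TD = 0.0, 10.0  # seconds

      rx_log = [
          (0.8, "10.0.0.1", "10.0.1.1"),
          (1.1, "10.0.0.2", "10.0.1.2"),
          (1.1, "10.0.0.2", "10.0.1.2"),  # duplicate: counted once
          (2.4, "10.0.0.3", "10.0.1.3"),
      ]

      unique_flows = {(src, dst) for ts, src, dst in rx_log
                      if TEST_START <= ts < TEST_START + TD}
      ndf = len(unique_flows)
      print(f"Ndf = {ndf}, rate = {ndf / TD:.2f} paths/s")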
5.1.7. Proactive Path Provisioning Rate

Objective:

   Measure the maximum number of independent paths a controller can
   concurrently establish between source and destination nodes
   proactively within the test duration, expressed in paths per
   second.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller MUST contain the network topology information
      for the deployed network topology.
   2. The controller should have knowledge about the location of the
      destination addresses for which the paths have to be
      provisioned.  This can be achieved through dynamic learning or
      static provisioning.
   3. Ensure that the default action for 'flow miss' in the SDN node
      is 'drop'.

Procedure:

   1. Send traffic continuously with unique source and destination
      addresses from test traffic generator TP1.
   2. Install corresponding flow entries to reach from the simulated
      sources at test traffic generator TP1 to the simulated
      destinations at test traffic generator TP2 through the
      controller's northbound or management interface.
   3. Record the total number of unique traffic frames received
      (Ndf) at the test traffic generator TP2 within the Test
      Duration (Td).

Measurement:

                                           Ndf
   Proactive Path Provisioning Rate Tr1 = -----
                                           Td

                                              Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Rate = ----------------------
                                               Total Test Iterations

Reporting Format:

   The Proactive Path Provisioning Rate results MUST be reported in
   the format of a table with a row for each iteration.  The last
   row of the table indicates the Average Proactive Path
   Provisioning Rate.

   The report should capture the following information in addition
   to the configuration parameters captured in section 5:

   - Number of SDN nodes in the path

   - Offered rate

5.1.8. Network Topology Change Detection Time

Objective:

   Measure the time taken by the controller to detect any changes in
   the network topology, expressed in milliseconds.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller MUST have discovered the network topology
      information for the deployed network topology.
   2. The periodic network discovery operation should be configured
      to twice the Test Duration (Td) value.

Procedure:

   1. Trigger a topology change event by bringing down an active SDN
      node in the topology.
   2. Record the time when the first topology change notification is
      sent to the controller (Tcn) at the forwarding plane test
      emulator interface (I1).
   3. Stop the test when the controller sends the first topology
      re-discovery message to the SDN node or upon the expiry of the
      test interval (Td).
   4. Record the time when the first topology re-discovery message
      is received from the controller (Tcd) at the forwarding plane
      test emulator interface (I1).

Measurement:

   Network Topology Change Detection Time Tr1 = Tcd - Tcn.

                                                    Tr1 + Tr2 + Tr3 .. Trn
   Average Network Topology Change Detection Time = ----------------------
                                                     Total Test Iterations

Reporting Format:

   The Network Topology Change Detection Time results MUST be
   reported in the format of a table with a row for each iteration.
   The last row of the table indicates the average Network Topology
   Change Detection Time.
5.2. Scalability

5.2.1. Control Session Capacity

Objective:

   Measure the maximum number of control sessions that the
   controller can maintain.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Procedure:

   1. Establish a control connection with the controller from every
      SDN node emulated in the forwarding plane test emulator.
   2. Stop the test when the controller starts dropping the control
      connections.
   3. Record the number of successful connections established with
      the controller (CCn) at the forwarding plane test emulator.

Measurement:

   Control Sessions Capacity = CCn.

Reporting Format:

   The Control Session Capacity results MUST be reported in addition
   to the configuration parameters captured in section 5.

5.2.2. Network Discovery Size

Objective:

   Measure the network size (number of nodes, links, and hosts) that
   a controller can discover.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller MUST support automatic network discovery.
   2. The tester should be able to retrieve the discovered topology
      information either through the controller's management
      interface or northbound interface.

Procedure:

   1. Establish the network connections between the controller and
      the network nodes.
   2. Query the controller for the discovered network topology
      information and compare it with the deployed network topology
      information.
   3. If the comparison is successful, increase the number of nodes
      by 1 and repeat the test.
   4. If the comparison fails, decrease the number of nodes by 1 and
      repeat the test.
   5. Continue the test until the comparison of step 4 is
      successful.
   6. Record the number of nodes for the last iteration (Ns) where
      the topology comparison was successful.

Measurement:

   Network Discovery Size = Ns.

Reporting Format:

   The Network Discovery Size results MUST be reported in addition
   to the configuration parameters captured in section 5.
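   The step-up/step-down procedure above is a linear search for the
   largest discoverable topology.  The following control-loop sketch
   illustrates it; deploy_topology() and topology_matches() are
   hypothetical hooks into the forwarding plane test emulator and
   the controller's northbound interface, not real APIs.

      # Linear search for the Network Discovery Size (Ns), following
      # the procedure above.  deploy_topology(n) and
      # topology_matches(n) are hypothetical test-harness hooks.
      def find_network_discovery_size(deploy_topology,
                                      topology_matches, start_nodes):
          n = start_nodes
          failed_before = False
          while True:
              deploy_topology(n)
              if topology_matches(n):
                  if failed_before:   # success reached by stepping down
                      return n        # this is Ns
                  n += 1              # step 3: grow the topology
              else:
                  failed_before = True
                  n -= 1              # step 4: shrink the topology

      # Example (with real hooks supplied by the test harness):
      # ns = find_network_discovery_size(deploy, matches, 10)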
5.2.3. Forwarding Table Capacity

Objective:

   Measure the maximum number of flow entries a controller can
   manage in its forwarding table.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. The controller's forwarding table should be empty.
   2. The flow idle time MUST be set to a high or infinite value.
   3. The controller MUST have completed network topology discovery.
   4. The tester should be able to retrieve the forwarding table
      information either through the controller's management
      interface or northbound interface.

Procedure:

   Reactive Flow Provisioning Mode:

   1. Send bi-directional traffic continuously with unique source
      and/or destination addresses from test traffic generators TP1
      and TP2 at the asynchronous message processing rate of the
      controller.
   2. Query the controller at a regular interval (e.g., 5 seconds)
      for the number of learnt flow entries from its northbound
      interface.
   3. Stop the test when the retrieved value is constant for three
      consecutive iterations and record the value received from the
      last query (Nrp).

   Proactive Flow Provisioning Mode:

   1. Install unique flows continuously through the controller's
      northbound or management interface until a failure response is
      received from the controller.
   2. Record the total number of successful responses (Nrp).

Note:

   Some controller designs for proactive flow provisioning mode may
   require the switch to send flow setup requests in order to
   generate flow setup responses.  In such cases, it is recommended
   to generate bi-directional traffic for the provisioned flows.

Measurement:

   Proactive Flow Provisioning Mode:

   Max Flow Entries = Total number of flows provisioned (Nrp)

   Reactive Flow Provisioning Mode:

   Max Flow Entries = Total number of learnt flow entries (Nrp)

   Forwarding Table Capacity = Max Flow Entries.

Reporting Format:

   The Forwarding Table Capacity results MUST be tabulated with the
   following information in addition to the configuration parameters
   captured in section 5:

   - Provisioning Type (Proactive/Reactive)

5.3. Security

5.3.1. Exception Handling

Objective:

   Determine the effect of handling error packets and notifications
   on performance tests.  The impact MUST be measured for the
   following performance tests:

   a. Path Provisioning Rate

   b. Path Provisioning Time

   c. Network Topology Change Detection Time

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. This test MUST be performed after obtaining the baseline
      measurement results for the above performance tests.
   2. Ensure that the invalid messages are not dropped by the
      intermediate devices connecting the controller and the SDN
      nodes.

Procedure:

   1. Perform the above listed performance tests and send 1% of the
      messages from the Asynchronous Message Processing Rate as
      invalid messages from the connected SDN nodes emulated at the
      forwarding plane test emulator.
   2. Perform the above listed performance tests and send 2% of the
      messages from the Asynchronous Message Processing Rate as
      invalid messages from the connected SDN nodes emulated at the
      forwarding plane test emulator.

Note:

   Invalid messages can be frames with incorrect protocol fields or
   any form of failure notifications sent towards the controller.

Measurement:

   Measurement MUST be done as per the equation defined in the
   corresponding performance test's measurement section.

Reporting Format:

   The Exception Handling results MUST be reported in the format of
   a table with a column for each of the below parameters and a row
   for each of the listed performance tests:

   - Without Exceptions

   - With 1% Exceptions

   - With 2% Exceptions

5.3.2. Denial of Service Handling

Objective:

   Determine the effect of handling DoS attacks on performance and
   scalability tests.  The impact MUST be measured for the following
   tests:

   a. Path Provisioning Rate

   b. Path Provisioning Time

   c. Network Topology Change Detection Time

   d. Network Discovery Size

Reference Test Setup:

   The test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   This test MUST be performed after obtaining the baseline
   measurement results for the above tests.

Procedure:

   1. Perform the listed tests and launch a DoS attack towards the
      controller while the test is running.

Note:

   DoS attacks can be launched on one of the following interfaces:

   a. Northbound (e.g., sending a huge number of requests on the
      northbound interface)
   b. Management (e.g., ping requests to the controller's management
      interface)
   c. Southbound (e.g., TCP SYN messages on the southbound
      interface)

Measurement:

   Measurement MUST be done as per the equation defined in the
   corresponding test's measurement section.

Reporting Format:

   The DoS Attacks Handling results MUST be reported in the format
   of a table with a column for each of the below parameters and a
   row for each of the listed tests:

   - Without any attacks

   - With attacks

   The report should also specify the nature of the attack and the
   interface.

5.4. Reliability

5.4.1. Controller Failover Time

Objective:

   Measure the time taken to switch from an active controller to the
   backup controller when the controllers work in redundancy mode
   and the active controller fails.

Reference Test Setup:

   The test SHOULD use the test setup described in section 3.2 of
   this document.

Prerequisite:

   1. Master controller election MUST be completed.
   2. Nodes are connected to the controller cluster as per the
      Redundancy Mode (RM).
   3. The controller cluster should have completed the network
      topology discovery.
   4. The SDN node MUST send all new flows to the controller when it
      receives traffic from the test traffic generator.
   5. The controller should have learnt the location of the
      destination (D1) at test traffic generator TP2.

Procedure:

   1. Send uni-directional traffic continuously with incremental
      sequence numbers and source addresses from test traffic
      generator TP1 at the rate that the controller can process
      without any drops.
   2. Ensure that there are no packet drops observed at the test
      traffic generator TP2.
   3. Bring down the active controller.
   4. Stop the test when the first frame is received on TP2 after
      the failover operation.
   5. Record the time at which the last valid frame was received
      (T1) at test traffic generator TP2 before the sequence error
      and the time at which the first valid frame was received (T2)
      after the sequence error at TP2.

Measurement:

   Controller Failover Time = (T2 - T1)

   Packet Loss = Number of missing packet sequences.

Reporting Format:

   The Controller Failover Time results MUST be tabulated with the
   following information:

   - Number of cluster nodes

   - Redundancy mode

   - Controller Failover Time

   - Packet Loss

   - Cluster keep-alive interval

5.4.2. Network Re-Provisioning Time

Objective:

   Compute the time taken by the controller to re-route the traffic
   when there is a failure in the existing traffic paths.

Reference Test Setup:

   This test SHOULD use one of the test setups described in section
   3.1 or section 3.2 of this document.

Prerequisite:

   1. A network with the given number of nodes and redundant paths
      MUST be deployed.
   2. The controller MUST have knowledge about the location of test
      traffic generators TP1 and TP2.
   3. Ensure that the controller does not pre-provision the
      alternate path in the emulated SDN nodes at the forwarding
      plane test emulator.

Procedure:

   1. Send bi-directional traffic continuously with unique sequence
      numbers from TP1 and TP2.
   2. Bring down a link or switch in the traffic path.
   3. Stop the test after receiving the first frame after network
      re-convergence.
   4. Record the time of the last received frame prior to the frame
      loss at TP2 (TP2-Tlfr) and the time of the first frame
      received after the frame loss at TP2 (TP2-Tffr).
   5. Record the time of the last received frame prior to the frame
      loss at TP1 (TP1-Tlfr) and the time of the first frame
      received after the frame loss at TP1 (TP1-Tffr).

Measurement:

   Forward Direction Path Re-Provisioning Time (FDRT)
       = (TP2-Tffr - TP2-Tlfr)

   Reverse Direction Path Re-Provisioning Time (RDRT)
       = (TP1-Tffr - TP1-Tlfr)

   Network Re-Provisioning Time = (FDRT + RDRT)/2

   Forward Direction Packet Loss = Number of missing sequence frames
   at TP2

   Reverse Direction Packet Loss = Number of missing sequence frames
   at TP1

Reporting Format:

   The Network Re-Provisioning Time results MUST be tabulated with
   the following information:

   - Number of nodes in the primary path

   - Number of nodes in the alternate path

   - Network Re-Provisioning Time

   - Forward Direction Packet Loss

   - Reverse Direction Packet Loss
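   The measurement above combines four timestamps and two
   sequence-gap counts.  As a worked illustration (the log contents
   are invented, and a single loss gap per direction is assumed),
   the following sketch computes FDRT, RDRT, the Network
   Re-Provisioning Time, and the per-direction packet loss from
   per-port receive logs of (timestamp, sequence number) pairs.

      # Re-provisioning statistics from (timestamp, sequence) logs.
      def gap_stats(rx_log):
          """Return (Tlfr, Tffr, missing) around the first gap."""
          for (t_prev, s_prev), (t_next, s_next) in zip(rx_log,
                                                        rx_log[1:]):
              if s_next != s_prev + 1:      # sequence gap found
                  return t_prev, t_next, s_next - s_prev - 1
          raise ValueError("no sequence gap in log")

      tp2_log = [(0.10, 1), (0.20, 2), (0.95, 9)]  # forward, at TP2
      tp1_log = [(0.12, 1), (0.22, 2), (0.90, 8)]  # reverse, at TP1

      tp2_tlfr, tp2_tffr, fwd_loss = gap_stats(tp2_log)
      tp1_tlfr, tp1_tffr, rev_loss = gap_stats(tp1_log)

      fdrt = tp2_tffr - tp2_tlfr    # forward re-provisioning time
      rdrt = tp1_tffr - tp1_tlfr    # reverse re-provisioning time
      print(f"Network Re-Provisioning Time = {(fdrt + rdrt) / 2:.3f} s")
      print(f"Forward/Reverse Packet Loss  = {fwd_loss}/{rev_loss}")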
6. References

6.1. Normative References

   [RFC2119]  S. Bradner, "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2544]  S. Bradner, J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [RFC2330]  V. Paxson, G. Almes, J. Mahdavi, M. Mathis, "Framework
              for IP Performance Metrics", RFC 2330, May 1998.

   [RFC6241]  R. Enns, M. Bjorklund, J. Schoenwaelder, A. Bierman,
              "Network Configuration Protocol (NETCONF)", RFC 6241,
              July 2011.

   [RFC6020]  M. Bjorklund, "YANG - A Data Modeling Language for the
              Network Configuration Protocol (NETCONF)", RFC 6020,
              October 2010.

   [RFC5440]  JP. Vasseur, JL. Le Roux, "Path Computation Element
              (PCE) Communication Protocol (PCEP)", RFC 5440,
              March 2009.

   [OpenFlow Switch Specification]  ONF, "OpenFlow Switch
              Specification", Version 1.4.0 (Wire Protocol 0x05),
              October 14, 2013.

   [I-D.sdn-controller-benchmark-term]  Bhuvaneswaran. V, Anton
              Basil, Mark. T, Vishwas Manral, Sarah Banks,
              "Terminology for Benchmarking SDN Controller
              Performance",
              draft-ietf-bmwg-sdn-controller-benchmark-term-00
              (Work in progress), October 19, 2015.

6.2. Informative References

   [I-D.i2rs-architecture]  A. Atlas, J. Halpern, S. Hares, D. Ward,
              T. Nadeau, "An Architecture for the Interface to the
              Routing System", draft-ietf-i2rs-architecture-09
              (Work in progress), March 6, 2015.

   [OpenContrail]  Ankur Singla, Bruno Rijsman, "OpenContrail
              Architecture Documentation",
              http://opencontrail.org/opencontrail-architecture-documentation

   [OpenDaylight]  OpenDaylight Controller: Architectural Framework,
              https://wiki.opendaylight.org/view/OpenDaylight_Controller

7. IANA Considerations

   This document does not have any IANA requests.

8. Security Considerations

   The benchmarking tests described in this document are limited to
   the performance characterization of controllers in a lab
   environment with an isolated network.

9. Acknowledgments

   The authors would like to thank the following individuals for
   providing their valuable comments on the earlier versions of this
   document: Al Morton (AT&T), Sandeep Gangadharan (HP), M.
   Georgescu (NAIST), Andrew McGregor (Google), Scott Bradner
   (Harvard University), Jay Karthik (Cisco), Ramakrishnan (Dell),
   Khasanov Boris (Huawei), and Brian Castelli (Spirent).

   This document was prepared using 2-Word-v2.0.template.dot.

Appendix A. Example Test Topologies
A.1. Leaf-Spine Topology - Three Tier Network Architecture

                         +----------+
                         |   SDN    |
                         |   Node   |  (Core)
                         +----------+
                          /        \
                         /          \
                 +------+            +------+
                 | SDN  |    ....    | SDN  |  (Spine)
                 | Node |            | Node |
                 +------+            +------+
                  /    \              /    \
                 /      \            /      \
             l1 /        \          /        \ ln-1
               /          \        /          \
          +--------+         +-------+
          |  SDN   |  ....   |  SDN  |  (Leaf)
          |  Node  |         |  Node |
          +--------+         +-------+

A.2. Leaf-Spine Topology - Two Tier Network Architecture

                 +------+            +------+
                 | SDN  |    ....    | SDN  |  (Spine)
                 | Node |            | Node |
                 +------+            +------+
                  /    \              /    \
                 /      \            /      \
             l1 /        \          /        \ ln-1
               /          \        /          \
          +--------+         +-------+
          |  SDN   |  ....   |  SDN  |  (Leaf)
          |  Node  |         |  Node |
          +--------+         +-------+

Appendix B. Benchmarking Methodology using OpenFlow Controllers

   This section gives an overview of the OpenFlow protocol and
   provides a test methodology to benchmark SDN controllers
   supporting the OpenFlow southbound protocol.

B.1. Protocol Overview

   OpenFlow is an open standard protocol defined by the Open
   Networking Foundation (ONF), used for programming the forwarding
   plane of network switches or routers via a centralized
   controller.

B.2. Messages Overview

   The OpenFlow protocol supports three message types, namely
   controller-to-switch, asynchronous, and symmetric.

   Controller-to-switch messages are initiated by the controller and
   used to directly manage or inspect the state of the switch.
   These messages allow controllers to query/configure the switch
   (Features and Configuration messages), collect information from
   the switch (Read-State message), send packets on a specified port
   of the switch (Packet-out message), and modify the switch
   forwarding plane and state (Modify-State, Role-Request messages,
   etc.).

   Asynchronous messages are generated by the switch without a
   controller soliciting them.  These messages allow switches to
   update controllers to denote the arrival of a new flow
   (Packet-in), a switch state change (Flow-Removed, Port-status),
   and an error (Error).

   Symmetric messages are generated in either direction without
   solicitation.  These messages allow switches and controllers to
   set up a connection (Hello), verify liveness (Echo), and offer
   additional functionality (Experimenter).

B.3. Connection Overview

   The OpenFlow channel is used to exchange OpenFlow messages
   between an OpenFlow switch and an OpenFlow controller.  The
   OpenFlow channel connection can be set up using plain TCP or TLS.
   By default, a switch establishes a single connection with the SDN
   controller.  A switch may establish multiple parallel connections
   to a single controller (auxiliary connections) or to multiple
   controllers to handle controller failures and load balancing.

B.4. Performance Benchmarking Tests

B.4.1. Network Topology Discovery Time

Procedure:

   SDN Nodes             OpenFlow                 SDN
                         Controller               Application
       |                     |                        |
       |                     |                        |
       |                     |                        |
       |                     |                        |
       | OFPT_HELLO Exchange |                        |
       |<------------------->|                        |
       |                     |                        |
       | PACKET_OUT with LLDP|                        |
       | to all switches     |                        |
 (Tm1) |<--------------------|                        |
       |                     |                        |
       |  PACKET_IN with LLDP|                        |
       |  rcvd from switch-1 |                        |
       |-------------------->|                        |
       |                     |                        |
       |  PACKET_IN with LLDP|                        |
       |  rcvd from switch-2 |                        |
       |-------------------->|                        |
       |          .          |                        |
       |          .          |                        |
       |                     |                        |
       |  PACKET_IN with LLDP|                        |
       |  rcvd from switch-n |                        |
 (Tmn) |-------------------->|                        |
       |                     |                        |
       |                     |                        |
       |                     | Query the controller   |
       |                     | for the discovered     |
       |                     | n/w topo. (Di)         |
       |                     |<-----------------------|
       |                     |                        |
       |                     |                        |
       |                     |                        |

   Legend:

      NB: Northbound
      SB: Southbound
      OF: OpenFlow
      Tm1: Time of reception of first LLDP message from controller
      Tmn: Time of last LLDP message sent to controller

   Discussion:

   The Network Topology Discovery Time can be obtained by
   calculating the time difference between the first PACKET_OUT with
   LLDP message received from the controller (Tm1) and the last
   PACKET_IN with LLDP message sent to the controller (Tmn) when the
   comparison is successful.

B.4.2. Asynchronous Message Processing Time

Procedure:

   SDN Nodes             OpenFlow                 SDN
                         Controller               Application
       |                     |                        |
       | PACKET_IN with      |                        |
       | single OFP match    |                        |
       | header              |                        |
  (T0) |-------------------->|                        |
       |                     |                        |
       | PACKET_OUT with     |                        |
       | single OFP action   |                        |
       | header              |                        |
  (R0) |<--------------------|                        |
       |          .          |                        |
       |          .          |                        |
       |          .          |                        |
       |                     |                        |
       | PACKET_IN with      |                        |
       | single OFP match    |                        |
       | header              |                        |
  (Tn) |-------------------->|                        |
       |                     |                        |
       | PACKET_OUT with     |                        |
       | single OFP action   |                        |
       | header              |                        |
  (Rn) |<--------------------|                        |
       |                     |                        |
       |                     |                        |
       |                     |                        |

   Legend:

      T0,T1, ..Tn are PACKET_IN messages transmit timestamps.
      R0,R1, ..Rn are PACKET_OUT messages receive timestamps.
      Nrx: Number of successful PACKET_IN/PACKET_OUT message
           exchanges

   Discussion:

   The Asynchronous Message Processing Time is obtained as
   ((R0-T0) + (R1-T1) + .. + (Rn-Tn)) / Nrx.

B.4.3. Asynchronous Message Processing Rate

Procedure:

   SDN Nodes             OpenFlow                 SDN
                         Controller               Application
       |                     |                        |
       | PACKET_IN with      |                        |
       | multiple OFP match  |                        |
       | headers             |                        |
       |-------------------->|                        |
       |                     |                        |
       | PACKET_OUT with     |                        |
       | multiple OFP action |                        |
       | headers             |                        |
       |<--------------------|                        |
       |                     |                        |
       | PACKET_IN with      |                        |
       | multiple OFP match  |                        |
       | headers             |                        |
       |-------------------->|                        |
       |                     |                        |
       | PACKET_OUT with     |                        |
       | multiple OFP action |                        |
       | headers             |                        |
       |<--------------------|                        |
       |          .          |                        |
       |          .          |                        |
       |          .          |                        |
       |                     |                        |
       | PACKET_IN with      |                        |
       | multiple OFP match  |                        |
       | headers             |                        |
       |-------------------->|                        |
       |                     |                        |
       | PACKET_OUT with     |                        |
       | multiple OFP action |                        |
       | headers             |                        |
       |<--------------------|                        |
       |                     |                        |
       |                     |                        |
       |                     |                        |

   Discussion:

   The Asynchronous Message Processing Rate is obtained by counting
   the number of OFP action headers received in all PACKET_OUT
   messages during the test duration, divided by the Test Duration
   (Td).
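   The discussions in B.4.1-B.4.3 all reduce to counting and
   timestamping OpenFlow messages observed at interface I1.  As an
   illustration (a sketch, not a complete capture tool), the
   following counts PACKET_IN and PACKET_OUT messages in a
   reassembled OpenFlow byte stream using only the fixed 8-byte
   OpenFlow header; the type values 10 and 13 are those defined for
   OpenFlow 1.3/1.4.

      import struct

      OFPT_PACKET_IN = 10    # OpenFlow 1.3/1.4 message type values
      OFPT_PACKET_OUT = 13

      def count_of_messages(stream):
          """Tally message types in a reassembled OpenFlow byte
          stream by walking the fixed header fields: version(1),
          type(1), length(2), xid(4), network byte order."""
          counts, offset = {}, 0
          while offset + 8 <= len(stream):
              _ver, msg_type, length, _xid = struct.unpack_from(
                  "!BBHI", stream, offset)
              if length < 8:          # malformed header; stop parsing
                  break
              counts[msg_type] = counts.get(msg_type, 0) + 1
              offset += length        # skip over the message body
          return counts

      # Asynchronous Message Processing Rate for a capture of
      # duration Td:
      #   counts = count_of_messages(captured_bytes)
      #   rate = counts.get(OFPT_PACKET_OUT, 0) / Td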
B.4.4. Reactive Path Provisioning Time

Procedure:

   Test Traffic  Test Traffic   SDN Nodes        OpenFlow
   Generator TP1 Generator TP2                   Controller
       |              |             |                |
       |              | G-ARP (D1)  |                |
       |              |------------>|                |
       |              |             |                |
       |              |             | PACKET_IN(D1)  |
       |              |             |--------------->|
       |              |             |                |
       | Traffic (S1,D1)            |                |
(Tsf1) |--------------------------->|                |
       |              |             |                |
       |              |             |                |
       |              |             |                |
       |              |             |PACKET_IN(S1,D1)|
       |              |             |--------------->|
       |              |             |                |
       |              |             |  FLOW_MOD(D1)  |
       |              |             |<---------------|
       |              |             |                |
       |              | Traffic (S1,D1)              |
       |       (Tdf1) |<------------|                |
       |              |             |                |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of first frame sent from TP1
      Tdf1: Time of first frame received at TP2

   Discussion:

   The Reactive Path Provisioning Time can be obtained by finding
   the time difference between the transmit and receive time of the
   traffic (Tdf1 - Tsf1).

B.4.5. Proactive Path Provisioning Time

Procedure:

   Test Traffic  Test Traffic  SDN Nodes   OpenFlow     SDN
   Generator TP1 Generator TP2             Controller   Application
       |             |            |            |            |
       |             | G-ARP (D1) |            |            |
       |             |----------->|            |            |
       |             |            |            |            |
       |             |            |PACKET_IN(D1)            |
       |             |            |----------->|            |
       |             |            |            |            |
       | Traffic (S1,D1)          |            |            |
(Tsf1) |------------------------->|            |            |
       |             |            |            |            |
       |             |            |            |<Install    |
       |             |            |            | flow for D1|
       |             |            |            | via NB or  |
       |             |            |            | management>|
       |             |            |            |            |
       |             |            |FLOW_MOD(D1)|            |
       |             |            |<-----------|            |
       |             |            |            |            |
       |             |Traffic (S1,D1)          |            |
       |      (Tdf1) |<-----------|            |            |
       |             |            |            |            |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of first frame sent from TP1
      Tdf1: Time of first frame received at TP2

   Discussion:

   The Proactive Path Provisioning Time can be obtained by finding
   the time difference between the transmit and receive time of the
   traffic (Tdf1 - Tsf1).

B.4.6. Reactive Path Provisioning Rate

Procedure:

   Test Traffic  Test Traffic    SDN Nodes         OpenFlow
   Generator TP1 Generator TP2                     Controller
       |              |              |                 |
       |              |              |                 |
       |              |              |                 |
       |              |G-ARP (D1..Dn)|                 |
       |              |------------->|                 |
       |              |              |                 |
       |              |              |PACKET_IN(D1..Dn)|
       |              |              |---------------->|
       |              |              |                 |
       | Traffic (S1..Sn,D1..Dn)     |                 |
       |---------------------------->|                 |
       |              |              |                 |
       |              |              |PACKET_IN(S1..Sn,|
       |              |              |         D1..Dn) |
       |              |              |---------------->|
       |              |              |                 |
       |              |              |  FLOW_MOD(S1)   |
       |              |              |<----------------|
       |              |              |                 |
       |              |              |  FLOW_MOD(D1)   |
       |              |              |<----------------|
       |              |              |                 |
       |              |              |  FLOW_MOD(S2)   |
       |              |              |<----------------|
       |              |              |                 |
       |              |              |  FLOW_MOD(D2)   |
       |              |              |<----------------|
       |              |              |        .        |
       |              |              |        .        |
       |              |              |                 |
       |              |              |  FLOW_MOD(Sn)   |
       |              |              |<----------------|
       |              |              |                 |
       |              |              |  FLOW_MOD(Dn)   |
       |              |              |<----------------|
       |              |              |                 |
       |              |Traffic (S1..Sn,                |
       |              |       D1..Dn)|                 |
       |              |<-------------|                 |
       |              |              |                 |
       |              |              |                 |

   Legend:

      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
              Endpoint n

   Discussion:

   The Reactive Path Provisioning Rate can be obtained by counting
   the total number of unique frames received at TP2 within the test
   duration and dividing it by the Test Duration (Td).
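   The procedures in B.4.4-B.4.7 prime the controller with each
   destination's location using gratuitous ARP (G-ARP) frames.  A
   minimal generator sketch using the Scapy packet library follows;
   the interface name and address ranges are example values, and a
   real test tool would pace these frames according to the test
   plan.

      from scapy.all import ARP, Ether, sendp

      def send_garps(num_hosts, iface="eth1"):
          """Emit one gratuitous ARP per destination endpoint D1..Dn
          so the controller can learn each host's location (example
          addressing; supports up to 254 hosts as written)."""
          for i in range(1, num_hosts + 1):
              ip = f"10.0.1.{i}"
              mac = f"00:00:00:00:01:{i:02x}"
              garp = Ether(src=mac, dst="ff:ff:ff:ff:ff:ff") / ARP(
                  op=2,                      # ARP reply
                  hwsrc=mac, psrc=ip,        # announce our binding
                  hwdst="ff:ff:ff:ff:ff:ff",
                  pdst=ip)                   # psrc == pdst: gratuitous
              sendp(garp, iface=iface, verbose=False)

      send_garps(10)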
B.4.7. Proactive Path Provisioning Rate

   Procedure:

    Test Traffic   Test Traffic   SDN Nodes     OpenFlow     SDN
    Generator TP1  Generator TP2                Controller   Application
          |              |             |              |           |
          |              |G-ARP (D1..Dn)              |           |
          |              |------------>|              |           |
          |              |             |              |           |
          |              |             |PACKET_IN(D1..Dn)         |
          |              |             |------------->|           |
          |              |             |              |           |
          |Traffic (S1..Sn,D1..Dn)     |              |           |
    (Tsf1)|--------------------------->|              |           |
          |              |             |              |           |
          |              |             |              |<Install   |
          |              |             |              | flows for |
          |              |             |              | S1..Sn,   |
          |              |             |              | D1..Dn>   |
          |              |             |              |           |
          |              |             | FLOW_MOD(S1) |           |
          |              |             |<-------------|           |
          |              |             |              |           |
          |              |             | FLOW_MOD(D1) |           |
          |              |             |<-------------|           |
          |              |             |      .       |           |
          |              |             | FLOW_MOD(Sn) |           |
          |              |             |<-------------|           |
          |              |             |              |           |
          |              |             | FLOW_MOD(Dn) |           |
          |              |             |<-------------|           |
          |              |             |              |           |
          |              |Traffic (S1..Sn,            |           |
          |              |     D1..Dn) |              |           |
          |        (Tdf1)|<------------|              |           |
          |              |             |              |           |

   Legend:

     G-ARP: Gratuitous ARP
     D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
             Destination Endpoint n
     S1..Sn: Source Endpoint 1, Source Endpoint 2 ...., Source
             Endpoint n

   Discussion:

   The Proactive Path Provisioning Rate can be obtained by counting
   the total number of frames received at TP2 during the test duration
   and dividing that count by the test duration.

B.4.8. Network Topology Change Detection Time

   Procedure:

    SDN Nodes                    OpenFlow                  SDN
                                 Controller               Application
        |                            |                           |
        |<Bring down a link          |                           |
        | on switch S1>              |                           |
        |                            |                           |
     T0 |PORT_STATUS with link down  |                           |
        | from S1                    |                           |
        |--------------------------->|                           |
        |                            |                           |
        |First PACKET_OUT with LLDP  |                           |
        |to OF Switch                |                           |
     T1 |<---------------------------|                           |
        |                            |                           |

   Discussion:

   The Network Topology Change Detection Time can be obtained by
   finding the difference between the time the OpenFlow switch S1
   sends the PORT_STATUS message (T0) and the time that the OpenFlow
   controller sends the first topology re-discovery message (T1) to
   the OpenFlow switches, i.e., (T1 - T0).

B.5. Scalability

B.5.1. Control Sessions Capacity

   Procedure:

    SDN Nodes                               OpenFlow
                                            Controller
         |                                       |
         |   OFPT_HELLO Exchange for Switch 1    |
         |<------------------------------------->|
         |                                       |
         |   OFPT_HELLO Exchange for Switch 2    |
         |<------------------------------------->|
         |                   .                   |
         |                   .                   |
         |                   .                   |
         |   OFPT_HELLO Exchange for Switch n    |
         |X<----------------------------------->X|
         |                                       |

   Discussion:

   If the OFPT_HELLO exchange for switch n fails (marked X above), the
   number of successfully established control sessions, i.e., n-1,
   provides the Control Sessions Capacity.

B.5.2. Network Discovery Size

   Procedure:

    SDN Nodes                    OpenFlow                  SDN
                                 Controller               Application
        |                            |                           |
        | OFPT_HELLO Exchange        |                           |
        |<-------------------------->|                           |
        |                            |                           |
        | PACKET_OUT with LLDP       |                           |
        | to all switches            |                           |
        |<---------------------------|                           |
        |                            |                           |
        |         PACKET_IN with LLDP|                           |
        |         rcvd from switch-1 |                           |
        |--------------------------->|                           |
        |                            |                           |
        |         PACKET_IN with LLDP|                           |
        |         rcvd from switch-2 |                           |
        |--------------------------->|                           |
        |             .              |                           |
        |             .              |                           |
        |                            |                           |
        |         PACKET_IN with LLDP|                           |
        |         rcvd from switch-n |                           |
        |--------------------------->|                           |
        |                            |                           |
        |                            |<Wait for the expiry       |
        |                            | of Test Duration (Td)>    |
        |                            |                           |
        |                            | Query the controller for  |
        |                            | discovered n/w topo.(N1)  |
        |                            |<--------------------------|
        |                            |                           |

   Legend:

     n/w topo: Network Topology
     OF: OpenFlow

   Discussion:

   The value of N1 provides the Network Discovery Size value. The test
   duration can be set to the stipulated time within which the user
   expects the controller to complete the discovery process.
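   The scalability procedures above reduce to a ramp-and-verify loop.
   The sketch below (Python) automates the Control Sessions Capacity
   test of B.5.1; start_switch() and hello_succeeded() stand in for a
   switch emulator API and are hypothetical, not part of this
   methodology.

      # Sketch: probe Control Sessions Capacity by adding emulated
      # switches until an OFPT_HELLO exchange fails.

      def control_sessions_capacity(start_switch, hello_succeeded,
                                    max_switches):
          established = 0
          for i in range(1, max_switches + 1):
              switch = start_switch(i)       # bring up switch i
              if not hello_succeeded(switch):
                  break                      # exchange for switch n
                                             # failed
              established += 1
          return established                 # capacity = n - 1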
B.5.3. Forwarding Table Capacity

   Procedure:

    Test Traffic      SDN Nodes          OpenFlow          SDN
    Generator TP1                        Controller        Application
          |                |                  |                 |
          |G-ARP (H1..Hn)  |                  |                 |
          |--------------->|                  |                 |
          |                |                  |                 |
          |                |PACKET_IN(H1..Hn) |                 |
          |                |----------------->|                 |
          |                |                  |                 |
          |                |                  |<Query FWD table |
          |                |                  | entry count>(F1)|
          |                |                  |                 |
          |                |                  |<Wait for 5 secs;|
          |                |                  | query FWD table |
          |                |                  | entry count>(F2)|
          |                |                  |                 |
          |                |                  |<Wait for 5 secs;|
          |                |                  | query FWD table |
          |                |                  | entry count>(F3)|
          |                |                  |                 |

   Legend:

     G-ARP: Gratuitous ARP
     H1..Hn: Host 1 .. Host n
     FWD: Forwarding Table
     F1,F2,F3: Entry counts returned by three consecutive queries

   Discussion:

   Query the controller for its forwarding table entry count multiple
   times, until three consecutive queries return the same value. The
   last value retrieved from the controller provides the Forwarding
   Table Capacity value. The query interval is user configurable; the
   5 seconds shown in this example is for representational purposes.
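   The stop condition above is easy to automate. A minimal sketch
   (Python) follows; query_fwd_entry_count() stands in for a
   northbound query against the controller and is hypothetical.

      # Sketch: poll the controller's forwarding-table entry count
      # until three consecutive queries return the same value.

      import time

      def forwarding_table_capacity(query_fwd_entry_count,
                                    interval=5.0):
          history = []
          while True:
              history.append(query_fwd_entry_count())
              # Stop once the last three readings agree (F1==F2==F3).
              if (len(history) >= 3 and
                      history[-1] == history[-2] == history[-3]):
                  return history[-1]    # Forwarding Table Capacity
              time.sleep(interval)      # user-configurable interval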
B.6. Security

B.6.1. Exception Handling

   Procedure:

    Test Traffic   Test Traffic   SDN Nodes    OpenFlow     SDN
    Generator TP1  Generator TP2               Controller   Application
          |              |             |             |            |
          |              |G-ARP (D1..Dn)             |            |
          |              |------------>|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(D1..Dn)         |
          |              |             |------------>|            |
          |              |             |             |            |
          |Traffic (S1..Sn,D1..Dn)     |             |            |
          |--------------------------->|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(S1..Sa,         |
          |              |             |     D1..Da) |            |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |             |PACKET_IN(Sa+1..Sn,       |
          |              |             |  Da+1..Dn)  |            |
          |              |             |(1% incorrect OFP         |
          |              |             | Match header)            |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(D1..Dn)          |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(S1..Sa)          |
          |              |             | OFP headers |            |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |Traffic (S1..Sa,           |            |
          |              |    D1..Da)  |             |            |
          |              |<------------|             |            |
          |              |             |             |            |
          |              |             |             |<Record Rn1:|
          |              |             |             | frames rcvd|
          |              |             |             | at TP2>    |
          |              |             |             |            |
          |              |             |             |<Repeat the |
          |              |             |             | test with  |
          |              |             |             | 2% frames  |
          |              |             |             | in error;  |
          |              |             |             | record Rn2>|
          |              |             |             |            |

   Legend:

     G-ARP: Gratuitous ARP
     PACKET_IN(Sa+1..Sn,Da+1..Dn): OpenFlow PACKET_IN messages with
          incorrect OFP match headers
     Rn1: Total number of frames received at Test Port 2 with
          1% incorrect frames
     Rn2: Total number of frames received at Test Port 2 with
          2% incorrect frames

   Discussion:

   The traffic rate sent towards the OpenFlow switch from Test Port 1
   should be 1% higher than the Path Programming Rate. Rn1 provides
   the Path Provisioning Rate of the controller when handling 1%
   incorrect frames, and Rn2 provides the Path Provisioning Rate of
   the controller when handling 2% incorrect frames.

   The procedure defined above provides the test steps to determine
   the effect of handling error packets on the Path Programming Rate.
   The same procedure can be adopted to determine the effect on the
   other performance tests defined in this document.

B.6.2. Denial of Service Handling

   Procedure:

    Test Traffic   Test Traffic   SDN Nodes    OpenFlow     SDN
    Generator TP1  Generator TP2               Controller   Application
          |              |             |             |            |
          |              |G-ARP (D1..Dn)             |            |
          |              |------------>|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(D1..Dn)         |
          |              |             |------------>|            |
          |              |             |             |            |
          |Traffic (S1..Sn,D1..Dn)     |             |            |
          |--------------------------->|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(S1..Sn,         |
          |              |             |     D1..Dn) |            |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |             |TCP SYN attack            |
          |              |             |from a switch|            |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(D1..Dn)          |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(S1..Sn)          |
          |              |             | OFP headers |            |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |Traffic (S1..Sn,           |            |
          |              |    D1..Dn)  |             |            |
          |              |<------------|             |            |
          |              |             |             |            |
          |              |             |             |<Record Rn1:|
          |              |             |             | frames rcvd|
          |              |             |             | at TP2>    |
          |              |             |             |            |

   Legend:

     G-ARP: Gratuitous ARP
     Rn1: Total number of frames received at Test Port 2

   Discussion:

   The TCP SYN attack should be launched from one of the emulated/
   simulated OpenFlow switches. Rn1 provides the Path Programming Rate
   of the controller upon handling the denial of service attack.

   The procedure defined above provides the test steps to determine
   the effect of handling denial of service attacks on the Path
   Programming Rate. The same procedure can be adopted to determine
   the effect on the other performance tests defined in this document.

B.7. Reliability

B.7.1. Controller Failover Time

   Procedure:

    Test Traffic   Test Traffic   SDN Nodes    OpenFlow     SDN
    Generator TP1  Generator TP2               Controller   Application
          |              |             |             |            |
          |              |G-ARP (D1)   |             |            |
          |              |------------>|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(D1)|            |
          |              |             |------------>|            |
          |              |             |             |            |
          |Traffic (S1..Sn,D1)         |             |            |
          |--------------------------->|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(S1,D1)          |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(D1) |            |
          |              |             |<------------|            |
          |              |             |FLOW_MOD(S1) |            |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |Traffic (S1,D1)            |            |
          |              |<------------|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(S2,D1)          |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(S2) |            |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |             |PACKET_IN(Sn-1,D1)        |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |             |PACKET_IN(Sn,D1)          |
          |              |             |------------>|            |
          |              |             |      .      |            |
          |              |             |      .      |            |
          |              |             |             |            |
          |              |             |<Bring down the active    |
          |              |             | controller> |            |
          |              |             |             |            |
          |              |             |FLOW_MOD(Sn-1)            |
          |              |             |    <-X------|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(Sn) |            |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |Traffic (Sn,D1)            |            |
          |              |<------------|             |            |
          |              |             |             |            |

   Legend:

     G-ARP: Gratuitous ARP.

   Discussion:

   The time difference between the last valid frame received before
   the traffic loss and the first frame received after the traffic
   loss provides the controller failover time.

   If there is no frame loss during the controller failover, the
   controller failover time can be deemed negligible.
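   The failover time can be read directly off the receive timestamps
   recorded at TP2. The sketch below (Python) treats the largest
   inter-frame gap that exceeds the nominal frame interval as the loss
   window; the timestamps shown are illustrative values, not measured
   data.

      # Sketch: derive Controller Failover Time from TP2 receive
      # timestamps as the gap between the last frame before the loss
      # and the first frame after it.

      def failover_time(rx_timestamps, nominal_interval):
          gaps = [b - a for a, b in
                  zip(rx_timestamps, rx_timestamps[1:])]
          worst = max(gaps, default=0.0)
          # No gap beyond the nominal interval means no observable
          # frame loss; the failover time is deemed negligible.
          return worst if worst > nominal_interval else 0.0

      rx = [0.0, 0.1, 0.2, 1.7, 1.8]    # 1.5 s loss during failover
      print(failover_time(rx, nominal_interval=0.1))    # -> 1.5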
B.7.2. Network Re-Provisioning Time

   Procedure:

    Test Traffic   Test Traffic   SDN Nodes    OpenFlow     SDN
    Generator TP1  Generator TP2               Controller   Application
          |              |             |             |            |
          |              |G-ARP (D1)   |             |            |
          |              |------------>|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(D1)|            |
          |              |             |------------>|            |
          | G-ARP (S1)   |             |             |            |
          |--------------------------->|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(S1)|            |
          |              |             |------------>|            |
          |              |             |             |            |
          |Traffic (S1,D1,Seq.no (1..n))             |            |
          |--------------------------->|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(S1,D1)          |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |Traffic (D1,S1,            |            |
          |              | Seq.no (1..n))            |            |
          |              |------------>|             |            |
          |              |             |             |            |
          |              |             |PACKET_IN(D1,S1)          |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(D1) |            |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(S1) |            |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |Traffic (S1,D1,            |            |
          |              |   Seq.no(1))|             |            |
          |              |<------------|             |            |
          |              |             |             |            |
          |              |Traffic (S1,D1,            |            |
          |              |   Seq.no(2))|             |            |
          |              |<------------|             |            |
          |              |             |             |            |
          | Traffic (D1,S1,Seq.no(1))  |             |            |
          |<---------------------------|             |            |
          |              |             |             |            |
          | Traffic (D1,S1,Seq.no(2))  |             |            |
          |<---------------------------|             |            |
          |              |             |             |            |
          | Traffic (D1,S1,Seq.no(x))  |             |            |
          |<---------------------------|             |            |
          |              |             |             |            |
          |              |Traffic (S1,D1,            |            |
          |              |   Seq.no(x))|             |            |
          |              |<------------|             |            |
          |              |             |             |            |
          |              |             |<Bring down the switch in |
          |              |             | the active traffic path> |
          |              |             |             |            |
          |              |             |PORT_STATUS(Sa)           |
          |              |             |------------>|            |
          |              |             |             |            |
          |              |Traffic (S1,D1,            |            |
          |              | Seq.no(n-1))|             |            |
          |              |   X<--------|             |            |
          |              |             |             |            |
          | Traffic (D1,S1,Seq.no(n-1))|             |            |
          | X<-------------------------|             |            |
          |              |             |             |            |
          |              |             |FLOW_MOD(D1) |            |
          |              |             |<------------|            |
          |              |             |             |            |
          |              |             |FLOW_MOD(S1) |            |
          |              |             |<------------|            |
          |              |             |             |            |
          | Traffic (D1,S1,Seq.no(n))  |             |            |
          |<---------------------------|             |            |
          |              |             |             |            |
          |              |Traffic (S1,D1,            |            |
          |              |   Seq.no(n))|             |            |
          |              |<------------|             |            |
          |              |             |             |            |

   Legend:

     G-ARP: Gratuitous ARP message.
     Seq.no: Sequence number.
     Sa: Neighbour switch of the switch that was brought down.

   Discussion:

   The time difference between the last valid frame received before
   the traffic loss (the packet with sequence number x) and the first
   frame received after the traffic loss (the packet with sequence
   number n) provides the network path re-provisioning time.

   Note that the test is valid only when the controller provisions the
   alternate path upon network failure.

Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia
   PA 19113

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia
   PA 19113

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett-Packard,
   8000 Foothills Blvd,
   Roseville, CA 95747

   Email: mark.tassinari@hp.com

   Vishwas Manral
   Ionos Corp,
   4100 Moorpark Ave,
   San Jose, CA

   Email: vishwas@ionosnetworks.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive,
   Sunnyvale, CA

   Email: sbanks@encrypted.net