Internet-Draft                           Bhuvaneswaran Vengainathan
Network Working Group                                   Anton Basil
Intended Status: Informational                   Veryx Technologies
Expires: January 18, 2016                            Mark Tassinari
                                                    Hewlett-Packard
                                                     Vishwas Manral
                                                         Ionos Corp
                                                        Sarah Banks
                                                     VSS Monitoring
                                                      July 19, 2015

        Benchmarking Methodology for SDN Controller Performance
           draft-bhuvan-bmwg-sdn-controller-benchmark-meth-01

Abstract

   This document defines methodologies for benchmarking the performance
   of SDN controllers. Terminology related to benchmarking SDN
   controllers is described in the companion terminology document. SDN
   controllers have been implemented with many varying designs in order
   to achieve their intended network functionality. Hence, the authors
   have taken the approach of considering an SDN controller as a black
   box, defining the methodology in a manner that is agnostic to the
   protocols and network services supported by controllers. The intent
   of this document is to provide a standard mechanism to measure the
   performance of all controller implementations.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF). Note that other groups may also distribute
   working documents as Internet-Drafts. The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 18, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Scope
   3. Test Setup
      3.1 Test setup - Controller working in Standalone Mode
      3.2 Test setup - Controller working in Cluster Mode
   4. Test Considerations
      4.1 Network Topology
      4.2 Test Traffic
      4.3 Connection Setup
      4.4 Measurement Point Specification and Recommendation
      4.5 Connectivity Recommendation
      4.6 Test Repeatability
   5. Test Reporting
   6. Benchmarking Tests
      6.1 Performance
         6.1.1 Network Topology Discovery Time
         6.1.2 Asynchronous Message Processing Time
         6.1.3 Asynchronous Message Processing Rate
         6.1.4 Reactive Path Provisioning Time
         6.1.5 Proactive Path Provisioning Time
         6.1.6 Reactive Path Provisioning Rate
         6.1.7 Proactive Path Provisioning Rate
         6.1.8 Network Topology Change Detection Time
      6.2 Scalability
         6.2.1 Control Sessions Capacity
         6.2.2 Network Discovery Size
         6.2.3 Forwarding Table Capacity
      6.3 Security
         6.3.1 Exception Handling
         6.3.2 Denial of Service Handling
      6.4 Reliability
         6.4.1 Controller Failover Time
         6.4.2 Network Re-Provisioning Time
   7. References
      7.1 Normative References
      7.2 Informative References
   8. IANA Considerations
   9. Security Considerations
   10. Appendix A - Example Test Topologies
   11. Appendix B - Benchmarking Methodology using OF Controllers
   12. Acknowledgements
   13. Authors' Addresses

1. Introduction

   This document provides generic methodologies for benchmarking SDN
   controller performance. An SDN controller may support many
   northbound and southbound protocols, implement a wide range of
   applications, and work alone or as a group to achieve the desired
   functionality. This document considers an SDN controller as a black
   box, regardless of design and implementation. The tests defined in
   this document can be used to benchmark an SDN controller for
   performance, scalability, reliability and security, independent of
   northbound and southbound protocols. These tests can be performed on
   an SDN controller running as a virtual machine (VM) instance or on a
   bare metal server. This document is intended for those who want to
   measure SDN controller performance as well as compare the
   performance of various SDN controllers.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

2. Scope

   This document defines a methodology to measure the networking
   metrics of SDN controllers. The tests defined in this document
   enable benchmarking of SDN controllers in two ways: as a standalone
   controller and as a cluster of homogeneous controllers. These tests
   are recommended for execution in lab environments rather than in
   live network deployments. Performance benchmarking of a federation
   of controllers is beyond the scope of this document.

3. Test Setup

   The tests defined in this document enable measurement of an SDN
   controller's performance in standalone mode and cluster mode. This
   section defines common reference topologies that are later referred
   to in individual tests.

3.1 Test setup - Controller working in Standalone Mode

   +-----------------------------------------------------------+
   |             Management Plane Test Emulator                 |
   |                                                            |
   |               --------------------                         |
   |               | SDN Applications |                         |
   |               --------------------                         |
   |                                                            |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 | (Northbound interface)
                 +-------------------------------+
                 |       +----------------+      |
                 |       | SDN Controller |      |
                 |       +----------------+      |
                 |                               |
                 |    Device Under Test (DUT)    |
                 +-------------------------------+
                                 | (Southbound interface)
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                            |
   |      +---------+                      +---------+          |
   |      |   SDN   |l1                ln-1|   SDN   |          |
   |      | Node 1  |----- .... -----------| Node n  |          |
   |      +---------+                      +---------+          |
   |          |l0                              |ln              |
   |          |                                |                |
   |  +---------------+              +---------------+          |
   |  | Test Traffic  |              | Test Traffic  |          |
   |  |  Generator    |              |  Generator    |          |
   |  |    (TP1)      |              |    (TP2)      |          |
   |  +---------------+              +---------------+          |
   |                                                            |
   |             Forwarding Plane Test Emulator                 |
   +-----------------------------------------------------------+

                            Figure 1
3.2 Test setup - Controller working in Cluster Mode

   +-----------------------------------------------------------+
   |             Management Plane Test Emulator                 |
   |                                                            |
   |               --------------------                         |
   |               | SDN Applications |                         |
   |               --------------------                         |
   |                                                            |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 | (Northbound interface)
   +---------------------------------------------------------+
   |                                                         |
   |  ------------------           ------------------        |
   |  |SDN Controller 1| <--E/W--> |SDN Controller n|        |
   |  ------------------           ------------------        |
   |                                                         |
   |                Device Under Test (DUT)                  |
   +---------------------------------------------------------+
                                 | (Southbound interface)
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                            |
   |      +---------+                      +---------+          |
   |      |   SDN   |l1                ln-1|   SDN   |          |
   |      | Node 1  |----- .... -----------| Node n  |          |
   |      +---------+                      +---------+          |
   |          |l0                              |ln              |
   |          |                                |                |
   |  +---------------+              +---------------+          |
   |  | Test Traffic  |              | Test Traffic  |          |
   |  |  Generator    |              |  Generator    |          |
   |  |    (TP1)      |              |    (TP2)      |          |
   |  +---------------+              +---------------+          |
   |                                                            |
   |             Forwarding Plane Test Emulator                 |
   +-----------------------------------------------------------+

                            Figure 2

4. Test Considerations

4.1 Network Topology

   The test cases SHOULD use a Leaf-Spine topology with at least one
   SDN node in the topology for benchmarking. The test traffic
   generators TP1 and TP2 SHOULD be connected to the first and the last
   SDN leaf nodes. If a test case uses a test topology with a single
   SDN node, the test traffic generators TP1 and TP2 SHOULD be
   connected to the same node. However, to achieve a complete
   performance characterization of the SDN controller, it is
   recommended that the controller be benchmarked for many network
   topologies and a varying number of SDN nodes. This document includes
   a few sample test topologies, defined in Section 10 - Appendix A,
   for reference. Further, care should be taken to make sure that a
   loop prevention mechanism is enabled either in the SDN controller or
   in the network when the topology contains redundant network paths.

4.2 Test Traffic

   Test traffic is used to notify the controller about the arrival of
   new flows. The test cases SHOULD use multiple frame sizes, as
   recommended in RFC 2544, for benchmarking.
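   As an illustration only, a test driver might iterate over the frame
   sizes that RFC 2544 recommends for Ethernet. The following Python
   sketch is not part of the methodology; 'test_case' and
   'traffic_generator' are hypothetical hooks into the test emulator.

      # RFC 2544 recommended Ethernet frame sizes, in octets.
      RFC2544_FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]

      def run_all_frame_sizes(test_case, traffic_generator):
          # Run one benchmark once per recommended frame size and
          # key the per-size results by frame size.
          return {size: test_case(traffic_generator, frame_size=size)
                  for size in RFC2544_FRAME_SIZES}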
4.3 Connection Setup

   There may be controller implementations that support unencrypted and
   encrypted network connections with SDN nodes. Further, the
   controller may have backward compatibility with SDN nodes running
   older versions of southbound protocols. It is recommended that the
   controller performance be measured with one or more of the
   applicable connection setup methods defined below.

   1. Unencrypted connection with SDN nodes, running the same protocol
      version.
   2. Unencrypted connection with SDN nodes, running different protocol
      versions.
      Example:
      1. Controller running the current protocol version and switch
         running an older protocol version.
      2. Controller running an older protocol version and switch
         running the current protocol version.
   3. Encrypted connection with SDN nodes, running the same protocol
      version.
   4. Encrypted connection with SDN nodes, running different protocol
      versions.
      Example:
      1. Controller running the current protocol version and switch
         running an older protocol version.
      2. Controller running an older protocol version and switch
         running the current protocol version.

4.4 Measurement Point Specification and Recommendation

   Measurement accuracy depends on several factors, including the point
   of observation where the indications are captured. For example, a
   notification can be observed at the controller or at the test
   emulator. The test operator SHOULD make the observations and
   measurements at the interfaces of the test emulator, unless
   explicitly mentioned otherwise in an individual test.

4.5 Connectivity Recommendation

   The SDN controller in the test setup SHOULD be connected directly
   with the forwarding and management plane test emulators to avoid any
   delays or failures introduced by intermediate devices during the
   benchmarking tests.

4.6 Test Repeatability

   To increase confidence in the measured results, it is recommended
   that each test SHOULD be performed at least 10 times with the same
   number of nodes using the same topology.

5. Test Reporting

   Each test has a reporting format that is specific to the individual
   test. In addition, the following test configuration parameters and
   controller settings parameters MUST be reflected in the test report.

   Test Configuration Parameters:
   1. Controller name and version
   2. Northbound protocols and versions
   3. Southbound protocols and versions
   4. Controller redundancy mode (Standalone or Cluster Mode)
   5. Connection setup (Unencrypted or Encrypted)
   6. Network Topology (Mesh, Tree or Linear)
   7. SDN Node Type (Physical, Virtual or Emulated)
   8. Number of Nodes
   9. Number of Links
   10. Test Traffic Type
   11. Controller System Configuration (e.g., CPU, Memory, Operating
       System, Interface Speed, etc.)
   12. Reference Test Setup (e.g., Section 3.1, etc.)

   Controller Settings Parameters:
   1. Topology re-discovery timeout
   2. Controller redundancy mode (e.g., active-standby, etc.)
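   A test harness might capture these mandatory parameters in a single
   record such as the sketch below. The field names and placeholder
   values are illustrative only, not normative.

      # Hypothetical record of the parameters section 5 requires in
      # every test report; all values shown are placeholders.
      test_report = {
          "controller_name_version": "ExampleController 1.0",
          "northbound_protocols": ["REST"],
          "southbound_protocols": ["OpenFlow 1.4"],
          "redundancy_mode": "Standalone",
          "connection_setup": "Unencrypted",
          "network_topology": "Leaf-Spine",
          "sdn_node_type": "Emulated",
          "number_of_nodes": 100,
          "number_of_links": 99,
          "test_traffic_type": "Unicast",
          "controller_system_config": {"cpu": "...", "memory": "...",
                                       "os": "...",
                                       "interface_speed": "..."},
          "reference_test_setup": "Section 3.1",
          "topology_rediscovery_timeout_s": 3600,
          "controller_redundancy_mode": "active-standby",
      }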
6. Benchmarking Tests

6.1 Performance

6.1.1 Network Topology Discovery Time

   Objective:
      Measure the time taken by the SDN controller to discover the
      network topology (nodes and links), expressed in milliseconds.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller MUST support network discovery.
      2. The tester should be able to retrieve the discovered topology
         information either through the controller's management
         interface or northbound interface, to determine whether the
         discovery was successful and complete.
      3. Ensure that the controller's topology re-discovery timeout has
         been set to the maximum value, to avoid initiation of the
         re-discovery process in the middle of the test.

   Procedure:
      1. Ensure that the controller is operational and that its network
         applications and northbound and southbound interfaces are up
         and running.
      2. Establish the network connections between the controller and
         the SDN nodes.
      3. Record the time of the first discovery message (Tm1) received
         from the controller at the forwarding plane test emulator
         interface I1.
      4. Query the controller every 3 seconds to obtain the discovered
         network topology information through the northbound interface
         or the management interface, and compare it with the deployed
         network topology information.
      5. Stop the test when the discovered topology information matches
         the deployed network topology, or when the discovered topology
         information returns the same details for 3 consecutive
         queries.
      6. Record the time of the last discovery message (Tmn) sent to
         the controller from the forwarding plane test emulator
         interface (I1) when the test completed successfully (e.g., the
         topology matches).

   Measurement:
      Topology Discovery Time Tr1 = Tmn - Tm1.

                                        Tr1 + Tr2 + Tr3 .. Trn
      Average Topology Discovery Time = ----------------------
                                        Total Test Iterations

   Reporting Format:
      The Topology Discovery Time results MUST be reported in the
      format of a table, with a row for each successful iteration. The
      last row of the table indicates the average Topology Discovery
      Time.

      If this test is repeated with a varying number of nodes over the
      same topology, the results SHOULD be reported in the form of a
      graph. The X coordinate SHOULD be the number of nodes (N), and
      the Y coordinate SHOULD be the average Topology Discovery Time.

      If this test is repeated with the same number of nodes over
      different topologies, the results SHOULD be reported in the form
      of a graph. The X coordinate SHOULD be the topology type, and the
      Y coordinate SHOULD be the average Topology Discovery Time.
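   The stop criterion of steps 4-5 and the measurement can be
   summarized by the following illustrative Python sketch;
   'query_topology' and 'timestamps_at_i1' are hypothetical hooks into
   the test emulator, and Tm1/Tmn themselves MUST still be captured at
   emulator interface I1 as the procedure requires.

      import time

      QUERY_INTERVAL_S = 3   # polling interval from step 4

      def run_discovery_iteration(query_topology, deployed,
                                  timestamps_at_i1):
          # Poll the controller's northbound or management interface
          # until the discovered topology matches the deployed one, or
          # three consecutive queries return identical results.
          recent = []
          while True:
              topo = query_topology()
              recent = (recent + [topo])[-3:]
              if topo == deployed or (len(recent) == 3 and
                                      recent[0] == recent[1] == recent[2]):
                  break
              time.sleep(QUERY_INTERVAL_S)
          tm1, tmn = timestamps_at_i1()  # first/last message at I1, s
          return (tmn - tm1) * 1000.0    # Topology Discovery Time, ms

      def average_discovery_time_ms(per_iteration_results):
          # Average Topology Discovery Time over successful iterations.
          return sum(per_iteration_results) / len(per_iteration_results)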
6.1.2 Asynchronous Message Processing Time

   Objective:
      Measure the time taken by the SDN controller to process an
      asynchronous message, expressed in milliseconds.

   Reference Test Setup:
      This test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller MUST have completed the network topology
         discovery for the connected SDN nodes.

   Procedure:
      1. Generate asynchronous messages from every connected SDN node
         to the SDN controller, one at a time in series, from the
         forwarding plane test emulator for the test duration.
      2. Record every request transmit timestamp (T1) and the
         corresponding response receive timestamp (R1) at the
         forwarding plane test emulator interface (I1) for every
         successful message exchange.

   Measurement:
                                                 (R1-T1) + (R2-T2) .. (Rn-Tn)
      Asynchronous Message Processing Time Tr1 = ----------------------------
                                                             Nrx

      Where Nrx is the total number of successful messages exchanged.

                                                      Tr1 + Tr2 + Tr3 .. Trn
      Average Asynchronous Message Processing Time = ----------------------
                                                      Total Test Iterations

   Reporting Format:
      The Asynchronous Message Processing Time results MUST be reported
      in the format of a table with a row for each iteration. The last
      row of the table indicates the average Asynchronous Message
      Processing Time.

      The report should capture the following information in addition
      to the configuration parameters captured in section 5.
      - Successful messages exchanged (Nrx)

      If this test is repeated with a varying number of nodes with the
      same topology, the results SHOULD be reported in the form of a
      graph. The X coordinate SHOULD be the number of nodes (N), and
      the Y coordinate SHOULD be the average Asynchronous Message
      Processing Time.

      If this test is repeated with the same number of nodes using
      different topologies, the results SHOULD be reported in the form
      of a graph. The X coordinate SHOULD be the topology type, and the
      Y coordinate SHOULD be the average Asynchronous Message
      Processing Time.

6.1.3 Asynchronous Message Processing Rate

   Objective:
      Measure the maximum number of asynchronous messages (session
      aliveness check messages, new flow arrival notification messages,
      etc.) a controller can process within the test duration,
      expressed in messages processed per second.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller MUST have completed the network topology
         discovery for the connected SDN nodes.

   Procedure:
      1. Generate asynchronous messages continuously at the maximum
         possible rate on the established connections from all the
         connected SDN nodes in the forwarding plane test emulator for
         the Test Duration (Td).
      2. Record the total number of responses received from the
         controller (Nrx) as well as the number of messages sent (Ntx)
         to the controller within the test duration (Td) at the
         forwarding plane test emulator interface (I1).

   Measurement:
                                                  Nrx
      Asynchronous Message Processing Rate Tr1 = -----
                                                  Td

                                                      Tr1 + Tr2 + Tr3 .. Trn
      Average Asynchronous Message Processing Rate = ----------------------
                                                      Total Test Iterations

      Loss Ratio = (Ntx - Nrx) / Ntx.

   Reporting Format:
      The Asynchronous Message Processing Rate results MUST be reported
      in the format of a table with a row for each iteration. The last
      row of the table indicates the average Asynchronous Message
      Processing Rate.

      The report should capture the following information in addition
      to the configuration parameters captured in section 5.
      - Offered rate (Ntx)
      - Loss Ratio

      If this test is repeated with a varying number of nodes over the
      same topology, the results SHOULD be reported in the form of a
      graph. The X coordinate SHOULD be the number of nodes (N), and
      the Y coordinate SHOULD be the average Asynchronous Message
      Processing Rate.

      If this test is repeated with the same number of nodes over
      different topologies, the results SHOULD be reported in the form
      of a graph. The X coordinate SHOULD be the topology type, and the
      Y coordinate SHOULD be the average Asynchronous Message
      Processing Rate.
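   The measurements of 6.1.2 and 6.1.3 reduce to simple arithmetic
   over the timestamps and counters captured at interface I1, as in
   this illustrative Python sketch (the parameter names are
   hypothetical):

      def async_processing_time_ms(exchanges):
          # Mean response latency for one iteration (section 6.1.2):
          # 'exchanges' holds (Ti, Ri) timestamp pairs, in seconds,
          # one per successful message exchange (Nrx pairs in total).
          return sum(r - t for t, r in exchanges) / len(exchanges) * 1000.0

      def async_processing_rate(nrx, td_s):
          # Messages processed per second in one iteration (6.1.3).
          return nrx / td_s

      def loss_ratio(ntx, nrx):
          # Fraction of offered messages with no response received.
          return (ntx - nrx) / ntx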
6.1.4 Reactive Path Provisioning Time

   Objective:
      Measure the time taken by the controller to set up a path
      reactively between source and destination nodes, expressed in
      milliseconds.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller MUST contain the network topology information
         for the deployed network topology.
      2. The controller should have knowledge about the location of the
         destination endpoint for which the path has to be provisioned.
         This can be achieved through dynamic learning or static
         provisioning.
      3. Ensure that the default action for a flow miss in the SDN node
         is 'send to controller'.
      4. Ensure that each SDN node in a path requires the controller to
         make the forwarding decision while paving the entire path.

   Procedure:
      1. Send a single traffic stream from test traffic generator TP1
         to test traffic generator TP2.
      2. Record the time of the first flow provisioning request message
         sent to the controller (Tsf1) from the SDN node at the
         forwarding plane test emulator interface (I1).
      3. Wait for the arrival of the first traffic frame at the traffic
         endpoint TP2 or the expiry of the test duration (Td).
      4. Record the time of the last flow provisioning response message
         received from the controller (Tdf1) to the SDN node at the
         forwarding plane test emulator interface (I1).

   Measurement:
      Reactive Path Provisioning Time Tr1 = Tdf1 - Tsf1.

                                                Tr1 + Tr2 + Tr3 .. Trn
      Average Reactive Path Provisioning Time = ----------------------
                                                Total Test Iterations

   Reporting Format:
      The Reactive Path Provisioning Time results MUST be reported in
      the format of a table with a row for each iteration. The last row
      of the table indicates the average Reactive Path Provisioning
      Time.

      The report should capture the following information in addition
      to the configuration parameters captured in section 5.
      - Number of SDN nodes in the path

6.1.5 Proactive Path Provisioning Time

   Objective:
      Measure the time taken by the controller to set up a path
      proactively between source and destination nodes, expressed in
      milliseconds.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller MUST contain the network topology information
         for the deployed network topology.
      2. The controller should have knowledge about the location of the
         destination endpoint for which the path has to be provisioned.
         This can be achieved through dynamic learning or static
         provisioning.
      3. Ensure that the default action for a flow miss in the SDN node
         is 'drop'.

   Procedure:
      1. Send a single traffic stream from test traffic generator TP1
         to TP2.
      2. Install the flow entries to reach from test traffic generator
         TP1 to test traffic generator TP2 through the controller's
         northbound or management interface.
      3. Wait for the arrival of the first traffic frame at test
         traffic generator TP2 or the expiry of the test duration (Td).
      4. Record the time when the proactive flow is provisioned in the
         controller (Tsf1) at the management plane test emulator
         interface I2.
      5. Record the time of the last flow provisioning message received
         from the controller (Tdf1) at the forwarding plane test
         emulator interface I1.

   Measurement:
      Proactive Flow Provisioning Time Tr1 = Tdf1 - Tsf1.

                                                 Tr1 + Tr2 + Tr3 .. Trn
      Average Proactive Path Provisioning Time = ----------------------
                                                 Total Test Iterations

   Reporting Format:
      The Proactive Path Provisioning Time results MUST be reported in
      the format of a table with a row for each iteration. The last row
      of the table indicates the average Proactive Path Provisioning
      Time.

      The report should capture the following information in addition
      to the configuration parameters captured in section 5.
      - Number of SDN nodes in the path
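   Both provisioning-time measurements (6.1.4 and 6.1.5) are the same
   Tdf1 - Tsf1 difference averaged over iterations; a minimal Python
   sketch, with hypothetical parameter names:

      def provisioning_time_ms(tsf1, tdf1):
          # One iteration of 6.1.4 (reactive) or 6.1.5 (proactive):
          # Tsf1 and Tdf1 are the timestamps, in seconds, recorded at
          # the emulator interfaces named in the procedures above.
          return (tdf1 - tsf1) * 1000.0

      def average_provisioning_time_ms(iterations):
          # 'iterations' is a list of (Tsf1, Tdf1) pairs, one per run.
          return (sum(provisioning_time_ms(t, d) for t, d in iterations)
                  / len(iterations))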
6.1.6 Reactive Path Provisioning Rate

   Objective:
      Measure the maximum number of independent paths a controller can
      concurrently establish between source and destination nodes
      reactively within the test duration, expressed in paths per
      second.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller MUST contain the network topology information
         for the deployed network topology.
      2. The controller should have knowledge about the location of the
         destination addresses for which the paths have to be
         provisioned. This can be achieved through dynamic learning or
         static provisioning.
      3. Ensure that the default action for a flow miss in the SDN node
         is 'send to controller'.
      4. Ensure that each SDN node in a path requires the controller to
         make the forwarding decision while paving the entire path.

   Procedure:
      1. Send traffic with unique source and destination addresses from
         test traffic generator TP1.
      2. Record the total number of unique traffic frames (Ndf)
         received at test traffic generator TP2 within the test
         duration (Td).

   Measurement:
                                               Ndf
      Reactive Path Provisioning Rate Tr1 = ------
                                               Td

                                                Tr1 + Tr2 + Tr3 .. Trn
      Average Reactive Path Provisioning Rate = ----------------------
                                                Total Test Iterations

   Reporting Format:
      The Reactive Path Provisioning Rate results MUST be reported in
      the format of a table with a row for each iteration. The last row
      of the table indicates the average Reactive Path Provisioning
      Rate.

      The report should capture the following information in addition
      to the configuration parameters captured in section 5.
      - Number of SDN nodes in the path
      - Offered rate

6.1.7 Proactive Path Provisioning Rate

   Objective:
      Measure the maximum number of independent paths a controller can
      concurrently establish between source and destination nodes
      proactively within the test duration, expressed in paths per
      second.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller MUST contain the network topology information
         for the deployed network topology.
      2. The controller should have knowledge about the location of the
         destination addresses for which the paths have to be
         provisioned. This can be achieved through dynamic learning or
         static provisioning.
      3. Ensure that the default action for a flow miss in the SDN node
         is 'drop'.

   Procedure:
      1. Send traffic continuously with unique source and destination
         addresses from test traffic generator TP1.
      2. Install the corresponding flow entries to reach from the
         simulated sources at test traffic generator TP1 to the
         simulated destinations at test traffic generator TP2 through
         the controller's northbound or management interface.
      3. Record the total number of unique traffic frames received
         (Ndf) at test traffic generator TP2 within the test duration
         (Td).

   Measurement:
                                                Ndf
      Proactive Path Provisioning Rate Tr1 = ------
                                                Td

                                                 Tr1 + Tr2 + Tr3 .. Trn
      Average Proactive Path Provisioning Rate = ----------------------
                                                 Total Test Iterations

   Reporting Format:
      The Proactive Path Provisioning Rate results MUST be reported in
      the format of a table with a row for each iteration. The last row
      of the table indicates the average Proactive Path Provisioning
      Rate.

      The report should capture the following information in addition
      to the configuration parameters captured in section 5.
      - Number of SDN nodes in the path
      - Offered rate
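   Since each unique (source, destination) pair that arrives at TP2
   evidences one established path, the rate measurement of 6.1.6 and
   6.1.7 can be sketched as follows in Python; 'received_frames' is a
   hypothetical iterable of (src, dst) address tuples captured within
   Td.

      def provisioned_paths(received_frames):
          # Count the distinct (source, destination) address pairs
          # seen at TP2 within the test duration Td (this is Ndf).
          return len(set(received_frames))

      def path_provisioning_rate(received_frames, td_s):
          # Ndf / Td, in paths per second, for one iteration.
          return provisioned_paths(received_frames) / td_s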
6.1.8 Network Topology Change Detection Time

   Objective:
      Measure the time taken by the controller to detect any changes in
      the network topology, expressed in milliseconds.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller MUST have discovered the network topology
         information for the deployed network topology.
      2. The periodic network discovery operation should be configured
         to twice the Test Duration (Td) value.

   Procedure:
      1. Trigger a topology change event by bringing down an active SDN
         node in the topology.
      2. Record the time when the first topology change notification is
         sent to the controller (Tcn) at the forwarding plane test
         emulator interface (I1).
      3. Stop the test when the controller sends the first topology
         re-discovery message to the SDN node, or at the expiry of the
         test interval (Td).
      4. Record the time when the first topology re-discovery message
         is received from the controller (Tcd) at the forwarding plane
         test emulator interface (I1).

   Measurement:
      Network Topology Change Detection Time Tr1 = Tcd - Tcn.

                                        Tr1 + Tr2 + Tr3 .. Trn
      Average Network Topology Change
      Detection Time                  = ----------------------
                                        Total Test Iterations

   Reporting Format:
      The Network Topology Change Detection Time results MUST be
      reported in the format of a table with a row for each iteration.
      The last row of the table indicates the average Network Topology
      Change Detection Time.

6.2 Scalability

6.2.1 Control Sessions Capacity

   Objective:
      Measure the maximum number of control sessions that the
      controller can maintain.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Procedure:
      1. Establish a control connection with the controller from every
         SDN node emulated in the forwarding plane test emulator.
      2. Stop the test when the controller starts dropping the control
         connections.
      3. Record the number of successful connections established with
         the controller (CCn) at the forwarding plane test emulator.

   Measurement:
      Control Sessions Capacity = CCn.

   Reporting Format:
      The Control Sessions Capacity results MUST be reported in
      addition to the configuration parameters captured in section 5.
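   The 6.2.1 procedure is a simple "add sessions until the controller
   drops one" loop, sketched below in Python; 'open_session' is a
   hypothetical emulator hook, not a real API.

      def control_sessions_capacity(open_session, max_nodes=1000000):
          # Keep bringing up emulated SDN node control sessions until
          # the controller refuses or drops a connection. The hook is
          # assumed to return a live session object, or raise
          # ConnectionError when the controller starts dropping them.
          sessions = []
          try:
              for _ in range(max_nodes):
                  sessions.append(open_session())
          except ConnectionError:
              pass
          return len(sessions)   # CCn = Control Sessions Capacity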
6.2.2 Network Discovery Size

   Objective:
      Measure the network size (number of nodes, links, and hosts) that
      a controller can discover.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller MUST support automatic network discovery.
      2. The tester should be able to retrieve the discovered topology
         information either through the controller's management
         interface or northbound interface.

   Procedure:
      1. Establish the network connections between the controller and
         the network nodes.
      2. Query the controller for the discovered network topology
         information and compare it with the deployed network topology
         information.
      3a. Increase the number of nodes by 1 when the comparison is
          successful and repeat the test.
      3b. Decrease the number of nodes by 1 when the comparison fails
          and repeat the test.
      4. Continue the test until the comparison in step 3b is
         successful.
      5. Record the number of nodes for the last iteration (Ns) where
         the topology comparison was successful.

   Measurement:
      Network Discovery Size = Ns.

   Reporting Format:
      The Network Discovery Size results MUST be reported in addition
      to the configuration parameters captured in section 5.

6.2.3 Forwarding Table Capacity

   Objective:
      Measure the maximum number of flow entries a controller can
      manage in its forwarding table.

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. The controller's forwarding table should be empty.
      2. The flow idle time MUST be set to a high or infinite value.
      3. The controller MUST have completed network topology discovery.
      4. The tester should be able to retrieve the forwarding table
         information either through the controller's management
         interface or northbound interface.

   Procedure:
      Reactive Flow Provisioning Mode:
      1. Send bi-directional traffic continuously with unique source
         and/or destination addresses from test traffic generators TP1
         and TP2 at the asynchronous message processing rate of the
         controller.
      2. Query the controller at a regular interval (e.g., 5 seconds)
         for the number of learnt flow entries from its northbound
         interface.
      3. Stop the test when the retrieved value is constant for three
         consecutive iterations, and record the value received from the
         last query (Nrp).

      Proactive Flow Provisioning Mode:
      1. Install unique flows continuously through the controller's
         northbound or management interface until a failure response is
         received from the controller.
      2. Record the total number of successful responses (Nrp).

   Note:
      Some controller designs for proactive flow provisioning mode may
      require the switch to send flow setup requests in order to
      generate flow setup responses. In such cases, it is recommended
      to generate bi-directional traffic for the provisioned flows.

   Measurement:
      Proactive Flow Provisioning Mode:

         Max Flow Entries = Total number of flows provisioned (Nrp)

      Reactive Flow Provisioning Mode:

         Max Flow Entries = Total number of learnt flow entries (Nrp)

      Forwarding Table Capacity = Max Flow Entries.

   Reporting Format:
      The Forwarding Table Capacity results MUST be tabulated with the
      following information in addition to the configuration parameters
      captured in section 5.
      - Provisioning Type (Proactive/Reactive)
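   The reactive-mode stop criterion of 6.2.3 is the same
   "three-consecutive-identical-queries" loop used elsewhere in this
   document; a Python sketch with a hypothetical 'query_flow_count'
   northbound hook:

      import time

      def forwarding_table_capacity(query_flow_count, interval_s=5):
          # Poll the controller's northbound interface until three
          # consecutive queries return the same number of learnt
          # flow entries, then report that value as Nrp.
          window = []
          while True:
              window = (window + [query_flow_count()])[-3:]
              if len(window) == 3 and window[0] == window[1] == window[2]:
                  return window[-1]   # Nrp = Max Flow Entries
              time.sleep(interval_s)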
6.3 Security

6.3.1 Exception Handling

   Objective:
      Determine the effect of handling error packets and notifications
      on performance tests. The impact MUST be measured for the
      following performance tests:
      a. Path Provisioning Rate
      b. Path Provisioning Time
      c. Network Topology Change Detection Time

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. This test MUST be performed after obtaining the baseline
         measurement results for the above performance tests.
      2. Ensure that the invalid messages are not dropped by the
         intermediate devices connecting the controller and the SDN
         nodes.

   Procedure:
      1. Perform the above listed performance tests, and send 1% of the
         messages from the Asynchronous Message Processing Rate as
         invalid messages from the connected SDN nodes emulated at the
         forwarding plane test emulator.
      2. Perform the above listed performance tests, and send 2% of the
         messages from the Asynchronous Message Processing Rate as
         invalid messages from the connected SDN nodes emulated at the
         forwarding plane test emulator.

   Note:
      1. Invalid messages can be frames with incorrect protocol fields
         or any form of failure notifications sent towards the
         controller. (A sketch of selecting the invalid share of
         messages appears at the end of this section.)

   Measurement:
      Measurement MUST be done as per the equation defined in the
      corresponding performance test's measurement section.

   Reporting Format:
      The Exception Handling results MUST be reported in the format of
      a table with a column for each of the below parameters and a row
      for each of the listed performance tests.
      - Without Exceptions
      - With 1% Exceptions
      - With 2% Exceptions

6.3.2 Denial of Service Handling

   Objective:
      Determine the effect of handling DoS attacks on performance and
      scalability tests. The impact MUST be measured for the following
      tests:
      a. Path Provisioning Rate
      b. Path Provisioning Time
      c. Network Topology Change Detection Time
      d. Network Discovery Size

   Reference Test Setup:
      The test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      This test MUST be performed after obtaining the baseline
      measurement results for the above tests.

   Procedure:
      1. Perform the listed tests and launch a DoS attack towards the
         controller while the test is running.

   Note:
      DoS attacks can be launched on one of the following interfaces.
      a. Northbound (e.g., sending a huge number of requests on the
         northbound interface)
      b. Management (e.g., ping requests to the controller's management
         interface)
      c. Southbound (e.g., TCP SYN messages on the southbound
         interface)

   Measurement:
      Measurement MUST be done as per the equation defined in the
      corresponding test's measurement section.

   Reporting Format:
      The DoS Attacks Handling results MUST be reported in the format
      of a table with a column for each of the below parameters and a
      row for each of the listed tests.
      - Without any attacks
      - With attacks

      The report should also specify the nature of the attack and the
      interface.
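   For the exception handling procedure of 6.3.1, the emulator must
   choose which 1% or 2% of the offered asynchronous messages to
   corrupt. One possible selection, sketched in Python (the corruption
   mechanism itself is emulator specific and not defined here):

      import random

      def mark_invalid(messages, percent):
          # Flag 'percent' (1 or 2) of the messages for corruption
          # (e.g., an incorrect protocol field) before transmission.
          # Returns (message, is_invalid) pairs in the original order.
          k = len(messages) * percent // 100
          bad = set(random.sample(range(len(messages)), k))
          return [(msg, i in bad) for i, msg in enumerate(messages)]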
6.4 Reliability

6.4.1 Controller Failover Time

   Objective:
      Measure the time taken to switch from an active controller to the
      backup controller when the controllers work in redundancy mode
      and the active controller fails.

   Reference Test Setup:
      The test SHOULD use the test setup described in section 3.2 of
      this document.

   Prerequisite:
      1. Master controller election MUST be completed.
      2. Nodes are connected to the controller cluster as per the
         Redundancy Mode (RM).
      3. The controller cluster should have completed the network
         topology discovery.
      4. The SDN node MUST send all new flows to the controller when it
         receives them from the test traffic generator.
      5. The controller should have learnt the location of the
         destination (D1) at test traffic generator TP2.

   Procedure:
      1. Send uni-directional traffic continuously with incremental
         sequence numbers and source addresses from test traffic
         generator TP1 at the rate that the controller can process
         without any drops.
      2. Ensure that there are no packet drops observed at test traffic
         generator TP2.
      3. Bring down the active controller.
      4. Stop the test when the first frame is received on TP2 after
         the failover operation.
      5. Record the time at which the last valid frame was received
         (T1) at test traffic generator TP2 before the sequence error,
         and the time at which the first valid frame was received (T2)
         after the sequence error at TP2.

   Measurement:
      Controller Failover Time = (T2 - T1)
      Packet Loss = Number of missing packet sequences.

   Reporting Format:
      The Controller Failover Time results MUST be tabulated with the
      following information.
      - Number of cluster nodes
      - Redundancy mode
      - Controller Failover Time
      - Packet Loss
      - Cluster keep-alive interval

6.4.2 Network Re-Provisioning Time

   Objective:
      Compute the time taken by the controller to re-route the traffic
      when there is a failure in the existing traffic paths.

   Reference Test Setup:
      This test SHOULD use one of the test setups described in section
      3.1 or section 3.2 of this document.

   Prerequisite:
      1. A network with the given number of nodes and redundant paths
         MUST be deployed.
      2. The controller MUST have knowledge about the location of test
         traffic generators TP1 and TP2.
      3. Ensure that the controller does not pre-provision the
         alternate path in the emulated SDN nodes at the forwarding
         plane test emulator.

   Procedure:
      1. Send bi-directional traffic continuously with unique sequence
         numbers from TP1 and TP2.
      2. Bring down a link or switch in the traffic path.
      3. Stop the test after receiving the first frame after network
         re-convergence.
      4. Record the time of the last received frame prior to the frame
         loss at TP2 (TP2-Tlfr) and the time of the first frame
         received after the frame loss at TP2 (TP2-Tffr).
      5. Record the time of the last received frame prior to the frame
         loss at TP1 (TP1-Tlfr) and the time of the first frame
         received after the frame loss at TP1 (TP1-Tffr).

   Measurement:
      Forward Direction Path Re-Provisioning Time (FDRT)
         = (TP2-Tffr - TP2-Tlfr)

      Reverse Direction Path Re-Provisioning Time (RDRT)
         = (TP1-Tffr - TP1-Tlfr)

      Network Re-Provisioning Time = (FDRT + RDRT)/2

      Forward Direction Packet Loss = Number of missing sequence frames
         at TP2

      Reverse Direction Packet Loss = Number of missing sequence frames
         at TP1

   Reporting Format:
      The Network Re-Provisioning Time results MUST be tabulated with
      the following information.
      - Number of nodes in the primary path
      - Number of nodes in the alternate path
      - Network Re-Provisioning Time
      - Forward Direction Packet Loss
      - Reverse Direction Packet Loss
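   The reliability measurements of 6.4.1 and 6.4.2 reduce to locating
   the sequence-number gap in the capture at TP2 (and TP1), as in this
   illustrative Python sketch; 'rx_records' is a hypothetical ordered
   capture, not a defined data format.

      def failover_time_and_loss(rx_records):
          # 6.4.1: 'rx_records' holds (timestamp, sequence_number)
          # tuples captured at TP2. T1 is the last valid frame before
          # the sequence error, T2 the first valid frame after it;
          # the missing sequence numbers give the packet loss.
          t1 = t2 = None
          lost = 0
          prev = None
          for ts, seq in rx_records:
              if prev is not None and seq != prev + 1:
                  lost += seq - prev - 1
                  if t2 is None:
                      t2 = ts          # first frame after the gap
              elif t2 is None:
                  t1 = ts              # last in-order frame so far
              prev = seq
          return (t2 - t1), lost       # Failover Time, Packet Loss

      def network_reprovisioning_time(tp2_tlfr, tp2_tffr,
                                      tp1_tlfr, tp1_tffr):
          # 6.4.2: average of forward (FDRT) and reverse (RDRT) path
          # re-provisioning times from the TP2 and TP1 timestamps.
          fdrt = tp2_tffr - tp2_tlfr
          rdrt = tp1_tffr - tp1_tlfr
          return (fdrt + rdrt) / 2.0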
7. References

7.1 Normative References

   [RFC2544] S. Bradner, J. McQuaid, "Benchmarking Methodology for
             Network Interconnect Devices", RFC 2544, March 1999.

   [RFC2330] V. Paxson, G. Almes, J. Mahdavi, M. Mathis, "Framework for
             IP Performance Metrics", RFC 2330, May 1998.

   [RFC6241] R. Enns, M. Bjorklund, J. Schoenwaelder, A. Bierman,
             "Network Configuration Protocol (NETCONF)", RFC 6241,
             June 2011.

   [RFC6020] M. Bjorklund, "YANG - A Data Modeling Language for the
             Network Configuration Protocol (NETCONF)", RFC 6020,
             October 2010.

   [RFC5440] JP. Vasseur, JL. Le Roux, "Path Computation Element (PCE)
             Communication Protocol (PCEP)", RFC 5440, March 2009.

   [OpenFlow Switch Specification] ONF, "OpenFlow Switch
             Specification", Version 1.4.0 (Wire Protocol 0x05),
             October 14, 2013.

   [I-D.sdn-controller-benchmark-term] Bhuvaneswaran. V, Anton Basil,
             Mark. T, Vishwas Manral, Sarah Banks, "Terminology for
             Benchmarking SDN Controller Performance",
             draft-bhuvan-bmwg-sdn-controller-benchmark-term-00
             (Work in progress), March 23, 2015.

   [I-D.i2rs-architecture] A. Atlas, J. Halpern, S. Hares, D. Ward,
             T. Nadeau, "An Architecture for the Interface to the
             Routing System", draft-ietf-i2rs-architecture-09
             (Work in progress), March 6, 2015.

7.2 Informative References

   [OpenContrail] Ankur Singla, Bruno Rijsman, "OpenContrail
             Architecture Documentation",
             http://opencontrail.org/opencontrail-architecture-documentation

   [OpenDaylight] OpenDaylight Controller: Architectural Framework,
             https://wiki.opendaylight.org/view/OpenDaylight_Controller

8. IANA Considerations

   This document does not have any IANA requests.

9. Security Considerations

   The benchmarking tests described in this document are limited to the
   performance characterization of controllers in a lab environment
   with an isolated network.

10. Appendix A - Example Test Topologies

10.1. Leaf-Spine Topology - Three Tier Network Architecture

                          +----------+
                          |   SDN    |
                          |   Node   |        (Core)
                          +----------+
                           /        \
                          /          \
                  +------+            +------+
                  | SDN  |            | SDN  |   (Spine)
                  | Node |..          | Node |
                  +------+            +------+
                  /      \            /      \
                 /        \          /        \
             l1 /          \        /          \ ln-1
               /            \      /            \
        +--------+           +-------+
        |  SDN   |..         |  SDN  |           (Leaf)
        |  Node  |           | Node  |
        +--------+           +-------+

10.2. Leaf-Spine Topology - Two Tier Network Architecture

                  +------+            +------+
                  | SDN  |            | SDN  |   (Spine)
                  | Node |..          | Node |
                  +------+            +------+
                  /      \            /      \
                 /        \          /        \
             l1 /          \        /          \ ln-1
               /            \      /            \
        +--------+           +-------+
        |  SDN   |..         |  SDN  |           (Leaf)
        |  Node  |           | Node  |
        +--------+           +-------+

11. Appendix B - Benchmarking Methodology using OpenFlow (OF)
    Controllers

   This section gives an overview of the OpenFlow protocol and provides
   a test methodology to benchmark SDN controllers supporting the
   OpenFlow southbound protocol.

11.1. Protocol Overview

   OpenFlow is an open standard protocol defined by the Open Networking
   Foundation (ONF), used for programming the forwarding plane of
   network switches or routers via a centralized controller.

11.2. Messages Overview

   The OpenFlow protocol supports three message types, namely
   controller-to-switch, asynchronous and symmetric.

   Controller-to-switch messages are initiated by the controller and
   used to directly manage or inspect the state of the switch. These
   messages allow controllers to query/configure the switch (Features,
   Configuration messages), collect information from a switch (Read-
   State message), send packets on a specified port of a switch
   (Packet-out message), and modify the switch forwarding plane and
   state (Modify-State, Role-Request messages, etc.).

   Asynchronous messages are generated by the switch without a
   controller soliciting them. These messages allow switches to update
   controllers to denote the arrival of a new flow (Packet-in), switch
   state changes (Flow-Removed, Port-status) and errors (Error).

   Symmetric messages are generated in either direction without
   solicitation. These messages allow switches and controllers to set
   up a connection (Hello), verify liveness (Echo) and offer additional
   functionality (Experimenter).

11.3. Connection Overview

   The OpenFlow channel is used to exchange OpenFlow messages between
   an OpenFlow switch and an OpenFlow controller. The OpenFlow channel
   connection can be set up using plain TCP or TLS. By default, a
   switch establishes a single connection with the SDN controller. A
   switch may establish multiple parallel connections to a single
   controller (auxiliary connections) or to multiple controllers, to
   handle controller failures and load balancing.
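   A connection setup over plain TCP starts with the HELLO exchange
   shown in the procedures that follow. As an illustration, the sketch
   below opens such a channel in Python; the 8-byte header layout
   (version, type, length, xid, in network byte order) and OFPT_HELLO
   type 0 come from the OpenFlow specification cited in 7.1, while the
   single-read of the reply header is a simplifying assumption and TLS
   setup is omitted.

      import socket
      import struct

      OFP_VERSION = 0x05   # OpenFlow 1.4, wire protocol 0x05
      OFPT_HELLO = 0

      def ofp_hello(xid=1):
          # A header-only OFPT_HELLO: version (1 byte), type (1 byte),
          # length (2 bytes), xid (4 bytes), network byte order.
          return struct.pack("!BBHI", OFP_VERSION, OFPT_HELLO, 8, xid)

      def open_channel(controller_ip, port=6653):
          # 6653 is the IANA-assigned OpenFlow port. This sketch
          # assumes the controller's 8-byte reply header arrives in a
          # single read and ignores any hello elements following it.
          s = socket.create_connection((controller_ip, port))
          s.sendall(ofp_hello())
          version, msg_type, length, xid = struct.unpack("!BBHI",
                                                         s.recv(8))
          assert msg_type == OFPT_HELLO, "expected controller HELLO"
          return s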
11.4 Performance Benchmarking Tests

11.4.1 Network Topology Discovery Time

   Procedure:

   SDN Nodes                 OpenFlow                SDN
                             Controller              Application
      |                          |                      |
      |   OFPT_HELLO Exchange    |                      |
      |<------------------------>|                      |
      |                          |                      |
      |   PACKET_OUT with LLDP   |                      |
      |     to all switches      |                      |
 (Tm1)|<-------------------------|                      |
      |                          |                      |
      |     PACKET_IN with LLDP  |                      |
      |     rcvd from switch-1   |                      |
      |------------------------->|                      |
      |                          |                      |
      |     PACKET_IN with LLDP  |                      |
      |     rcvd from switch-2   |                      |
      |------------------------->|                      |
      |            .             |                      |
      |            .             |                      |
      |                          |                      |
      |     PACKET_IN with LLDP  |                      |
      |     rcvd from switch-n   |                      |
 (Tmn)|------------------------->|                      |
      |                          |                      |
      |                          | Query the controller |
      |                          | for discovered       |
      |                          | n/w topo. (Di)       |
      |                          |<---------------------|
      |                          |                      |

   Legend:
      NB: Northbound
      SB: Southbound
      OF: OpenFlow
      Tm1: Time of reception of the first LLDP message from the
           controller
      Tmn: Time of the last LLDP message sent to the controller

   Discussion:
      The Network Topology Discovery Time can be obtained by
      calculating the time difference between the first PACKET_OUT with
      LLDP message received from the controller (Tm1) and the last
      PACKET_IN with LLDP message sent to the controller (Tmn) when the
      comparison is successful.

11.4.2 Asynchronous Message Processing Time

   Procedure:

   SDN Nodes                 OpenFlow                SDN
                             Controller              Application
      |                          |                      |
      |PACKET_IN with single     |                      |
      |OFP match header          |                      |
  (T0)|------------------------->|                      |
      |                          |                      |
      | PACKET_OUT with single   |                      |
      | OFP action header        |                      |
  (R0)|<-------------------------|                      |
      |            .             |                      |
      |            .             |                      |
      |            .             |                      |
      |PACKET_IN with single     |                      |
      |OFP match header          |                      |
  (Tn)|------------------------->|                      |
      |                          |                      |
      | PACKET_OUT with single   |                      |
      | OFP action header        |                      |
  (Rn)|<-------------------------|                      |
      |                          |                      |

   Legend:
      T0, T1, .. Tn are PACKET_IN message transmit timestamps.
      R0, R1, .. Rn are PACKET_OUT message receive timestamps.
      Nrx: Number of successful PACKET_IN/PACKET_OUT message exchanges

   Discussion:
      The Asynchronous Message Processing Time is obtained as the sum
      of ((R0-T0), (R1-T1) .. (Rn-Tn)) divided by Nrx.
11.4.3 Asynchronous Message Processing Rate

   Procedure:

   SDN Nodes                 OpenFlow                SDN
                             Controller              Application
      |                          |                      |
      |PACKET_IN with multiple   |                      |
      |OFP match headers         |                      |
      |------------------------->|                      |
      |                          |                      |
      | PACKET_OUT with multiple |                      |
      | OFP action headers       |                      |
      |<-------------------------|                      |
      |                          |                      |
      |PACKET_IN with multiple   |                      |
      |OFP match headers         |                      |
      |------------------------->|                      |
      |                          |                      |
      | PACKET_OUT with multiple |                      |
      | OFP action headers       |                      |
      |<-------------------------|                      |
      |            .             |                      |
      |            .             |                      |
      |            .             |                      |
      |PACKET_IN with multiple   |                      |
      |OFP match headers         |                      |
      |------------------------->|                      |
      |                          |                      |
      | PACKET_OUT with multiple |                      |
      | OFP action headers       |                      |
      |<-------------------------|                      |
      |                          |                      |

   Discussion:
      The Asynchronous Message Processing Rate is obtained by counting
      the number of OFP action headers received in all PACKET_OUT
      messages during the test duration.

11.4.4 Reactive Path Provisioning Time

   Procedure:

   Test Traffic   Test Traffic      SDN Nodes      OpenFlow
   Generator TP1  Generator TP2                    Controller
      |               |                |               |
      |               |G-ARP (D1)      |               |
      |               |--------------->|               |
      |               |                |               |
      |               |                |PACKET_IN(D1)  |
      |               |                |-------------->|
      |               |                |               |
      |Traffic (S1,D1)|                |               |
(Tsf1)|------------------------------->|               |
      |               |                |               |
      |               |                |PACKET_IN(S1,D1)
      |               |                |-------------->|
      |               |                |               |
      |               |                | FLOW_MOD(D1)  |
      |               |                |<--------------|
      |               |                |               |
      |               |Traffic (S1,D1) |               |
      |        (Tdf1)|<----------------|               |
      |               |                |               |

   Legend:
      G-ARP: Gratuitous ARP message.
      Tsf1: Time of the first frame sent from TP1
      Tdf1: Time of the first frame received at TP2

   Discussion:
      The Reactive Path Provisioning Time can be obtained by finding
      the time difference between the transmit and receive times of the
      traffic (Tdf1 - Tsf1).

11.4.5 Proactive Path Provisioning Time

   Procedure:

   Test Traffic  Test Traffic   SDN Nodes    OpenFlow      SDN
   Generator TP1 Generator TP2               Controller    Application
      |              |              |            |              |
      |              |G-ARP (D1)    |            |              |
      |              |------------->|            |              |
      |              |              |            |              |
      |              |              |PACKET_IN(D1)              |
      |              |              |----------->|              |
      |              |              |            |              |
      |Traffic (S1,D1)              |            |              |
(Tsf1)|---------------------------->|            |              |
      |              |              |            | Install flow |
      |              |              |            | for S1,D1    |
      |              |              |            |<-------------|
      |              |              |            |              |
      |              |              |FLOW_MOD(D1)|              |
      |              |              |<-----------|              |
      |              |              |            |              |
      |              |Traffic (S1,D1)            |              |
      |       (Tdf1)|<--------------|            |              |
      |              |              |            |              |

   Legend:
      G-ARP: Gratuitous ARP message.
      Tsf1: Time of the first frame sent from TP1
      Tdf1: Time of the first frame received at TP2

   Discussion:
      The Proactive Path Provisioning Time can be obtained by finding
      the time difference between the transmit and receive times of the
      traffic (Tdf1 - Tsf1).
11.4.6 Reactive Path Provisioning Rate

   Procedure:

   Test Traffic   Test Traffic      SDN Nodes      OpenFlow
   Generator TP1  Generator TP2                    Controller
      |               |                |               |
      |               |G-ARP (D1..Dn)  |               |
      |               |--------------->|               |
      |               |                |               |
      |               |                |PACKET_IN(D1..Dn)
      |               |                |-------------->|
      |               |                |               |
      |Traffic (S1..Sn,D1..Dn)         |               |
      |------------------------------->|               |
      |               |                |               |
      |               |                |PACKET_IN(S1..Sn,
      |               |                |        D1..Dn)|
      |               |                |-------------->|
      |               |                |               |
      |               |                | FLOW_MOD(S1)  |
      |               |                |<--------------|
      |               |                | FLOW_MOD(D1)  |
      |               |                |<--------------|
      |               |                | FLOW_MOD(S2)  |
      |               |                |<--------------|
      |               |                | FLOW_MOD(D2)  |
      |               |                |<--------------|
      |               |                |       .       |
      |               |                |       .       |
      |               |                | FLOW_MOD(Sn)  |
      |               |                |<--------------|
      |               |                | FLOW_MOD(Dn)  |
      |               |                |<--------------|
      |               |                |               |
      |               |Traffic (S1..Sn,|               |
      |               |        D1..Dn) |               |
      |               |<---------------|               |
      |               |                |               |

   Legend:
      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
              Endpoint n

   Discussion:
      The Reactive Path Provisioning Rate can be obtained by counting
      the total number of frames received at TP2 after the test
      duration.

11.4.7 Proactive Path Provisioning Rate

   Procedure:

   Test Traffic  Test Traffic   SDN Nodes    OpenFlow      SDN
   Generator TP1 Generator TP2               Controller    Application
      |              |              |            |              |
      |              |G-ARP (D1..Dn)|            |              |
      |              |------------->|            |              |
      |              |              |            |              |
      |              |              |PACKET_IN(D1..Dn)           |
      |              |              |----------->|              |
      |              |              |            |              |
      |Traffic (S1..Sn,D1..Dn)      |            |              |
(Tsf1)|---------------------------->|            |              |
      |              |              |            | Install flows|
      |              |              |            | for S1..Sn,  |
      |              |              |            | D1..Dn       |
      |              |              |            |<-------------|
      |              |              |            |              |
      |              |              |FLOW_MOD(S1)|              |
      |              |              |<-----------|              |
      |              |              |FLOW_MOD(D1)|              |
      |              |              |<-----------|              |
      |              |              |     .      |              |
      |              |              |FLOW_MOD(Sn)|              |
      |              |              |<-----------|              |
      |              |              |FLOW_MOD(Dn)|              |
      |              |              |<-----------|              |
      |              |              |            |              |
      |              |Traffic (S1..Sn,           |              |
      |              |       D1..Dn)|            |              |
      |       (Tdf1)|<--------------|            |              |
      |              |              |            |              |

   Legend:
      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
              Endpoint n

   Discussion:
      The Proactive Path Provisioning Rate can be obtained by counting
      the total number of frames received at TP2 after the test
      duration.

11.4.8 Network Topology Change Detection Time

   Procedure:

   SDN Nodes                 OpenFlow                SDN
                             Controller              Application
      |                          |                      |
   T0 |PORT_STATUS with link down|                      |
      | from S1                  |                      |
      |------------------------->|                      |
      |                          |                      |
      |First PACKET_OUT with LLDP|                      |
      |to OF Switch              |                      |
   T1 |<-------------------------|                      |
      |                          |                      |

   Discussion:
      The Network Topology Change Detection Time can be obtained by
      finding the difference between the time at which the OpenFlow
      switch S1 sends the PORT_STATUS message (T0) and the time at
      which the OpenFlow controller sends the first topology
      re-discovery message (T1) to the OpenFlow switches.
11.5 Scalability

11.5.1 Control Sessions Capacity

   Procedure:

     SDN Nodes                               OpenFlow
                                             Controller
         |                                       |
         |   OFPT_HELLO Exchange for Switch 1    |
         |<------------------------------------->|
         |                                       |
         |   OFPT_HELLO Exchange for Switch 2    |
         |<------------------------------------->|
         |                  .                    |
         |                  .                    |
         |                  .                    |
         |   OFPT_HELLO Exchange for Switch n    |
         |X<----------------------------------->X|
         |                                       |

   Discussion:
      Since the OFPT_HELLO exchange for Switch n fails (shown as 'X'
      above), the number of successfully established control sessions,
      n-1, provides the Control Sessions Capacity.

11.5.2 Network Discovery Size

   Procedure:

     SDN Nodes                    OpenFlow            SDN
                                  Controller          Application
         |                            |                    |
         |                            |                    |
         |    OFPT_HELLO Exchange     |                    |
         |<-------------------------->|                    |
         |                            |                    |
         |    PACKET_OUT with LLDP    |                    |
         |    to all switches         |                    |
         |<---------------------------|                    |
         |                            |                    |
         |        PACKET_IN with LLDP |                    |
         |        rcvd from switch-1  |                    |
         |--------------------------->|                    |
         |                            |                    |
         |        PACKET_IN with LLDP |                    |
         |        rcvd from switch-2  |                    |
         |--------------------------->|                    |
         |             .              |                    |
         |             .              |                    |
         |                            |                    |
         |        PACKET_IN with LLDP |                    |
         |        rcvd from switch-n  |                    |
         |--------------------------->|                    |
         |                            |                    |
         |                            | Query the controller for
         |                            | discovered n/w topo.(N1)
         |                            |<--------------------|
         |                            |                    |

   Legend:
      n/w topo: Network Topology
      OF: OpenFlow

   Discussion:
      The value of N1 provides the Network Discovery Size value. The
      test duration can be set to the stipulated time within which the
      user expects the controller to complete the discovery process.

11.5.3 Forwarding Table Capacity

   Procedure:

     Test Traffic   SDN Nodes        OpenFlow        SDN
     Generator TP1                   Controller      Application
         |               |                |                |
         |               |                |                |
         |G-ARP (H1..Hn) |                |                |
  Step 1 |-------------->|                |                |
         |               |                |                |
         |               |PACKET_IN(D1..Dn)                |
         |               |--------------->|                |
         |               |                |                |
  Step 2 |               |                |<Wait for 5 secs>
         |               |                |                |
         |               |                | <Query FWD     |(F1)
         |               |                |  table entries>|
         |               |                |<---------------|
         |               |                |                |
         |               |                |<Wait for 5 secs>
         |               |                |                |
         |               |                | <Query FWD     |(F2)
         |               |                |  table entries>|
         |               |                |<---------------|
         |               |                |                |
         |               |                |<Wait for 5 secs>
         |               |                |                |
         |               |                | <Query FWD     |(F3)
         |               |                |  table entries>|
         |               |                |<---------------|
         |               |                |                |

   Legend:
      G-ARP: Gratuitous ARP
      H1..Hn: Host 1 .. Host n
      FWD: Forwarding Table
      F1, F2, F3: Number of forwarding table entries returned by
      three successive queries.

   Discussion:
      Query the controller forwarding table entries repeatedly until
      three consecutive queries return the same value. The last value
      retrieved from the controller provides the Forwarding Table
      Capacity value. The query interval is user configurable; the
      5-second interval shown in this example is for representational
      purpose only.

11.6 Security

11.6.1 Exception Handling

   Procedure:

     Test Traffic   Test Traffic   SDN Nodes    OpenFlow      SDN
     Generator TP1  Generator TP2               Controller   Application
         |               |              |             |            |
         |               |G-ARP (D1..Dn)|             |            |
         |               |------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(D1..Dn)         |
         |               |              |------------>|            |
         |               |              |             |            |
  Step 1 |Traffic (S1..Sn,D1..Dn)       |             |            |
         |----------------------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(S1..Sa,         |
         |               |              |        D1..Da)           |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |              |PACKET_IN(Sa+1..Sn,       |
         |               |              |       Da+1..Dn)          |
         |               |              |(1% incorrect OFP         |
         |               |              |  Match header)           |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(D1..Dn)          |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(S1..Sa)          |
         |               |              |  OFP headers|            |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |Traffic (S1..Sa,            |            |
         |               |       D1..Da)|             |            |
         |               |<-------------|             |            |
         |               |              |             |            |
         |<Wait for the test duration and record the  |            |
         | number of frames received at TP2 (Rn1)>    |            |
         |               |              |             |            |
  Step 2 |<Repeat Step 1 with 2% incorrect frames and |            |
         | record the number of frames received at    |            |
         | TP2 (Rn2)>    |              |             |            |
         |               |              |             |            |

   Legend:
      G-ARP: Gratuitous ARP
      PACKET_IN(Sa+1..Sn,Da+1..Dn): OpenFlow PACKET_IN with an
           incorrect OFP Match header
      Rn1: Total number of frames received at Test Port 2 with
           1% incorrect frames
      Rn2: Total number of frames received at Test Port 2 with
           2% incorrect frames

   Discussion:
      The traffic rate sent towards the OpenFlow switch from Test Port
      1 should be 1% higher than the Path Programming Rate. Rn1
      provides the Path Provisioning Rate of the controller when
      handling 1% incorrect frames, and Rn2 provides the Path
      Provisioning Rate of the controller when handling 2% incorrect
      frames.

      The procedure defined above provides test steps to determine the
      effect of handling error packets on the Path Programming Rate.
      The same procedure can be adopted to determine the effects on
      the other performance tests listed in this document.

11.6.2 Denial of Service Handling

   Procedure:

     Test Traffic   Test Traffic   SDN Nodes    OpenFlow      SDN
     Generator TP1  Generator TP2               Controller   Application
         |               |              |             |            |
         |               |G-ARP (D1..Dn)|             |            |
         |               |------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(D1..Dn)         |
         |               |              |------------>|            |
         |               |              |             |            |
         |Traffic (S1..Sn,D1..Dn)       |             |            |
         |----------------------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(S1..Sn,         |
         |               |              |        D1..Dn)           |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |              |TCP SYN Attack            |
         |               |              |from a switch             |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(D1..Dn)          |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(S1..Sn)          |
         |               |              |  OFP headers|            |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |Traffic (S1..Sn,            |            |
         |               |       D1..Dn)|             |            |
         |               |<-------------|             |            |
         |               |              |             |            |
         |<Wait for the test duration and record the  |            |
         | number of frames received at TP2 (Rn1)>    |            |
         |               |              |             |            |

   Legend:
      G-ARP: Gratuitous ARP
      Rn1: Total number of frames received at Test Port 2

   Discussion:
      The TCP SYN attack should be launched from one of the
      emulated/simulated OpenFlow switches. Rn1 provides the Path
      Programming Rate of the controller upon handling the denial of
      service attack.

      The procedure defined above provides test steps to determine the
      effect of handling denial of service attacks on the Path
      Programming Rate. The same procedure can be adopted to determine
      the effects on the other performance tests listed in this
      document.
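   As a non-normative illustration, the effect measured in Sections
   11.6.1 and 11.6.2 can be summarized as the relative drop in Path
   Programming Rate. The helper below is an assumption for
   illustration; the baseline and impaired rates (e.g., those derived
   from Rn1 and Rn2) come from the tests above.

      # Illustrative only.
      def rate_degradation(baseline_rate, impaired_rate):
          """Fractional drop in Path Programming Rate (0.0 = none)."""
          return (baseline_rate - impaired_rate) / baseline_rate

      # Example: a baseline of 10000 paths/s and a measured impaired
      # rate of 9200 paths/s imply an 8% degradation under 1% error
      # traffic.
      print(rate_degradation(10000.0, 9200.0))   # 0.08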
11.7 Reliability

11.7.1 Controller Failover Time

   Procedure:

     Test Traffic   Test Traffic   SDN Nodes    OpenFlow      SDN
     Generator TP1  Generator TP2               Controller   Application
         |               |              |             |            |
         |               |G-ARP (D1)    |             |            |
         |               |------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(D1)|            |
         |               |              |------------>|            |
         |               |              |             |            |
  Step 1 |Traffic (S1..Sn,D1)           |             |            |
         |----------------------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(S1,D1)          |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(D1) |            |
         |               |              |<------------|            |
         |               |              |FLOW_MOD(S1) |            |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |Traffic (S1,D1)             |            |
         |               |<-------------|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(S2,D1)          |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(S2) |            |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |              |PACKET_IN(Sn-1,D1)        |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |              |PACKET_IN(Sn,D1)          |
         |               |              |------------>|            |
         |               |              |      .      |            |
         |               |              |      .      |            |
         |               |              |<Bring down the active    |
         |               |              | controller> |            |
         |               |              |             |            |
         |               |              |FLOW_MOD(Sn-1)            |
         |               |              |    <-X------|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(Sn) |            |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |Traffic (Sn,D1)             |            |
         |               |<-------------|             |            |
         |               |              |             |            |

   Legend:
      G-ARP: Gratuitous ARP.

   Discussion:
      The time difference between the last valid frame received before
      the traffic loss and the first frame received after the traffic
      loss provides the controller failover time.

      If there is no frame loss during the controller failover, the
      controller failover time can be deemed negligible.

11.7.2 Network Re-Provisioning Time

   Procedure:

     Test Traffic   Test Traffic   SDN Nodes    OpenFlow      SDN
     Generator TP1  Generator TP2               Controller   Application
         |               |              |             |            |
         |               |G-ARP (D1)    |             |            |
         |               |------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(D1)|            |
         |               |              |------------>|            |
         |G-ARP (S1)     |              |             |            |
         |----------------------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(S1)|            |
         |               |              |------------>|            |
         |               |              |             |            |
         |Traffic (S1,D1,Seq.no (1..n)) |             |            |
         |----------------------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(S1,D1)          |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |Traffic (D1,S1,             |            |
         |               | Seq.no (1..n))             |            |
         |               |------------->|             |            |
         |               |              |             |            |
         |               |              |PACKET_IN(D1,S1)          |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(D1) |            |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(S1) |            |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |Traffic (S1,D1,             |            |
         |               |    Seq.no(1))|             |            |
         |               |<-------------|             |            |
         |               |              |             |            |
         |               |Traffic (S1,D1,             |            |
         |               |    Seq.no(2))|             |            |
         |               |<-------------|             |            |
         |               |              |             |            |
         |   Traffic (D1,S1,Seq.no(1))  |             |            |
         |<-----------------------------|             |            |
         |               |              |             |            |
         |   Traffic (D1,S1,Seq.no(2))  |             |            |
         |<-----------------------------|             |            |
         |               |              |             |            |
         |   Traffic (D1,S1,Seq.no(x))  |             |            |
         |<-----------------------------|             |            |
         |               |              |             |            |
         |               |Traffic (S1,D1,             |            |
         |               |    Seq.no(x))|             |            |
         |               |<-------------|             |            |
         |               |              |             |            |
         |               |              |<Bring down a switch in   |
         |               |              | the active traffic path> |
         |               |              |             |            |
         |               |              |PORT_STATUS(Sa)           |
         |               |              |------------>|            |
         |               |              |             |            |
         |               |Traffic (S1,D1,             |            |
         |               |  Seq.no(n-1))|             |            |
         |               | X<-----------|             |            |
         |               |              |             |            |
         |  Traffic (D1,S1,Seq.no(n-1)) |             |            |
         | X----------------------------|             |            |
         |               |              |             |            |
         |               |              |FLOW_MOD(D1) |            |
         |               |              |<------------|            |
         |               |              |             |            |
         |               |              |FLOW_MOD(S1) |            |
         |               |              |<------------|            |
         |               |              |             |            |
         |   Traffic (D1,S1,Seq.no(n))  |             |            |
         |<-----------------------------|             |            |
         |               |              |             |            |
         |               |Traffic (S1,D1,             |            |
         |               |    Seq.no(n))|             |            |
         |               |<-------------|             |            |
         |               |              |             |            |

   Legend:
      G-ARP: Gratuitous ARP message.
      Seq.no: Sequence number.
      Sa: Neighbour switch of the switch that was brought down.

   Discussion:
      The time difference between the last valid frame received before
      the traffic loss (the frame with sequence number x) and the
      first frame received after the traffic loss (the frame with
      sequence number n) provides the network path re-provisioning
      time.

      Note that the test is valid only when the controller provisions
      an alternate path upon network failure.

12. Acknowledgements

   The authors would like to thank the following individuals for
   providing their valuable comments on the earlier versions of this
   document: Al Morton (AT&T), Sandeep Gangadharan (HP),
   M. Georgescu (NAIST), Andrew McGregor (Google),
   Scott Bradner (Harvard University), Jay Karthik (Cisco),
   Ramakrishnan (Brocade).

13. Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett-Packard
   8000 Foothills Blvd
   Roseville, CA 95747

   Email: mark.tassinari@hp.com

   Vishwas Manral
   Ionos Corp
   4100 Moorpark Ave
   San Jose, CA

   Email: vishwas@ionosnetworks.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive
   Sunnyvale, CA

   Email: sbanks@encrypted.net