1 Internet-Draft Bhuvaneswaran Vengainathan 2 Network Working Group Anton Basil 3 Intended Status: Informational Veryx Technologies 4 Expires: May 16, 2018 Mark Tassinari 5 Hewlett-Packard 6 Vishwas Manral 7 Nano Sec 8 Sarah Banks 9 VSS Monitoring 10 November 16, 2017 12 Benchmarking Methodology for SDN Controller Performance 13 draft-ietf-bmwg-sdn-controller-benchmark-meth-06 15 Abstract 17 This document defines methodologies for benchmarking the control 18 plane performance of SDN controllers. Terminology related to 19 benchmarking SDN controllers is described in the companion 20 terminology document. SDN controllers have been implemented with 21 many varying designs in order to achieve their intended network 22 functionality. Hence, the authors have taken the approach of 23 considering an SDN controller as a black box, defining the 24 methodology in a manner that is agnostic to protocols and network 25 services supported by controllers. The intent of this document is to 26 provide a standard mechanism to measure the performance of all 27 controller implementations. 29 Status of this Memo 31 This Internet-Draft is submitted in full conformance with the 32 provisions of BCP 78 and BCP 79. 34 Internet-Drafts are working documents of the Internet Engineering 35 Task Force (IETF). Note that other groups may also distribute 36 working documents as Internet-Drafts. The list of current Internet- 37 Drafts is at http://datatracker.ietf.org/drafts/current. 39 Internet-Drafts are draft documents valid for a maximum of six 40 months and may be updated, replaced, or obsoleted by other documents 41 at any time. It is inappropriate to use Internet-Drafts as reference 42 material or to cite them other than as "work in progress." 44 This Internet-Draft will expire on May 16, 2018. 46 Copyright Notice 48 Copyright (c) 2017 IETF Trust and the persons identified as the 49 document authors. All rights reserved. 51 This document is subject to BCP 78 and the IETF Trust's Legal 52 Provisions Relating to IETF Documents 53 (http://trustee.ietf.org/license-info) in effect on the date of 54 publication of this document.
Please review these documents 55 carefully, as they describe your rights and restrictions with 56 respect to this document. Code Components extracted from this 57 document must include Simplified BSD License text as described in 58 Section 4.e of the Trust Legal Provisions and are provided without 59 warranty as described in the Simplified BSD License. 61 Table of Contents 63 1. Introduction...................................................4 64 2. Scope..........................................................4 65 3. Test Setup.....................................................5 66 3.1. Test setup - Controller working in Standalone Mode........5 67 3.2. Test setup - Controller working in Cluster Mode...........6 68 4. Test Considerations............................................7 69 4.1. Network Topology..........................................7 70 4.2. Test Traffic..............................................7 71 4.3. Test Emulator Requirements................................7 72 4.4. Connection Setup..........................................7 73 4.5. Measurement Point Specification and Recommendation........8 74 4.6. Connectivity Recommendation...............................8 75 4.7. Test Repeatability........................................8 76 5. Benchmarking Tests.............................................9 77 5.1. Performance...............................................9 78 5.1.1. Network Topology Discovery Time......................9 79 5.1.2. Asynchronous Message Processing Time................11 80 5.1.3. Asynchronous Message Processing Rate................12 81 5.1.4. Reactive Path Provisioning Time.....................15 82 5.1.5. Proactive Path Provisioning Time....................16 83 5.1.6. Reactive Path Provisioning Rate.....................17 84 5.1.7. Proactive Path Provisioning Rate....................19 85 5.1.8. Network Topology Change Detection Time..............20 86 5.2. Scalability..............................................22 87 5.2.1. Control Session Capacity............................22 88 5.2.2. Network Discovery Size..............................22 89 5.2.3. Forwarding Table Capacity...........................23 90 5.3. Security.................................................25 91 5.3.1. Exception Handling..................................25 92 5.3.2. Denial of Service Handling..........................26 93 5.4. Reliability..............................................28 94 5.4.1. Controller Failover Time............................28 95 5.4.2. Network Re-Provisioning Time........................29 96 6. References....................................................31 97 6.1. Normative References.....................................31 98 6.2. Informative References...................................31 99 7. IANA Considerations...........................................31 100 8. Security Considerations.......................................31 101 9. Acknowledgments...............................................32 102 Appendix A. Example Test Topologies..............................33 103 A.1. Leaf-Spine Topology - Three Tier Network Architecture....33 104 A.2. Leaf-Spine Topology - Two Tier Network Architecture......33 105 Appendix B. Benchmarking Methodology using OpenFlow Controllers..34 106 B.1. Protocol Overview........................................34 107 B.2. Messages Overview........................................34 108 B.3. Connection Overview......................................34 109 B.4. 
Performance Benchmarking Tests...........................35 110 B.4.1. Network Topology Discovery Time.....................35 111 B.4.2. Asynchronous Message Processing Time................36 112 B.4.3. Asynchronous Message Processing Rate................37 113 B.4.4. Reactive Path Provisioning Time.....................38 114 B.4.5. Proactive Path Provisioning Time....................39 115 B.4.6. Reactive Path Provisioning Rate.....................40 116 B.4.7. Proactive Path Provisioning Rate....................41 117 B.4.8. Network Topology Change Detection Time..............42 118 B.5. Scalability..............................................43 119 B.5.1. Control Sessions Capacity...........................43 120 B.5.2. Network Discovery Size..............................43 121 B.5.3. Forwarding Table Capacity...........................44 122 B.6. Security.................................................46 123 B.6.1. Exception Handling..................................46 124 B.6.2. Denial of Service Handling..........................47 125 B.7. Reliability..............................................49 126 B.7.1. Controller Failover Time............................49 127 B.7.2. Network Re-Provisioning Time........................50 128 Authors' Addresses...............................................53 130 1. Introduction 132 This document provides generic methodologies for benchmarking SDN 133 controller performance. An SDN controller may support many 134 northbound and southbound protocols, implement a wide range of 135 applications, and work alone or as a group to achieve the desired 136 functionality. This document considers an SDN controller as a black 137 box, regardless of design and implementation. The tests defined in 138 this document can be used to benchmark an SDN controller for 139 performance, scalability, reliability and security, independent of 140 northbound and southbound protocols. These tests can be performed on 141 an SDN controller running as a virtual machine (VM) instance or on a 142 bare metal server. This document is intended for those who want to 143 measure SDN controller performance as well as compare the 144 performance of various SDN controllers. 146 Conventions used in this document 148 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 149 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 150 document are to be interpreted as described in RFC 2119. 152 2. Scope 154 This document defines a methodology to measure the networking metrics 155 of SDN controllers. For the purpose of this memo, the SDN controller 156 is a function that manages and controls Network Devices. Any SDN 157 controller without a control capability is out of scope for this 158 memo. The tests defined in this document enable benchmarking of SDN 159 Controllers in two ways: as a standalone controller and as a cluster 160 of homogeneous controllers. These tests are recommended for 161 execution in lab environments rather than in live network 162 deployments. Performance benchmarking of a federation of controllers 163 is beyond the scope of this document. 164 3. Test Setup 165 The tests defined in this document enable measurement of an SDN 166 controller's performance in standalone mode and cluster mode. This 167 section defines common reference topologies that are later referred 168 to in individual tests (additional forwarding plane topologies are 169 provided in Appendix A). 173 3.1.
Test setup - Controller working in Standalone Mode 175 +-----------------------------------------------------------+ 176 | Application Plane Test Emulator | 177 | | 178 | +-----------------+ +-------------+ | 179 | | Application | | Service | | 180 | +-----------------+ +-------------+ | 181 | | 182 +-----------------------------+(I2)-------------------------+ 183 | 184 | 185 | (Northbound interfaces) 186 +-------------------------------+ 187 | +----------------+ | 188 | | SDN Controller | | 189 | +----------------+ | 190 | | 191 | Device Under Test (DUT) | 192 +-------------------------------+ 193 | (Southbound interfaces) 194 | 195 | 196 +-----------------------------+(I1)-------------------------+ 197 | | 198 | +-----------+ +-----------+ | 199 | | Network |l1 ln-1| Network | | 200 | | Device 1 |---- .... ----| Device n | | 201 | +-----------+ +-----------+ | 202 | |l0 |ln | 203 | | | | 204 | | | | 205 | +---------------+ +---------------+ | 206 | | Test Traffic | | Test Traffic | | 207 | | Generator | | Generator | | 208 | | (TP1) | | (TP2) | | 209 | +---------------+ +---------------+ | 210 | | 211 | Forwarding Plane Test Emulator | 212 +-----------------------------------------------------------+ 214 Figure 1 216 3.2. Test setup - Controller working in Cluster Mode 218 +-----------------------------------------------------------+ 219 | Application Plane Test Emulator | 220 | | 221 | +-----------------+ +-------------+ | 222 | | Application | | Service | | 223 | +-----------------+ +-------------+ | 224 | | 225 +-----------------------------+(I2)-------------------------+ 226 | 227 | 228 | (Northbound interfaces) 229 +---------------------------------------------------------+ 230 | | 231 | ------------------ ------------------ | 232 | | SDN Controller 1 | <--E/W--> | SDN Controller n | | 233 | ------------------ ------------------ | 234 | | 235 | Device Under Test (DUT) | 236 +---------------------------------------------------------+ 237 | (Southbound interfaces) 238 | 239 | 240 +-----------------------------+(I1)-------------------------+ 241 | | 242 | +-----------+ +-----------+ | 243 | | Network |l1 ln-1| Network | | 244 | | Device 1 |---- .... ----| Device n | | 245 | +-----------+ +-----------+ | 246 | |l0 |ln | 247 | | | | 248 | | | | 249 | +---------------+ +---------------+ | 250 | | Test Traffic | | Test Traffic | | 251 | | Generator | | Generator | | 252 | | (TP1) | | (TP2) | | 253 | +---------------+ +---------------+ | 254 | | 255 | Forwarding Plane Test Emulator | 256 +-----------------------------------------------------------+ 258 Figure 2 260 4. Test Considerations 262 4.1. Network Topology 264 The test cases SHOULD use Leaf-Spine topology with at least 1 265 Network Device in the topology for benchmarking. The test traffic 266 generators TP1 and TP2 SHOULD be connected to the first and the last 267 leaf Network Device. If a test case uses a test topology with 1 268 Network Device, the test traffic generators TP1 and TP2 SHOULD be 269 connected to the same node. However, to achieve a complete 270 performance characterization of the SDN controller, it is 271 recommended that the controller be benchmarked for many network 272 topologies and a varying number of Network Devices. This document 273 includes two sample test topologies, defined in Appendix A, for 274 reference. Further, care should be taken to make sure 275 that a loop prevention mechanism is enabled either in the SDN 276 controller, or in the network, when the topology contains redundant 277 network paths.
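As an informal illustration of the topology recommendation above, the following Python sketch (not part of the methodology; build_leaf_spine and its return convention are hypothetical) generates the links of a two-tier leaf-spine fabric and selects the first and last leaf nodes as the attachment points for the test traffic generators TP1 and TP2:

   def build_leaf_spine(num_spine, num_leaf):
       # Two-tier fabric: every leaf connects to every spine.
       spines = ["spine%d" % i for i in range(1, num_spine + 1)]
       leaves = ["leaf%d" % i for i in range(1, num_leaf + 1)]
       links = [(leaf, spine) for leaf in leaves for spine in spines]
       # TP1 and TP2 SHOULD attach to the first and last leaf (section 4.1).
       return links, leaves[0], leaves[-1]

   links, tp1_node, tp2_node = build_leaf_spine(num_spine=2, num_leaf=4)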
279 4.2. Test Traffic 281 Test traffic is used to notify the controller about the asynchronous 282 arrival of new flows. The test cases SHOULD use frame sizes of 128, 283 512 and 1508 bytes for benchmarking. Testing using jumbo frames is 284 optional. 286 4.3. Test Emulator Requirements 288 The Test Emulator SHOULD timestamp the transmitted and received 289 control messages to/from the controller on the established network 290 connections. The test cases use these values to compute the 291 controller processing time. 293 4.4. Connection Setup 295 There may be controller implementations that support unencrypted and 296 encrypted network connections with Network Devices. Further, the 297 controller may have backward compatibility with Network Devices 298 running older versions of southbound protocols. It may be useful to 299 measure the controller's performance with one or more of the 300 applicable connection setup methods defined below. 302 1. Unencrypted connection with Network Devices, running the same 303 protocol version. 304 2. Unencrypted connection with Network Devices, running different 305 protocol versions. 306 Example: 308 a. Controller running current protocol version and switch 309 running older protocol version 310 b. Controller running older protocol version and switch 311 running current protocol version 312 3. Encrypted connection with Network Devices, running the same 313 protocol version. 314 4. Encrypted connection with Network Devices, running different 315 protocol versions. 316 Example: 317 a. Controller running current protocol version and switch 318 running older protocol version 319 b. Controller running older protocol version and switch 320 running current protocol version 322 4.5. Measurement Point Specification and Recommendation 324 The measurement accuracy depends on several factors, including the 325 point of observation where the indications are captured. For 326 example, the notification can be observed at the controller or at the test 327 emulator. The test operator SHOULD make the observations/ 328 measurements at the interfaces of the test emulator unless it is 329 explicitly mentioned otherwise in the individual test. In any case, 330 the locations of the measurement points MUST be reported. 332 4.6. Connectivity Recommendation 334 The SDN controller in the test setup SHOULD be connected directly 335 with the forwarding and the management plane test emulators to avoid 336 any delays or failures introduced by intermediate devices during 337 benchmarking tests. When the controller is implemented as a virtual 338 machine, details of the physical and logical connectivity MUST be 339 reported. 341 4.7. Test Repeatability 343 To increase confidence in the measured results, each test SHOULD be 344 repeated a minimum of 10 times.
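Each benchmark in section 5 reports an average of the per-trial values. A minimal Python sketch of that aggregation, assuming the per-trial results have been collected in a list, is shown below; reporting the spread alongside the average is an addition beyond what this document requires, but it helps judge repeatability:

   from statistics import mean, stdev

   def summarize(trial_results):
       # Average over repeated trials (a minimum of 10 per section 4.7).
       avg = mean(trial_results)
       spread = stdev(trial_results) if len(trial_results) > 1 else 0.0
       return avg, spread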
346 Test Reporting 348 Each test has a reporting format that contains some global and 349 identical reporting components, and some individual components that 350 are specific to individual tests. The following test configuration 351 parameters and controller settings parameters MUST be reflected in 352 the test report. 354 Test Configuration Parameters: 356 1. Controller name and version 357 2. Northbound protocols and versions 358 3. Southbound protocols and versions 359 4. Controller redundancy mode (Standalone or Cluster Mode) 360 5. Connection setup (Unencrypted or Encrypted) 361 6. Network Topology (Mesh or Tree or Linear) 362 7. Network Device Type (Physical or Virtual or Emulated) 363 8. Number of Nodes 364 9. Number of Links 365 10. Dataplane Test Traffic Type 366 11. Controller System Configuration (e.g., Physical or Virtual 367 Machine, CPU, Memory, Caches, Operating System, Interface 368 Speed, Storage) 369 12. Reference Test Setup (e.g., Section 3.1) 371 Controller Settings Parameters: 372 1. Topology re-discovery timeout 373 2. Controller redundancy mode (e.g., active-standby) 374 3. Controller state persistence enabled/disabled 376 To ensure test repeatability, the following capabilities of the 377 test emulator SHOULD be reported: 379 1. Maximum number of Network Devices that the forwarding plane 380 emulates 381 2. Control message processing time (e.g., Topology Discovery 382 Messages) 384 One way to determine the above two values is to simulate the 385 required control sessions and messages from the control plane. 387 5. Benchmarking Tests 389 5.1. Performance 391 5.1.1. Network Topology Discovery Time 393 Objective: 395 The time taken by the controller(s) to determine the complete network 396 topology, defined as the interval starting with the first discovery 397 message from the controller(s) at its Southbound interface, ending 398 with all features of the static topology determined. 400 Reference Test Setup: 402 The test SHOULD use one of the test setups described in section 3.1 403 or section 3.2 of this document in combination with Appendix A. 405 Prerequisite: 407 1. The controller MUST support network discovery. 408 2. The tester should be able to retrieve the discovered topology 409 information either through the controller's management interface 410 or its northbound interface, to determine if the discovery was 411 successful and complete. 412 3. Ensure that the controller's topology re-discovery timeout has 413 been set to the maximum value, to avoid initiation of the re-discovery 414 process in the middle of the test. 416 Procedure: 418 1. Ensure that the controller is operational and that its network 419 applications and northbound and southbound interfaces are up and 420 running. 421 2. Establish the network connections between the controller and the Network 422 Devices. 423 3. Record the time of the first discovery message (Tm1) received 424 from the controller at the forwarding plane test emulator interface 425 I1. 426 4. Query the controller every 3 seconds to obtain the discovered 427 network topology information through the northbound interface or 428 the management interface, and compare it with the deployed network 429 topology information. 430 5. Stop the trial when the discovered topology information matches 431 the deployed network topology, or when the discovered topology 432 information returns the same details for 3 consecutive queries. 433 6. Record the time of the last discovery message (Tmn) sent to the controller 434 from the forwarding plane test emulator interface (I1) when the 435 trial completes successfully (e.g., when the topology matches). 437 Measurement: 439 Topology Discovery Time Tr1 = Tmn-Tm1. 441 Average Topology Discovery Time = (Tr1 + Tr2 + Tr3 + .. + Trn) / Total Trials 445 Reporting Format: 447 The Topology Discovery Time results MUST be reported in the format 448 of a table, with a row for each successful iteration. The last row 449 of the table indicates the average Topology Discovery Time. 451 If this test is repeated with a varying number of nodes over the same 452 topology, the results SHOULD be reported in the form of a graph. The 453 X coordinate SHOULD be the Number of nodes (N), the Y coordinate 454 SHOULD be the average Topology Discovery Time. 456 If this test is repeated with the same number of nodes over different 457 topologies, the results SHOULD be reported in the form of a graph. 458 The X coordinate SHOULD be the Topology Type, the Y coordinate 459 SHOULD be the average Topology Discovery Time.
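The trial logic above lends itself to simple automation. The following Python sketch is illustrative only; the emulator and northbound handles and their methods (first_discovery_message_time, get_topology, last_discovery_message_time) are hypothetical names, not defined by this document:

   import time

   def topology_discovery_time(emulator, controller_nb, deployed, max_queries=100):
       tm1 = emulator.first_discovery_message_time()    # Tm1, observed at I1
       same_count, previous = 0, None
       for _ in range(max_queries):
           time.sleep(3)                                # query every 3 seconds
           discovered = controller_nb.get_topology()
           same_count = same_count + 1 if discovered == previous else 1
           if discovered == deployed or same_count == 3:
               break                                    # trial complete
           previous = discovered
       tmn = emulator.last_discovery_message_time()     # Tmn, observed at I1
       return tmn - tm1                                 # Tr1 = Tmn - Tm1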
461 5.1.2. Asynchronous Message Processing Time 463 Objective: 465 The time taken by the controller(s) to process an asynchronous message, 466 defined as the interval starting with an asynchronous message from a 467 network device after the discovery of all the devices by the 468 controller(s), ending with a response message from the controller(s) 469 at its Southbound interface. 471 Reference Test Setup: 473 This test SHOULD use one of the test setups described in section 3.1 474 or section 3.2 of this document in combination with Appendix A. 476 Prerequisite: 478 1. The controller MUST have successfully completed the network 479 topology discovery for the connected Network Devices. 481 Procedure: 483 1. Generate asynchronous messages from every connected Network 484 Device to the SDN controller, one at a time in series from the 485 forwarding plane test emulator, for the trial duration. 486 2. Record every request transmit timestamp (T1) and the 487 corresponding response receive timestamp (R1) at the 488 forwarding plane test emulator interface (I1) for every 489 successful message exchange. 491 Measurement: 493 Asynchronous Message Processing Time Tr1 = ((R1-T1) + (R2-T2) + .. + (Rn-Tn)) / Nrx 497 Where Nrx is the total number of successful messages exchanged 499 Average Asynchronous Message Processing Time = (Tr1 + Tr2 + Tr3 + .. + Trn) / Total Trials 503 Reporting Format: 505 The Asynchronous Message Processing Time results MUST be reported in 506 the format of a table with a row for each iteration. The last row of 507 the table indicates the average Asynchronous Message Processing 508 Time. 510 The report should capture the following information in addition to 511 the configuration parameters captured in section 5: 512 - Successful messages exchanged (Nrx) 514 If this test is repeated with a varying number of nodes over the same 515 topology, the results SHOULD be reported in the form of a graph. The 516 X coordinate SHOULD be the Number of nodes (N), the Y coordinate 517 SHOULD be the average Asynchronous Message Processing Time. 519 If this test is repeated with the same number of nodes using different 520 topologies, the results SHOULD be reported in the form of a graph. 521 The X coordinate SHOULD be the Topology Type, the Y coordinate 522 SHOULD be the average Asynchronous Message Processing Time.
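The measurement above reduces to a simple computation over the recorded timestamp pairs. A minimal sketch, assuming the (T, R) pairs for successful exchanges have already been collected at interface I1:

   def async_message_processing_time(exchanges):
       # 'exchanges' holds (T, R) transmit/receive timestamp pairs recorded
       # at interface I1 for each successful request/response exchange.
       nrx = len(exchanges)
       total = sum(r - t for t, r in exchanges)   # (R1-T1) + (R2-T2) + ...
       return total / nrx                         # per-trial value Tr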
524 5.1.3. Asynchronous Message Processing Rate 526 Objective: 528 Measure the number of responses to asynchronous messages (such as 529 new flow arrival notification messages) for which the 530 controller(s) performed processing and replied with a valid and 531 productive (non-trivial) response message. 533 This test will measure two benchmarks on Asynchronous Message 534 Processing Rate using a single procedure. The two benchmarks are 535 (see section 2.3.1.3 of [I-D.sdn-controller-benchmark-term]): 537 1. Loss-free Asynchronous Message Processing Rate 539 2. Maximum Asynchronous Message Processing Rate 541 These two benchmarks are determined through a series of trials in which 542 messages are sent to the controller(s), and the 543 responses from the controller(s) are counted over the trial 544 duration. The message response rate and the message loss ratio are 545 calculated for each trial. 547 Reference Test Setup: 549 The test SHOULD use one of the test setups described in section 3.1 550 or section 3.2 of this document in combination with Appendix A. 552 Prerequisite: 554 1. The controller(s) MUST have successfully completed the network 555 topology discovery for the connected Network Devices. 556 2. Choose and record the Trial Duration (Td), the sending rate step- 557 size (STEP), the tolerance on equality for two consecutive trials 558 (P%), and the maximum possible message sending rate (Ntx1/Td). 560 Procedure: 562 1. Generate asynchronous messages continuously at the maximum 563 possible rate on the established connections from all the 564 emulated/simulated Network Devices for the given trial duration 565 (Td). 566 2. Record the total number of responses received from the controller 567 (Nrx1) as well as the number of messages sent (Ntx1) to the 568 controller within the trial duration (Td). 569 3. Calculate the Asynchronous Message Processing Rate (Tr1) and 570 the Message Loss Ratio (Lr1). Ensure that the controller(s) have 571 returned to normal operation. 572 4. Repeat the trial by reducing the asynchronous message sending rate 573 used in the last trial by the STEP size. 574 5. Continue repeating the trials and reducing the sending rate until 575 both the maximum value of Nrxn and the Nrxn corresponding to zero 576 loss ratio have been found. 577 6. The trials corresponding to the benchmark levels MUST be repeated 578 using the same asynchronous message rates until the responses 579 received from the controller are equal (+/-P%) for two consecutive 580 trials. 581 7. Record the number of responses received from the controller (Nrxn) 582 as well as the number of messages sent (Ntxn) to the controller in 583 the last trial. 585 Measurement: 587 Asynchronous Message Processing Rate Trn = Nrxn / Td 591 Maximum Asynchronous Message Processing Rate = MAX(Trn) for all n 593 Asynchronous Message Loss Ratio Lrn = 1 - Nrxn/Ntxn 597 Loss-free Asynchronous Message Processing Rate = MAX(Trn) given 598 Lrn=0 600 Reporting Format: 602 The Asynchronous Message Processing Rate results MUST be reported in 603 the format of a table with a row for each trial. 605 The table should report the following information in addition to the 606 configuration parameters captured in section 5, with columns: 608 - Offered rate (Ntxn/Td) 610 - Asynchronous Message Processing Rate (Trn) 612 - Loss Ratio (Lr) 614 - Benchmark at this iteration (blank for none, Maximum, Loss-Free) 616 The results MAY be presented in the form of a graph. The X axis 617 SHOULD be the Offered rate, and dual Y axes would represent 618 Asynchronous Message Processing Rate and Loss Ratio, respectively. 620 If this test is repeated with a varying number of nodes over the same 621 topology, the results SHOULD be reported in the form of a graph. The 622 X axis SHOULD be the Number of nodes (N), the Y axis SHOULD be the 623 Asynchronous Message Processing Rate. Both the Maximum and the Loss- 624 Free Rates should be plotted for each N. 626 If this test is repeated with the same number of nodes over different 627 topologies, the results SHOULD be reported in the form of a graph. 628 The X axis SHOULD be the Topology Type, the Y axis SHOULD be the 629 Asynchronous Message Processing Rate. Both the Maximum and the Loss- 630 Free Rates should be plotted for each topology.
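The step-down search in the procedure above can be sketched as follows. This is illustrative only: emulator.run_trial is a hypothetical call returning (Ntx, Nrx) for one trial, and the +/-P% confirmation trials of procedure step 6 are omitted for brevity:

   def rate_benchmarks(emulator, td, step, max_rate):
       max_tr = loss_free_tr = 0.0
       rate = max_rate
       while rate > 0:
           ntx, nrx = emulator.run_trial(rate=rate, duration=td)
           tr = nrx / td                # Asynchronous Message Processing Rate
           lr = 1 - nrx / ntx           # Message Loss Ratio
           max_tr = max(max_tr, tr)
           if lr == 0:
               loss_free_tr = tr        # highest zero-loss rate is found first
               break                    # lower rates cannot improve either benchmark
           rate -= step                 # reduce the sending rate by STEP
       return max_tr, loss_free_tr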
632 5.1.4. Reactive Path Provisioning Time 634 Objective: 636 The time taken by the controller to set up a path reactively between 637 source and destination nodes, defined as the interval starting with 638 the first flow provisioning request message received by the 639 controller(s) at its Southbound interface, ending with the last flow 640 provisioning response message sent from the controller(s) at its 641 Southbound interface. 643 Reference Test Setup: 645 The test SHOULD use one of the test setups described in section 3.1 646 or section 3.2 of this document in combination with Appendix A. The 647 number of Network Devices in the path is a parameter of the test 648 that may be varied from 2 to the maximum discovery size in repetitions 649 of this test. 651 Prerequisite: 653 1. The controller MUST contain the network topology information for 654 the deployed network topology. 655 2. The controller should have knowledge of the location of the 656 destination endpoint for which the path has to be provisioned. 657 This can be achieved through dynamic learning or static 658 provisioning. 659 3. Ensure that the default action for 'flow miss' in the Network Device 660 is configured to 'send to controller'. 661 4. Ensure that each Network Device in the path requires the controller 662 to make the forwarding decision while paving the entire path. 664 Procedure: 666 1. Send a single traffic stream from the test traffic generator TP1 667 to test traffic generator TP2. 668 2. Record the time of the first flow provisioning request message 669 sent to the controller (Tsf1) from the Network Device at the 670 forwarding plane test emulator interface (I1). 671 3. Wait for the arrival of the first traffic frame at the Traffic 672 Endpoint TP2 or the expiry of the trial duration (Td). 673 4. Record the time of the last flow provisioning response message 674 received from the controller (Tdf1) to the Network Device at the 675 forwarding plane test emulator interface (I1). 677 Measurement: 679 Reactive Path Provisioning Time Tr1 = Tdf1-Tsf1. 681 Average Reactive Path Provisioning Time = (Tr1 + Tr2 + Tr3 + .. + Trn) / Total Trials 685 Reporting Format: 687 The Reactive Path Provisioning Time results MUST be reported in the 688 format of a table with a row for each iteration. The last row of the 689 table indicates the Average Reactive Path Provisioning Time. 691 The report should capture the following information in addition to 692 the configuration parameters captured in section 5: 694 - Number of Network Devices in the path
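A sketch of one trial of this test, under the assumption of hypothetical test harness handles (tp1/tp2 for the traffic generators, 'emulator' for the control-message timestamps observed at I1):

   def reactive_path_provisioning_time(emulator, tp1, tp2, td):
       tp1.send_stream(dst=tp2)
       tsf1 = emulator.first_flow_request_time()   # Tsf1 at interface I1
       tp2.wait_first_frame(timeout=td)            # first frame at TP2 or Td expiry
       tdf1 = emulator.last_flow_response_time()   # Tdf1 at interface I1
       return tdf1 - tsf1                          # Tr1 = Tdf1 - Tsf1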
697 5.1.5. Proactive Path Provisioning Time 699 Objective: 701 The time taken by the controller to set up a path proactively between 702 source and destination nodes, defined as the interval starting with 703 the first proactive flow provisioned in the controller(s) at its 704 Northbound interface, ending with the last flow provisioning 705 response message sent from the controller(s) at its Southbound 706 interface. 707 Reference Test Setup: 709 The test SHOULD use one of the test setups described in section 3.1 710 or section 3.2 of this document in combination with Appendix A. 712 Prerequisite: 714 1. The controller MUST contain the network topology information for 715 the deployed network topology. 716 2. The controller should have knowledge of the location of the 717 destination endpoint for which the path has to be provisioned. 718 This can be achieved through dynamic learning or static 719 provisioning. 721 3. Ensure that the default action for 'flow miss' in the Network Device is 722 'drop'. 724 Procedure: 726 1. Send a single traffic stream from test traffic generator TP1 to 727 TP2. 728 2. Install the flow entries to reach from test traffic generator TP1 729 to the test traffic generator TP2 through the controller's northbound 730 or management interface. 731 3. Wait for the arrival of the first traffic frame at the test traffic 732 generator TP2 or the expiry of the trial duration (Td). 733 4. Record the time when the proactive flow is provisioned in the 734 Controller (Tsf1) at the management plane test emulator interface 735 I2. 736 5. Record the time of the last flow provisioning message received 737 from the controller (Tdf1) at the forwarding plane test emulator 738 interface I1. 740 Measurement: 742 Proactive Path Provisioning Time Tr1 = Tdf1-Tsf1. 744 Average Proactive Path Provisioning Time = (Tr1 + Tr2 + Tr3 + .. + Trn) / Total Trials 748 Reporting Format: 750 The Proactive Path Provisioning Time results MUST be reported in the 751 format of a table with a row for each iteration. The last row of the 752 table indicates the Average Proactive Path Provisioning Time. 754 The report should capture the following information in addition to 755 the configuration parameters captured in section 5: 757 - Number of Network Devices in the path
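The proactive variant differs from the reactive sketch only in where the start timestamp is taken. A sketch under the same hypothetical-handle assumptions (nb_api.install_flow is an illustrative northbound call, not an API defined by this document):

   def proactive_path_provisioning_time(emulator, nb_api, tp1, tp2, td):
       tp1.send_stream(dst=tp2)
       nb_api.install_flow(src=tp1, dst=tp2)        # provision via northbound
       tp2.wait_first_frame(timeout=td)             # first frame at TP2 or Td expiry
       tsf1 = emulator.flow_provision_time_i2()     # Tsf1, observed at I2
       tdf1 = emulator.last_flow_response_time_i1() # Tdf1, observed at I1
       return tdf1 - tsf1                           # Tr1 = Tdf1 - Tsf1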
759 5.1.6. Reactive Path Provisioning Rate 761 Objective: 763 The maximum number of independent paths a controller can 764 concurrently establish between source and destination nodes 765 reactively, defined as the number of paths provisioned by the 766 controller(s) at its Southbound interface for the flow provisioning 767 requests received at its Southbound interface 768 between the start of the test and the expiry of the given trial 769 duration. 771 Reference Test Setup: 773 The test SHOULD use one of the test setups described in section 3.1 774 or section 3.2 of this document in combination with Appendix A. 776 Prerequisite: 778 1. The controller MUST contain the network topology information for 779 the deployed network topology. 780 2. The controller should have knowledge of the location of the 781 destination addresses for which the paths have to be provisioned. 782 This can be achieved through dynamic learning or static 783 provisioning. 784 3. Ensure that the default action for 'flow miss' in the Network Device 785 is configured to 'send to controller'. 786 4. Ensure that each Network Device in the path requires the controller 787 to make the forwarding decision while provisioning the entire 788 path. 790 Procedure: 792 1. Send traffic with unique source and destination addresses from 793 test traffic generator TP1. 794 2. Record the total number of unique traffic frames (Ndf) received at the 795 test traffic generator TP2 within the trial duration (Td). 797 Measurement: 799 Reactive Path Provisioning Rate Tr1 = Ndf / Td 803 Average Reactive Path Provisioning Rate = (Tr1 + Tr2 + Tr3 + .. + Trn) / Total Trials 807 Reporting Format: 809 The Reactive Path Provisioning Rate results MUST be reported in the 810 format of a table with a row for each iteration. The last row of the 811 table indicates the Average Reactive Path Provisioning Rate. 813 The report should capture the following information in addition to 814 the configuration parameters captured in section 5: 816 - Number of Network Devices in the path 818 - Offered rate 820 5.1.7. Proactive Path Provisioning Rate 822 Objective: 824 Measure the maximum rate of independent paths a controller can 825 concurrently establish between source and destination nodes 826 proactively, defined as the number of paths provisioned by the 827 controller(s) at its Southbound interface for the paths requested at 828 its Northbound interface between the start of the test and the 829 expiry of the given trial duration. The measurement is based on 830 dataplane observations of successful path activation. 832 Reference Test Setup: 834 The test SHOULD use one of the test setups described in section 3.1 835 or section 3.2 of this document in combination with Appendix A. 837 Prerequisite: 839 1. The controller MUST contain the network topology information for 840 the deployed network topology. 842 2. The controller should have knowledge of the location of the 843 destination addresses for which the paths have to be provisioned. 844 This can be achieved through dynamic learning or static 845 provisioning. 847 3. Ensure that the default action for 'flow miss' in the Network Device is 848 'drop'. 850 Procedure: 852 1. Send traffic continuously with unique source and destination 853 addresses from test traffic generator TP1. 855 2. Install corresponding flow entries to reach from the simulated 856 sources at the test traffic generator TP1 to the simulated 857 destinations at test traffic generator TP2 through the controller's 858 northbound or management interface. 860 3. Record the total number of unique traffic frames received (Ndf) at the 861 test traffic generator TP2 within the trial duration (Td). 863 Measurement: 865 Proactive Path Provisioning Rate Tr1 = Ndf / Td 869 Average Proactive Path Provisioning Rate = (Tr1 + Tr2 + Tr3 + .. + Trn) / Total Trials 873 Reporting Format: 875 The Proactive Path Provisioning Rate results MUST be reported in the 876 format of a table with a row for each iteration. The last row of the 877 table indicates the Average Proactive Path Provisioning Rate. 879 The report should capture the following information in addition to 880 the configuration parameters captured in section 5: 882 - Number of Network Devices in the path 884 - Offered rate
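Both provisioning-rate measurements (sections 5.1.6 and 5.1.7) reduce to the same dataplane observation. A sketch, assuming tp2.unique_frames_received is a hypothetical counter of frames with distinct source/destination addresses seen within the trial duration:

   def path_provisioning_rate(tp2, td):
       ndf = tp2.unique_frames_received(duration=td)
       return ndf / td                              # Tr1 = Ndf / Td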
886 5.1.8. Network Topology Change Detection Time 888 Objective: 890 The amount of time required for the controller to detect any changes 891 in the network topology, defined as the interval starting with the 892 notification message received by the controller(s) at its Southbound 893 interface, ending with the first topology rediscovery message sent 894 from the controller(s) at its Southbound interface. 896 Reference Test Setup: 898 The test SHOULD use one of the test setups described in section 3.1 899 or section 3.2 of this document in combination with Appendix A. 901 Prerequisite: 903 1. The controller MUST have successfully discovered the network 904 topology information for the deployed network topology. 906 2. The periodic network discovery operation should be configured to 907 twice the Trial duration (Td) value. 909 Procedure: 911 1. Trigger a topology change event by bringing down an active 912 Network Device in the topology. 914 2. Record the time when the first topology change notification is 915 sent to the controller (Tcn) at the forwarding plane test emulator 916 interface (I1). 918 3. Stop the trial when the controller sends the first topology re- 919 discovery message to the Network Device or upon the expiry of the trial 920 duration (Td). 922 4. Record the time when the first topology re-discovery message is 923 received from the controller (Tcd) at the forwarding plane test 924 emulator interface (I1). 926 Measurement: 928 Network Topology Change Detection Time Tr1 = Tcd-Tcn. 930 Average Network Topology Change Detection Time = (Tr1 + Tr2 + Tr3 + .. + Trn) / Total Trials 934 Reporting Format: 936 The Network Topology Change Detection Time results MUST be reported 937 in the format of a table with a row for each iteration. The last 938 row of the table indicates the average Network Topology Change 939 Detection Time. 940 5.2. Scalability 942 5.2.1. Control Session Capacity 944 Objective: 946 Measure the maximum number of control sessions the controller can 947 maintain, defined as the number of sessions that the controller can 948 accept from network devices, starting with the first control 949 session, ending with the last control session that the controller(s) 950 accepts at its Southbound interface. 952 Reference Test Setup: 954 The test SHOULD use one of the test setups described in section 3.1 955 or section 3.2 of this document in combination with Appendix A. 957 Procedure: 959 1. Establish a control connection with the controller from every Network 960 Device emulated in the forwarding plane test emulator. 961 2. Stop the trial when the controller starts dropping the control 962 connections. 963 3. Record the number of successful connections established with the 964 controller (CCn) at the forwarding plane test emulator. 966 Measurement: 968 Control Sessions Capacity = CCn. 970 Reporting Format: 972 The Control Session Capacity results MUST be reported in addition to 973 the configuration parameters captured in section 5.
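A sketch of the session-capacity trial above, assuming emulator.connect_device is a hypothetical call that returns True while the controller keeps accepting control sessions:

   def control_session_capacity(emulator, max_devices):
       ccn = 0
       for device_id in range(max_devices):
           if not emulator.connect_device(device_id):
               break                                # controller dropped it
           ccn += 1
       return ccn                                   # Control Sessions Capacity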
975 5.2.2. Network Discovery Size 977 Objective: 979 Measure the network size (number of nodes, links, and hosts) that a 980 controller can discover, defined as the size of the network that the 981 controller(s) can discover, starting from a network topology given 982 by the user for discovery, ending with the topology that the 983 controller(s) could successfully discover. 985 Reference Test Setup: 987 The test SHOULD use one of the test setups described in section 3.1 988 or section 3.2 of this document in combination with Appendix A. 990 Prerequisite: 992 1. The controller MUST support automatic network discovery. 993 2. The tester should be able to retrieve the discovered topology 994 information either through the controller's management interface or 995 its northbound interface. 997 Procedure: 999 1. Establish the network connections between the controller and the network 1000 nodes. 1001 2. Query the controller for the discovered network topology 1002 information and compare it with the deployed network topology 1003 information. 1004 3. Increase the number of nodes by 1 when the comparison is 1005 successful and repeat the trial. 1006 4. Decrease the number of nodes by 1 when the comparison fails and 1007 repeat the trial. 1008 5. Continue the trial until the comparison in step 4 is successful. 1009 6. Record the number of nodes for the last trial (Ns) where the 1010 topology comparison was successful. 1012 Measurement: 1014 Network Discovery Size = Ns. 1016 Reporting Format: 1018 The Network Discovery Size results MUST be reported in addition to 1019 the configuration parameters captured in section 5.
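The grow-then-shrink search in the procedure above can be sketched as follows; 'deploy' (stands up an n-node topology and returns its description) and controller_nb.get_topology are hypothetical helpers:

   def network_discovery_size(deploy, controller_nb, start_nodes=1):
       n, failed_before = start_nodes, False
       while True:
           deployed = deploy(num_nodes=n)
           if controller_nb.get_topology() == deployed:
               if failed_before:
                   return n                 # Ns: last successful comparison
               n += 1                       # success: add a node (step 3)
           else:
               failed_before = True
               n -= 1                       # failure: remove a node (step 4)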
1021 5.2.3. Forwarding Table Capacity 1023 Objective: 1025 Measure the maximum number of flow entries a controller can manage 1026 in its Forwarding table. 1028 Reference Test Setup: 1030 The test SHOULD use one of the test setups described in section 3.1 1031 or section 3.2 of this document in combination with Appendix A. 1033 Prerequisite: 1035 1. The controller's Forwarding table should be empty. 1036 2. The flow idle time MUST be set to a high or infinite value. 1037 3. The controller MUST have successfully completed network topology 1038 discovery. 1039 4. The tester should be able to retrieve the forwarding table information 1040 either through the controller's management interface or its northbound 1041 interface. 1043 Procedure: 1045 Reactive Flow Provisioning Mode: 1047 1. Send bi-directional traffic continuously with unique source and/or 1048 destination addresses from test traffic generators TP1 and TP2 at 1049 the controller's asynchronous message processing rate. 1050 2. Query the controller at a regular interval (e.g., 5 seconds) for 1051 the number of learnt flow entries from its northbound interface. 1052 3. Stop the trial when the retrieved value is constant for three 1053 consecutive iterations and record the value received from the last 1054 query (Nrp). 1056 Proactive Flow Provisioning Mode: 1058 1. Install unique flows continuously through the controller's northbound 1059 or management interface until a failure response is received from 1060 the controller. 1061 2. Record the total number of successful responses (Nrp). 1063 Note: 1065 Some controller designs for proactive flow provisioning mode may 1066 require the switch to send flow setup requests in order to generate 1067 flow setup responses. In such cases, it is recommended to generate 1068 bi-directional traffic for the provisioned flows. 1070 Measurement: 1072 Proactive Flow Provisioning Mode: 1074 Max Flow Entries = Total number of flows provisioned (Nrp) 1076 Reactive Flow Provisioning Mode: 1078 Max Flow Entries = Total number of learnt flow entries (Nrp) 1080 Forwarding Table Capacity = Max Flow Entries. 1082 Reporting Format: 1084 The Forwarding Table Capacity results MUST be tabulated with the 1085 following information in addition to the configuration parameters 1086 captured in section 5: 1088 - Provisioning Type (Proactive/Reactive)
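For the reactive mode, the stop condition above (a flow count constant across three consecutive queries) can be sketched as follows; controller_nb.flow_count is a hypothetical northbound query, with the test traffic assumed to be running:

   import time

   def forwarding_table_capacity_reactive(controller_nb, poll_interval=5):
       history = []
       while True:
           time.sleep(poll_interval)
           history.append(controller_nb.flow_count())
           if len(history) >= 3 and history[-1] == history[-2] == history[-3]:
               return history[-1]           # Nrp: max flow entries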
1090 5.3. Security 1092 5.3.1. Exception Handling 1094 Objective: 1096 Determine the effect of handling error packets and notifications on 1097 performance tests. The impact MUST be measured for the following 1098 performance tests: 1100 a. Path Provisioning Rate 1102 b. Path Provisioning Time 1104 c. Network Topology Change Detection Time 1106 Reference Test Setup: 1108 The test SHOULD use one of the test setups described in section 3.1 1109 or section 3.2 of this document in combination with Appendix A. 1111 Prerequisite: 1113 1. This test MUST be performed after obtaining the baseline 1114 measurement results for the above performance tests. 1116 2. Ensure that the invalid messages are not dropped by the 1117 intermediate devices connecting the controller and the Network 1118 Devices. 1120 Procedure: 1122 1. Perform the above listed performance tests and send 1% of the messages 1123 from the Asynchronous Message Processing Rate as invalid messages 1124 from the connected Network Devices emulated at the forwarding 1125 plane test emulator. 1126 2. Perform the above listed performance tests and send 2% of the messages 1127 from the Asynchronous Message Processing Rate as invalid messages 1128 from the connected Network Devices emulated at the forwarding 1129 plane test emulator. 1131 Note: 1133 Invalid messages can be frames with incorrect protocol fields or any 1134 form of failure notifications sent towards the controller. 1136 Measurement: 1138 Measurement MUST be done as per the equation defined in the 1139 corresponding performance test's measurement section. 1141 Reporting Format: 1143 The Exception Handling results MUST be reported in the format of a 1144 table with a column for each of the below parameters and a row for 1145 each of the listed performance tests: 1147 - Without Exceptions 1149 - With 1% Exceptions 1151 - With 2% Exceptions 1153 5.3.2. Denial of Service Handling 1155 Objective: 1157 Determine the effect of handling DoS attacks on performance and 1158 scalability tests. The impact MUST be measured for the following 1159 tests: 1161 a. Path Provisioning Rate 1163 b. Path Provisioning Time 1165 c. Network Topology Change Detection Time 1167 d. Network Discovery Size 1169 Reference Test Setup: 1171 The test SHOULD use one of the test setups described in section 3.1 1172 or section 3.2 of this document in combination with Appendix A. 1174 Prerequisite: 1176 This test MUST be performed after obtaining the baseline measurement 1177 results for the above tests. 1179 Procedure: 1181 1. Perform the listed tests and launch a DoS attack towards the 1182 controller while the trial is running. 1184 Note: 1186 DoS attacks can be launched on one of the following interfaces: 1188 a. Northbound (e.g., sending a huge number of requests on the 1189 northbound interface) 1190 b. Management (e.g., ping requests to the controller's management 1191 interface) 1192 c. Southbound (e.g., TCP SYN messages on the southbound interface) 1194 Measurement: 1196 Measurement MUST be done as per the equation defined in the 1197 corresponding test's measurement section. 1199 Reporting Format: 1201 The DoS Attacks Handling results MUST be reported in the format of a 1202 table with a column for each of the below parameters and a row for 1203 each of the listed tests: 1205 - Without any attacks 1207 - With attacks 1209 The report should also specify the nature of the attack and the 1210 interface on which it was launched. 1212 5.4. Reliability 1214 5.4.1. Controller Failover Time 1216 Objective: 1218 The time taken to switch from an active controller to the backup 1219 controller when the controllers work in redundancy mode and the 1220 active controller fails, defined as the interval starting when the 1221 active controller is brought down, ending with the first re-discovery 1222 message received from the new controller at its Southbound 1223 interface. 1225 Reference Test Setup: 1227 The test SHOULD use the test setup described in section 3.2 of this 1228 document in combination with Appendix A. 1230 Prerequisite: 1232 1. Master controller election MUST be completed. 1233 2. Nodes are connected to the controller cluster as per the 1234 Redundancy Mode (RM). 1235 3. The controller cluster should have successfully completed the 1236 network topology discovery. 1237 4. The Network Device MUST send all new flows to the controller when 1238 it receives them from the test traffic generator. 1239 5. The controller should have learnt the location of the destination (D1) at 1240 test traffic generator TP2. 1242 Procedure: 1244 1. Send uni-directional traffic continuously with incrementing 1245 sequence numbers and source addresses from test traffic generator 1246 TP1 at the rate that the controller can process without any drops. 1247 2. Ensure that there are no packet drops observed at the test traffic 1248 generator TP2. 1249 3. Bring down the active controller. 1250 4. Stop the trial when the first frame is received on TP2 after the failover 1251 operation. 1253 5. Record the time at which the last valid frame was received (T1) at 1254 test traffic generator TP2 before the sequence error and the time at which the 1255 first valid frame was received (T2) after the sequence error at TP2. 1257 Measurement: 1259 Controller Failover Time = (T2 - T1) 1261 Packet Loss = Number of missing packet sequences. 1263 Reporting Format: 1265 The Controller Failover Time results MUST be tabulated with the 1266 following information: 1268 - Number of cluster nodes 1270 - Redundancy mode 1272 - Controller Failover Time 1274 - Packet Loss 1276 - Cluster keep-alive interval 1278 5.4.2. Network Re-Provisioning Time 1280 Objective: 1282 The time taken by the Controller to re-route the traffic when there 1283 is a failure in the existing traffic paths, defined as the interval 1284 starting from the first failure notification message received by the 1285 controller, ending with the last flow re-provisioning message sent 1286 by the controller at its Southbound interface. 1288 Reference Test Setup: 1290 This test SHOULD use one of the test setups described in section 3.1 1291 or section 3.2 of this document in combination with Appendix A. 1293 Prerequisite: 1294 1. A network with the given number of nodes and redundant paths MUST be 1295 deployed. 1297 2. The controller MUST have knowledge of the location of the 1298 test traffic generators TP1 and TP2. 1299 3. Ensure that the controller does not pre-provision the alternate 1300 path in the emulated Network Devices at the forwarding plane test 1301 emulator. 1303 Procedure: 1305 1. Send bi-directional traffic continuously with unique sequence 1306 numbers from TP1 and TP2. 1307 2. Bring down a link or switch in the traffic path. 1308 3. Stop the trial after receiving the first frame after network re- 1309 convergence. 1310 4. Record the time of the last received frame prior to the frame loss at 1311 TP2 (TP2-Tlfr) and the time of the first frame received after the 1312 frame loss at TP2 (TP2-Tffr). There must be a gap in the sequence 1313 numbers of these frames. 1314 5. Record the time of the last received frame prior to the frame loss at 1315 TP1 (TP1-Tlfr) and the time of the first frame received after the 1316 frame loss at TP1 (TP1-Tffr). 1318 Measurement: 1320 Forward Direction Path Re-Provisioning Time (FDRT) 1321 = (TP2-Tffr - TP2-Tlfr) 1323 Reverse Direction Path Re-Provisioning Time (RDRT) 1324 = (TP1-Tffr - TP1-Tlfr) 1326 Network Re-Provisioning Time = (FDRT+RDRT)/2 1328 Forward Direction Packet Loss = Number of missing sequence frames 1329 at TP2 1331 Reverse Direction Packet Loss = Number of missing sequence frames 1332 at TP1 1334 Reporting Format: 1336 The Network Re-Provisioning Time results MUST be tabulated with the 1337 following information: 1339 - Number of nodes in the primary path 1341 - Number of nodes in the alternate path 1342 - Network Re-Provisioning Time 1344 - Forward Direction Packet Loss 1346 - Reverse Direction Packet Loss
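Both reliability tests above derive their times and losses from the first sequence-number discontinuity in a receive log. A minimal sketch of that analysis, assuming 'frames' is the receive log at a traffic endpoint as (timestamp, sequence-number) pairs in arrival order:

   def gap_after_failure(frames):
       for (t1, s1), (t2, s2) in zip(frames, frames[1:]):
           if s2 != s1 + 1:                 # first sequence discontinuity
               return t2 - t1, s2 - s1 - 1  # (T2 - T1, missing sequences)
       return 0.0, 0                        # no loss observed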
1348 6. References 1350 6.1. Normative References 1352 [I-D.sdn-controller-benchmark-term] Bhuvaneswaran, V., Basil, A., 1353 Tassinari, M., Manral, V., and S. Banks, "Terminology for 1354 Benchmarking SDN Controller Performance", 1355 draft-ietf-bmwg-sdn-controller-benchmark-term-06 1356 (work in progress), November 16, 2017. 1358 6.2. Informative References 1360 [OpenFlow Switch Specification] ONF, "OpenFlow Switch Specification", 1361 Version 1.4.0 (Wire Protocol 0x05), October 14, 2013. 1363 7. IANA Considerations 1365 This document does not have any IANA requests. 1367 8. Security Considerations 1369 The benchmarking tests described in this document are limited to the 1370 performance characterization of controllers in a lab environment with 1371 an isolated network. 1373 The benchmarking network topology will be an independent test setup 1374 and MUST NOT be connected to devices that may forward the test 1375 traffic into a production network, or misroute traffic to the test 1376 management network. 1378 Further, benchmarking is performed on a "black-box" basis, relying 1379 solely on measurements observable external to the controller. 1381 Special capabilities SHOULD NOT exist in the controller specifically 1382 for benchmarking purposes. Any implications for network security 1383 arising from the controller SHOULD be identical in the lab and in 1384 production networks. 1386 9. Acknowledgments 1388 The authors would like to thank the following individuals for 1389 providing their valuable comments on earlier versions of this 1390 document: Al Morton (AT&T), Sandeep Gangadharan (HP), M. Georgescu 1391 (NAIST), Andrew McGregor (Google), Scott Bradner (Harvard 1392 University), Jay Karthik (Cisco), Ramakrishnan (Dell), Khasanov 1393 Boris (Huawei), Brian Castelli (Spirent). 1395 This document was prepared using 2-Word-v2.0.template.dot. 1397 Appendix A. Example Test Topologies 1399 A.1. Leaf-Spine Topology - Three Tier Network Architecture 1401 +----------+ 1402 | SDN | 1403 | Node | (Core) 1404 +----------+ 1405 / \ 1406 / \ 1407 +------+ +------+ 1408 | SDN | | SDN | (Spine) 1409 | Node |.. | Node | 1410 +------+ +------+ 1411 / \ / \ 1412 / \ / \ 1413 l1 / / \ ln-1 1414 / / \ \ 1415 +--------+ +-------+ 1416 | SDN | | SDN | 1417 | Node |.. | Node | (Leaf) 1418 +--------+ +-------+ 1420 A.2. Leaf-Spine Topology - Two Tier Network Architecture 1422 +------+ +------+ 1423 | SDN | | SDN | (Spine) 1424 | Node |.. | Node | 1425 +------+ +------+ 1426 / \ / \ 1427 / \ / \ 1428 l1 / / \ ln-1 1429 / / \ \ 1430 +--------+ +-------+ 1431 | SDN | | SDN | 1432 | Node |.. | Node | (Leaf) 1433 +--------+ +-------+ 1435 Appendix B. Benchmarking Methodology using OpenFlow Controllers 1437 This section gives an overview of the OpenFlow protocol and provides a 1438 test methodology to benchmark SDN controllers supporting the OpenFlow 1439 southbound protocol. 1441 B.1. Protocol Overview 1443 OpenFlow is an open standard protocol defined by the Open Networking 1444 Foundation (ONF) [OpenFlow Switch Specification], used for 1445 programming the forwarding plane of network switches or routers via 1446 a centralized controller. 1448 B.2. Messages Overview 1450 The OpenFlow protocol supports three message types, namely controller- 1451 to-switch, asynchronous, and symmetric. 1453 Controller-to-switch messages are initiated by the controller and 1454 used to directly manage or inspect the state of the switch.
These 1455 messages allow controllers to query/configure the switch (Features, 1456 Configuration messages), collect information from the switch (Read-State 1457 message), send packets on a specified port of the switch (Packet-out 1458 message), and modify the switch forwarding plane and state (Modify- 1459 State, Role-Request messages, etc.). 1461 Asynchronous messages are generated by the switch without a 1462 controller soliciting them. These messages allow switches to update 1463 controllers to denote the arrival of a new flow (Packet-in), a switch 1464 state change (Flow-Removed, Port-status), and an error (Error). 1466 Symmetric messages are generated in either direction without 1467 solicitation. These messages allow switches and controllers to set 1468 up a connection (Hello), verify liveness (Echo), and offer 1469 additional functionalities (Experimenter). 1471 B.3. Connection Overview 1473 The OpenFlow channel is used to exchange OpenFlow messages between an 1474 OpenFlow switch and an OpenFlow controller. The OpenFlow channel 1475 connection can be set up using plain TCP or TLS. By default, a switch 1476 establishes a single connection with the SDN controller. A switch may 1477 establish multiple parallel connections to a single controller 1478 (auxiliary connections) or to multiple controllers to handle controller 1479 failures and load balancing. 1481 B.4. Performance Benchmarking Tests 1483 B.4.1. Network Topology Discovery Time 1485 Procedure: 1487 Network Devices OpenFlow SDN 1488 Controller Application 1489 | | | 1490 | | | 1492 | | | 1493 | | | 1495 | | | 1496 | OFPT_HELLO Exchange | | 1497 |<-------------------------->| | 1498 | | | 1499 | PACKET_OUT with LLDP | | 1500 | to all switches | | 1501 (Tm1)|<---------------------------| | 1502 | | | 1503 | PACKET_IN with LLDP| | 1504 | rcvd from switch-1| | 1505 |--------------------------->| | 1506 | | | 1507 | PACKET_IN with LLDP| | 1508 | rcvd from switch-2| | 1509 |--------------------------->| | 1510 | . | | 1511 | . | | 1512 | | | 1513 | PACKET_IN with LLDP| | 1514 | rcvd from switch-n| | 1515 (Tmn)|--------------------------->| | 1516 | | | 1517 | | | 1519 | | | 1520 | | Query the controller for| 1521 | | discovered n/w topo.(Di)| 1522 | |<--------------------------| 1523 | | | 1524 | | | 1526 | | | 1528 Legend: 1530 NB: Northbound 1531 SB: Southbound 1532 OF: OpenFlow 1533 Tm1: Time of reception of first LLDP message from controller 1534 Tmn: Time of last LLDP message sent to controller 1536 Discussion: 1538 The Network Topology Discovery Time can be obtained by calculating 1539 the time difference between the first PACKET_OUT with LLDP message 1540 received from the controller (Tm1) and the last PACKET_IN with LLDP 1541 message sent to the controller (Tmn) when the comparison is 1542 successful. 1544 B.4.2. Asynchronous Message Processing Time 1546 Procedure: 1548 Network Devices OpenFlow SDN 1549 Controller Application 1550 | | | 1551 |PACKET_IN with single | | 1552 |OFP match header | | 1553 (T0)|--------------------------->| | 1554 | | | 1555 | PACKET_OUT with single OFP | | 1556 | action header | | 1557 (R0)|<---------------------------| | 1558 | . | | 1559 | . | | 1560 | . | | 1561 | | | 1562 |PACKET_IN with single OFP | | 1563 |match header | | 1564 (Tn)|--------------------------->| | 1565 | | | 1566 | PACKET_OUT with single OFP | | 1567 | action header| | 1568 (Rn)|<---------------------------| | 1569 | | | 1570 | | | 1572 | | | 1573 | | | 1576 | | | 1578 Legend: 1580 T0,T1, ..Tn are PACKET_IN messages transmit timestamps.
B.4.2. Asynchronous Message Processing Time

   Procedure:

      Network Devices               OpenFlow            SDN
                                    Controller          Application
            |                           |                    |
            | PACKET_IN with single     |                    |
            | OFP match header          |                    |
       (T0) |-------------------------->|                    |
            |                           |                    |
            | PACKET_OUT with single OFP|                    |
            | action header             |                    |
       (R0) |<--------------------------|                    |
            |            .              |                    |
            |            .              |                    |
            |            .              |                    |
            |                           |                    |
            | PACKET_IN with single OFP |                    |
            | match header              |                    |
       (Tn) |-------------------------->|                    |
            |                           |                    |
            | PACKET_OUT with single OFP|                    |
            | action header             |                    |
       (Rn) |<--------------------------|                    |
            |                           |                    |
            |                           |                    |

   Legend:

      T0,T1, ..Tn are PACKET_IN message transmit timestamps.
      R0,R1, ..Rn are PACKET_OUT message receive timestamps.
      Nrx: Number of successful PACKET_IN/PACKET_OUT message
           exchanges

   Discussion:

      The Asynchronous Message Processing Time is obtained as

         ((R0-T0) + (R1-T1) + ... + (Rn-Tn)) / Nrx.

B.4.3. Asynchronous Message Processing Rate

   Procedure:

      Network Devices               OpenFlow            SDN
                                    Controller          Application
            |                           |                    |
            | PACKET_IN with single OFP |                    |
            | match headers             |                    |
            |-------------------------->|                    |
            |                           |                    |
            | PACKET_OUT with single    |                    |
            | OFP action headers        |                    |
            |<--------------------------|                    |
            |                           |                    |
            |            .              |                    |
            |            .              |                    |
            |            .              |                    |
            |                           |                    |
            | PACKET_IN with single OFP |                    |
            | match headers             |                    |
            |-------------------------->|                    |
            |                           |                    |
            | PACKET_OUT with single    |                    |
            | OFP action headers        |                    |
            |<--------------------------|                    |
            |                           |                    |
            |                           |                    |

   Note: Ntx1 on the initial trials should be greater than Nrx1;
   repeat the trials until Nrxn for two consecutive trials is equal
   to within (+/-P%).

   Discussion:

      This test measures two benchmarks using a single procedure.
      1) The Maximum Asynchronous Message Processing Rate is obtained
      by calculating the maximum PACKET_OUTs (Nrxn) received from the
      controller(s) across n trials.  2) The Loss-free Asynchronous
      Message Processing Rate is obtained by calculating the maximum
      PACKET_OUTs received from the controller(s) when the Loss Ratio
      equals zero.  The Loss Ratio is obtained as 1 - Nrxn/Ntxn.
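   A non-normative sketch of both calculations follows; the per-trial
   counters and the function names are assumed to be provided by the
   test emulator.

      def async_processing_time(tx_times, rx_times):
          # B.4.2: mean of (Ri - Ti) over the Nrx successful
          # PACKET_IN/PACKET_OUT exchanges.
          nrx = len(rx_times)
          return sum(r - t for t, r in zip(tx_times, rx_times)) / nrx

      def async_processing_rates(trials):
          # B.4.3: trials is a list of (Ntx, Nrx) pairs, one per
          # trial, each counted over the trial duration.
          max_rate = max(nrx for _, nrx in trials)
          loss_free = [nrx for ntx, nrx in trials if ntx == nrx]
          return max_rate, (max(loss_free) if loss_free else None)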
B.4.4. Reactive Path Provisioning Time

   Procedure:

      Test Traffic    Test Traffic    Network Devices    OpenFlow
      Generator TP1   Generator TP2                      Controller
           |               |                |                 |
           |               |G-ARP (D1)      |                 |
           |               |--------------->|                 |
           |               |                |                 |
           |               |                |PACKET_IN(D1)    |
           |               |                |---------------->|
           |               |                |                 |
           |Traffic (S1,D1)|                |                 |
    (Tsf1) |------------------------------->|                 |
           |               |                |                 |
           |               |                |PACKET_IN(S1,D1) |
           |               |                |---------------->|
           |               |                |                 |
           |               |                |   FLOW_MOD(D1)  |
           |               |                |<----------------|
           |               |                |                 |
           |               |Traffic (S1,D1) |                 |
           |        (Tdf1) |<---------------|                 |
           |               |                |                 |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of the first frame sent from TP1
      Tdf1: Time of the first frame received at TP2

   Discussion:

      The Reactive Path Provisioning Time can be obtained by finding
      the time difference between the receive and transmit times of
      the traffic (Tdf1 - Tsf1).

B.4.5. Proactive Path Provisioning Time

   Procedure:

      Test Traffic  Test Traffic   Network    OpenFlow       SDN
      Generator TP1 Generator TP2  Devices    Controller     Application
           |             |            |            |              |
           |             |G-ARP (D1)  |            |              |
           |             |----------->|            |              |
           |             |            |            |              |
           |             |            |PACKET_IN(D1)              |
           |             |            |----------->|              |
           |             |            |            |              |
           |Traffic (S1,D1)           |            |              |
    (Tsf1) |------------------------->|            |              |
           |             |            |            |              |
           |             |            |            | <Install flow|
           |             |            |            |  for S1,D1>  |
           |             |            |            |<-------------|
           |             |            |            |              |
           |             |            |FLOW_MOD(D1)|              |
           |             |            |<-----------|              |
           |             |            |            |              |
           |             |Traffic (S1,D1)          |              |
           |      (Tdf1) |<-----------|            |              |
           |             |            |            |              |

   Legend:

      G-ARP: Gratuitous ARP message.
      Tsf1: Time of the first frame sent from TP1
      Tdf1: Time of the first frame received at TP2

   Discussion:

      The Proactive Path Provisioning Time can be obtained by finding
      the time difference between the receive and transmit times of
      the traffic (Tdf1 - Tsf1).

B.4.6. Reactive Path Provisioning Rate

   Procedure:

      Test Traffic   Test Traffic    Network Devices      OpenFlow
      Generator TP1  Generator TP2                        Controller
           |               |               |                  |
           |               |               |                  |
           |               |               |                  |
           |               |G-ARP (D1..Dn) |                  |
           |               |-------------->|                  |
           |               |               |                  |
           |               |               |PACKET_IN(D1..Dn) |
           |               |               |----------------->|
           |               |               |                  |
           |Traffic (S1..Sn,D1..Dn)        |                  |
           |------------------------------>|                  |
           |               |               |                  |
           |               |               |PACKET_IN(S1..Sn, |
           |               |               |         D1..Dn)  |
           |               |               |----------------->|
           |               |               |                  |
           |               |               |   FLOW_MOD(S1)   |
           |               |               |<-----------------|
           |               |               |                  |
           |               |               |   FLOW_MOD(D1)   |
           |               |               |<-----------------|
           |               |               |                  |
           |               |               |   FLOW_MOD(S2)   |
           |               |               |<-----------------|
           |               |               |                  |
           |               |               |   FLOW_MOD(D2)   |
           |               |               |<-----------------|
           |               |               |        .         |
           |               |               |        .         |
           |               |               |                  |
           |               |               |   FLOW_MOD(Sn)   |
           |               |               |<-----------------|
           |               |               |                  |
           |               |               |   FLOW_MOD(Dn)   |
           |               |               |<-----------------|
           |               |               |                  |
           |               |Traffic (S1..Sn,                  |
           |               |       D1..Dn) |                  |
           |               |<--------------|                  |
           |               |               |                  |
           |               |               |                  |

   Legend:

      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
              Endpoint n

   Discussion:

      The Reactive Path Provisioning Rate can be obtained by finding
      the total number of frames received at TP2 after the trial
      duration.

B.4.7. Proactive Path Provisioning Rate

   Procedure:

      Test Traffic  Test Traffic   Network    OpenFlow       SDN
      Generator TP1 Generator TP2  Devices    Controller     Application
           |             |            |            |              |
           |             |G-ARP (D1..Dn)           |              |
           |             |----------->|            |              |
           |             |            |            |              |
           |             |            |PACKET_IN(D1..Dn)           |
           |             |            |----------->|              |
           |             |            |            |              |
           |Traffic (S1..Sn,D1..Dn)   |            |              |
    (Tsf1) |------------------------->|            |              |
           |             |            |            |              |
           |             |            |            | <Install flow|
           |             |            |            |  for S1..Sn, |
           |             |            |            |   D1..Dn>    |
           |             |            |            |<-------------|
           |             |            |            |              |
           |             |            |FLOW_MOD(S1)|              |
           |             |            |<-----------|              |
           |             |            |            |              |
           |             |            |FLOW_MOD(D1)|              |
           |             |            |<-----------|              |
           |             |            |     .      |              |
           |             |            |FLOW_MOD(Sn)|              |
           |             |            |<-----------|              |
           |             |            |            |              |
           |             |            |FLOW_MOD(Dn)|              |
           |             |            |<-----------|              |
           |             |            |            |              |
           |             |Traffic (S1..Sn,         |              |
           |             |     D1..Dn)|            |              |
           |      (Tdf1) |<-----------|            |              |
           |             |            |            |              |

   Legend:

      G-ARP: Gratuitous ARP
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
              Endpoint n

   Discussion:

      The Proactive Path Provisioning Rate can be obtained by finding
      the total number of frames received at TP2 after the trial
      duration.
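   The time and rate calculations for the four path provisioning
   benchmarks above reduce to simple arithmetic on emulator
   measurements, sketched non-normatively below.  Normalizing the rate
   by the trial duration to obtain frames per second is an assumption
   made here for convenience; the procedures above count frames
   received at TP2 over the trial duration.

      def path_provisioning_time(tsf1, tdf1):
          # B.4.4/B.4.5: difference between the receive time of the
          # first frame at TP2 (Tdf1) and the transmit time of the
          # first frame from TP1 (Tsf1).
          return tdf1 - tsf1

      def path_provisioning_rate(frames_rx_tp2, trial_duration):
          # B.4.6/B.4.7: frames received at TP2 over the trial,
          # normalized by the trial duration (assumed, in seconds).
          return frames_rx_tp2 / trial_duration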
B.4.8. Network Topology Change Detection Time

   Procedure:

      Network Devices               OpenFlow            SDN
                                    Controller          Application
            |                           |                    |
            |                           |                    |
            |                           |                    |
       (T0) |PORT_STATUS with link down |                    |
            | from S1                   |                    |
            |-------------------------->|                    |
            |                           |                    |
            |First PACKET_OUT with LLDP |                    |
            |to OF Switch               |                    |
       (T1) |<--------------------------|                    |
            |                           |                    |
            |                           |                    |

   Discussion:

      The Network Topology Change Detection Time can be obtained by
      finding the difference between the time the OpenFlow switch S1
      sends the PORT_STATUS message (T0) and the time that the
      OpenFlow controller sends the first topology re-discovery
      message (T1) to the OpenFlow switches.

B.5. Scalability

B.5.1. Control Sessions Capacity

   Procedure:

      Network Devices                       OpenFlow
                                            Controller
           |                                    |
           | OFPT_HELLO Exchange for Switch 1   |
           |<---------------------------------->|
           |                                    |
           | OFPT_HELLO Exchange for Switch 2   |
           |<---------------------------------->|
           |                 .                  |
           |                 .                  |
           |                 .                  |
           | OFPT_HELLO Exchange for Switch n   |
           |X<-------------------------------->X|
           |                                    |

   Discussion:

      The session establishment for Switch n fails, so the number of
      sessions established successfully, (n-1), provides the Control
      Sessions Capacity.

B.5.2. Network Discovery Size

   Procedure:

      Network Devices               OpenFlow            SDN
                                    Controller          Application
            |                           |                    |
            |                           |                    |
            |    OFPT_HELLO Exchange    |                    |
            |<------------------------->|                    |
            |                           |                    |
            |     PACKET_OUT with LLDP  |                    |
            |     to all switches       |                    |
            |<--------------------------|                    |
            |                           |                    |
            |     PACKET_IN with LLDP   |                    |
            |     rcvd from switch-1    |                    |
            |-------------------------->|                    |
            |                           |                    |
            |     PACKET_IN with LLDP   |                    |
            |     rcvd from switch-2    |                    |
            |-------------------------->|                    |
            |            .              |                    |
            |            .              |                    |
            |                           |                    |
            |     PACKET_IN with LLDP   |                    |
            |     rcvd from switch-n    |                    |
            |-------------------------->|                    |
            |                           |                    |
            |                           | Query the controller for
            |                           | discovered n/w topo.(N1)
            |                           |<-------------------|
            |                           |                    |
            |                           |                    |

   Legend:

      n/w topo: Network Topology
      OF: OpenFlow

   Discussion:

      The value of N1 provides the Network Discovery Size value.  The
      trial duration can be set to the stipulated time within which
      the user expects the controller to complete the discovery
      process.

B.5.3. Forwarding Table Capacity

   Procedure:

      Test Traffic      Network Devices       OpenFlow         SDN
      Generator TP1                           Controller       Application
           |                   |                  |                 |
           |                   |                  |                 |
           |G-ARP (H1..Hn)     |                  |                 |
           |------------------>|                  |                 |
           |                   |                  |                 |
           |                   |PACKET_IN(D1..Dn) |                 |
           |                   |----------------->|                 |
           |                   |                  |                 |
           |                   |                  |<Wait for 5 secs>|
           |                   |                  |  FWD table query|
           |                   |                  |<----------------|
           |                   |                  | FWD table value |
           |                   |                  |---------------->|(F1)
           |                   |                  |                 |
           |                   |                  |<Wait for 5 secs>|
           |                   |                  |  FWD table query|
           |                   |                  |<----------------|
           |                   |                  | FWD table value |
           |                   |                  |---------------->|(F2)
           |                   |                  |                 |
           |                   |                  |<Wait for 5 secs>|
           |                   |                  |  FWD table query|
           |                   |                  |<----------------|
           |                   |                  | FWD table value |
           |                   |                  |---------------->|(F3)
           |                   |                  |                 |

   Legend:

      G-ARP: Gratuitous ARP
      H1..Hn: Host 1 .. Host n
      FWD: Forwarding Table

   Discussion:

      Query the controller's forwarding table entries multiple times
      until three consecutive queries return the same value.  The last
      value retrieved from the controller provides the Forwarding
      Table Capacity value.  The query interval is user configurable;
      the 5 seconds shown in this example is for representational
      purposes.
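   The stopping rule in the discussion above can be stated as a short,
   non-normative polling loop.  The helper query_flow_count, the
   max_polls bound, and the use of the controller's northbound
   interface to read the entry count are assumptions for illustration.

      import time

      def forwarding_table_capacity(query_flow_count, interval=5,
                                    max_polls=1000):
          # B.5.3: poll the controller's forwarding-table entry count
          # until three consecutive queries return the same value;
          # that value is the Forwarding Table Capacity.  The 5-second
          # interval mirrors the example above and is configurable.
          history = []
          for _ in range(max_polls):
              history.append(query_flow_count())
              if len(history) >= 3 and \
                 history[-1] == history[-2] == history[-3]:
                  return history[-1]
              time.sleep(interval)
          raise TimeoutError("forwarding table did not stabilize")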
B.6. Security

B.6.1. Exception Handling

   Procedure:

      Test Traffic  Test Traffic   Network    OpenFlow       SDN
      Generator TP1 Generator TP2  Devices    Controller     Application
           |             |            |             |             |
           |             |G-ARP (D1..Dn)            |             |
           |             |----------->|             |             |
           |             |            |             |             |
           |             |            |PACKET_IN(D1..Dn)           |
           |             |            |------------>|             |
           |             |            |             |             |
           |Traffic (S1..Sn,D1..Dn)   |             |             |
           |------------------------->|             |             |
           |             |            |             |             |
           |             |            |PACKET_IN(S1..Sa,           |
           |             |            |     D1..Da) |             |
           |             |            |------------>|             |
           |             |            |             |             |
           |             |            |PACKET_IN(Sa+1..Sn,         |
           |             |            |  Da+1..Dn)  |             |
           |             |            |(1% incorrect OFP           |
           |             |            | match header)             |
           |             |            |------------>|             |
           |             |            |             |             |
           |             |            |FLOW_MOD(D1..Dn)            |
           |             |            |<------------|             |
           |             |            |             |             |
           |             |            |FLOW_MOD(S1..Sa)            |
           |             |            | OFP headers |             |
           |             |            |<------------|             |
           |             |            |             |             |
           |             |Traffic (S1..Sa,          |             |
           |             |    D1..Da) |             |             |
           |             |<-----------|             |             |
           |             |            |             |             |

   Note: The number of frames received at Test Port 2 is recorded as
   Rn1.  The trial is then repeated with 2% incorrect frames, and the
   count is recorded as Rn2.

   Legend:

      G-ARP: Gratuitous ARP
      PACKET_IN(Sa+1..Sn,Da+1..Dn): OpenFlow PACKET_IN with wrong
             version number
      Rn1: Total number of frames received at Test Port 2 with
           1% incorrect frames
      Rn2: Total number of frames received at Test Port 2 with
           2% incorrect frames

   Discussion:

      The traffic rate sent towards the OpenFlow switch from Test
      Port 1 should be 1% higher than the Path Programming Rate.  Rn1
      will provide the Path Provisioning Rate of the controller when
      handling 1% incorrect frames, and Rn2 will provide the Path
      Provisioning Rate of the controller when handling 2% incorrect
      frames.

      The procedure defined above provides test steps to determine the
      effect of handling error packets on the Path Programming Rate.
      The same procedure can be adopted to determine the effects on
      the other performance tests listed in this benchmarking
      document.

B.6.2. Denial of Service Handling

   Procedure:

      Test Traffic  Test Traffic   Network    OpenFlow       SDN
      Generator TP1 Generator TP2  Devices    Controller     Application
           |             |            |             |             |
           |             |G-ARP (D1..Dn)            |             |
           |             |----------->|             |             |
           |             |            |             |             |
           |             |            |PACKET_IN(D1..Dn)           |
           |             |            |------------>|             |
           |             |            |             |             |
           |Traffic (S1..Sn,D1..Dn)   |             |             |
           |------------------------->|             |             |
           |             |            |             |             |
           |             |            |PACKET_IN(S1..Sn,           |
           |             |            |     D1..Dn) |             |
           |             |            |------------>|             |
           |             |            |             |             |
           |             |            |TCP SYN attack              |
           |             |            |from a switch|             |
           |             |            |------------>|             |
           |             |            |             |             |
           |             |            |FLOW_MOD(D1..Dn)            |
           |             |            |<------------|             |
           |             |            |             |             |
           |             |            |FLOW_MOD(S1..Sn)            |
           |             |            | OFP headers |             |
           |             |            |<------------|             |
           |             |            |             |             |
           |             |Traffic (S1..Sn,          |             |
           |             |    D1..Dn) |             |             |
           |             |<-----------|             |             |
           |             |            |             |             |

   Legend:

      G-ARP: Gratuitous ARP

   Discussion:

      The TCP SYN attack should be launched from one of the
      emulated/simulated OpenFlow switches.  Rn1 provides the Path
      Programming Rate of the controller upon handling the denial-of-
      service attack.

      The procedure defined above provides test steps to determine the
      effect of handling denial of service on the Path Programming
      Rate.  The same procedure can be adopted to determine the
      effects on the other performance tests listed in this
      benchmarking document.
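   For the exception-handling procedure in B.6.1, the malformed share
   of the traffic can be built by corrupting the OpenFlow version
   field, which the controller must reject.  The sketch below is non-
   normative; the message-type constant follows the OpenFlow 1.4
   specification cited in B.1, while the function names, the chosen
   wrong version (0x01), and the bare-header simplification are
   assumptions for illustration.

      import struct

      OFPT_PACKET_IN = 10   # OpenFlow 1.4 message type for PACKET_IN

      def malformed_packet_in(xid, wrong_version=0x01):
          # Craft a bare PACKET_IN header carrying a wrong OpenFlow
          # version number so the controller treats it as an error.
          return struct.pack("!BBHI", wrong_version, OFPT_PACKET_IN,
                             8, xid)

      def frame_mix(good_frames, bad_ratio=0.01):
          # Replace bad_ratio of the messages with malformed ones:
          # every k-th message is malformed, where k = 1/bad_ratio
          # (k = 100 gives the 1% mix; k = 50 gives the 2% mix).
          k = int(1 / bad_ratio)
          return [malformed_packet_in(i) if i % k == 0 else f
                  for i, f in enumerate(good_frames, start=1)]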
B.7. Reliability

B.7.1. Controller Failover Time

   Procedure:

      Test Traffic  Test Traffic   Network    OpenFlow       SDN
      Generator TP1 Generator TP2  Devices    Controller     Application
           |             |            |              |            |
           |             |G-ARP (D1)  |              |            |
           |             |----------->|              |            |
           |             |            |              |            |
           |             |            |PACKET_IN(D1) |            |
           |             |            |------------->|            |
           |             |            |              |            |
           |Traffic (S1..Sn,D1)       |              |            |
           |------------------------->|              |            |
           |             |            |              |            |
           |             |            |PACKET_IN(S1,D1)           |
           |             |            |------------->|            |
           |             |            |              |            |
           |             |            |FLOW_MOD(D1)  |            |
           |             |            |<-------------|            |
           |             |            |FLOW_MOD(S1)  |            |
           |             |            |<-------------|            |
           |             |            |              |            |
           |             |Traffic (S1,D1)            |            |
           |             |<-----------|              |            |
           |             |            |              |            |
           |             |            |PACKET_IN(S2,D1)           |
           |             |            |------------->|            |
           |             |            |              |            |
           |             |            |FLOW_MOD(S2)  |            |
           |             |            |<-------------|            |
           |             |            |              |            |
           |             |            |PACKET_IN(Sn-1,D1)         |
           |             |            |------------->|            |
           |             |            |              |            |
           |             |            |PACKET_IN(Sn,D1)           |
           |             |            |------------->|            |
           |             |            |      .       |            |
           |             |            |      .       |            |
           |             |            |FLOW_MOD(Sn-1)|            |
           |             |            |   <-X--------|            |
           |             |            |              |            |
           |             |            |FLOW_MOD(Sn)  |            |
           |             |            |<-------------|            |
           |             |            |              |            |
           |             |Traffic (Sn,D1)            |            |
           |             |<-----------|              |            |
           |             |            |              |            |

   Legend:

      G-ARP: Gratuitous ARP.

   Discussion:

      The time difference between the last valid frame received before
      the traffic loss and the first frame received after the traffic
      loss will provide the controller failover time.

      If there is no frame loss during the controller failover, the
      failover time can be deemed negligible.

B.7.2. Network Re-Provisioning Time

   Procedure:

      Test Traffic  Test Traffic   Network    OpenFlow       SDN
      Generator TP1 Generator TP2  Devices    Controller     Application
           |             |             |              |           |
           |             |G-ARP (D1)   |              |           |
           |             |------------>|              |           |
           |             |             |              |           |
           |             |             |PACKET_IN(D1) |           |
           |             |             |------------->|           |
           | G-ARP (S1)  |             |              |           |
           |-------------------------->|              |           |
           |             |             |              |           |
           |             |             |PACKET_IN(S1) |           |
           |             |             |------------->|           |
           |             |             |              |           |
           |Traffic (S1,D1,Seq.no (1..n))             |           |
           |-------------------------->|              |           |
           |             |             |              |           |
           |             |             |PACKET_IN(S1,D1)          |
           |             |             |------------->|           |
           |             |             |              |           |
           |             |Traffic (D1,S1,             |           |
           |             |Seq.no (1..n))              |           |
           |             |------------>|              |           |
           |             |             |              |           |
           |             |             |PACKET_IN(D1,S1)          |
           |             |             |------------->|           |
           |             |             |              |           |
           |             |             |FLOW_MOD(D1)  |           |
           |             |             |<-------------|           |
           |             |             |              |           |
           |             |             |FLOW_MOD(S1)  |           |
           |             |             |<-------------|           |
           |             |             |              |           |
           |             |Traffic (S1,D1,             |           |
           |             |   Seq.no(1))|              |           |
           |             |<------------|              |           |
           |             |             |              |           |
           |             |Traffic (S1,D1,             |           |
           |             |   Seq.no(2))|              |           |
           |             |<------------|              |           |
           |             |             |              |           |
           | Traffic (D1,S1,Seq.no(1)) |              |           |
           |<--------------------------|              |           |
           |             |             |              |           |
           | Traffic (D1,S1,Seq.no(2)) |              |           |
           |<--------------------------|              |           |
           |             |             |              |           |
           | Traffic (D1,S1,Seq.no(x)) |              |           |
           |<--------------------------|              |           |
           |             |             |              |           |
           |             |Traffic (S1,D1,             |           |
           |             |   Seq.no(x))|              |           |
           |             |<------------|              |           |
           |             |             |              |           |
           |             |             |              |           |
           |             |             |PORT_STATUS(Sa)           |
           |             |             |------------->|           |
           |             |             |              |           |
           |             |Traffic (S1,D1,             |           |
           |             | Seq.no(n-1))|              |           |
           |             | X<----------|              |           |
           |             |             |              |           |
           | Traffic (D1,S1,Seq.no(n-1))              |           |
           | X-------------------------|              |           |
           |             |             |              |           |
           |             |             |FLOW_MOD(D1)  |           |
           |             |             |<-------------|           |
           |             |             |              |           |
           |             |             |FLOW_MOD(S1)  |           |
           |             |             |<-------------|           |
           |             |             |              |           |
           | Traffic (D1,S1,Seq.no(n)) |              |           |
           |<--------------------------|              |           |
           |             |             |              |           |
           |             |Traffic (S1,D1,             |           |
           |             |   Seq.no(n))|              |           |
           |             |<------------|              |           |
           |             |             |              |           |

   Legend:

      G-ARP: Gratuitous ARP message.
      Seq.no: Sequence number.
      Sa: Neighbor switch of the switch that was brought down.

   Discussion:

      The time difference between the last valid frame received before
      the traffic loss (frame with sequence number x) and the first
      frame received after the traffic loss (frame with sequence
      number n) will provide the network path re-provisioning time.

      Note that the trial is valid only when the controller provisions
      the alternate path upon network failure.
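   Both reliability benchmarks above reduce to locating the loss
   window in the sequence-numbered stream captured at the receiving
   test port.  The following non-normative sketch assumes the
   generator transmits consecutive sequence numbers at a steady rate
   and that the capture yields (timestamp, sequence number) records;
   the function name is illustrative.

      def loss_window_time(rx_records):
          # B.7.1/B.7.2: the failover or re-provisioning time is the
          # gap between the last frame received before the loss and
          # the first frame received after it.
          rx_records.sort()          # order by receive timestamp
          gaps = []
          for (t_prev, s_prev), (t_cur, s_cur) in zip(rx_records,
                                                      rx_records[1:]):
              if s_cur != s_prev + 1:          # sequence jump = loss
                  gaps.append(t_cur - t_prev)
          return max(gaps) if gaps else 0.0    # no loss: negligible

   For B.7.2 the frames before the loss carry sequence numbers up to
   x and the first frame after the loss carries n, matching the
   discussion above.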
Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett-Packard
   8000 Foothills Blvd
   Roseville, CA 95747

   Email: mark.tassinari@hpe.com

   Vishwas Manral
   Nano Sec
   CA

   Email: vishwas.manral@gmail.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive
   Sunnyvale, CA

   Email: sbanks@encrypted.net