idnits 2.17.1 draft-ietf-bmwg-sdn-controller-benchmark-term-06.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (November 16, 2017) is 2351 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Outdated reference: A later version (-09) exists of draft-ietf-bmwg-sdn-controller-benchmark-meth-06 Summary: 0 errors (**), 0 flaws (~~), 2 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 1 Internet-Draft Bhuvaneswaran Vengainathan 2 Network Working Group Anton Basil 3 Intended Status: Informational Veryx Technologies 4 Expires: May 16, 2018 Mark Tassinari 5 Hewlett-Packard 6 Vishwas Manral 7 Nano Sec 8 Sarah Banks 9 VSS Monitoring 10 November 16, 2017 12 Terminology for Benchmarking SDN Controller Performance 13 draft-ietf-bmwg-sdn-controller-benchmark-term-06 15 Abstract 17 This document defines terminology for benchmarking an SDN 18 controller's control plane performance. It extends the terminology 19 already defined in RFC 7426 for the purpose of benchmarking SDN 20 controllers. 
The terms provided in this document help to benchmark 21 an SDN controller's performance independent of the controller's 22 supported protocols and/or network services. A mechanism for 23 benchmarking the performance of SDN controllers is defined in the 24 companion methodology document. These two documents provide a 25 standard mechanism to measure and evaluate the performance of 26 various controller implementations. 28 Status of this Memo 30 This Internet-Draft is submitted in full conformance with the 31 provisions of BCP 78 and BCP 79. 33 Internet-Drafts are working documents of the Internet Engineering 34 Task Force (IETF). Note that other groups may also distribute 35 working documents as Internet-Drafts. The list of current Internet- 36 Drafts is at http://datatracker.ietf.org/drafts/current. 38 Internet-Drafts are draft documents valid for a maximum of six 39 months and may be updated, replaced, or obsoleted by other documents 40 at any time. It is inappropriate to use Internet-Drafts as reference 41 material or to cite them other than as "work in progress." 43 This Internet-Draft will expire on May 16, 2018. 45 Copyright Notice 47 Copyright (c) 2017 IETF Trust and the persons identified as the 48 document authors. All rights reserved. 50 This document is subject to BCP 78 and the IETF Trust's Legal 51 Provisions Relating to IETF Documents 52 (http://trustee.ietf.org/license-info) in effect on the date of 53 publication of this document. Please review these documents 54 carefully, as they describe your rights and restrictions with 55 respect to this document. Code Components extracted from this 56 document must include Simplified BSD License text as described in 57 Section 4.e of the Trust Legal Provisions and are provided without 58 warranty as described in the Simplified BSD License. 60 Table of Contents 62 1. Introduction...................................................4 63 2. Term Definitions...............................................4 64 2.1.
SDN Terms.................................................4 65 2.1.1. Flow.................................................4 66 2.1.2. Northbound Interface.................................5 67 2.1.3. Controller Forwarding Table..........................5 68 2.1.4. Proactive Flow Provisioning Mode.....................5 69 2.1.5. Reactive Flow Provisioning Mode......................6 70 2.1.6. Path.................................................6 71 2.1.7. Standalone Mode......................................6 72 2.1.8. Cluster/Redundancy Mode..............................7 73 2.1.9. Asynchronous Message.................................7 74 2.1.10. Test Traffic Generator..............................8 75 2.2. Test Configuration/Setup Terms............................8 76 2.2.1. Number of Network Devices............................8 77 2.2.2. Trials...............................................8 78 2.2.3. Trial Duration.......................................9 79 2.2.4. Number of Cluster nodes..............................9 80 2.3. Benchmarking Terms.......................................10 81 2.3.1. Performance.........................................10 82 2.3.1.1. Network Topology Discovery Time................10 83 2.3.1.2. Asynchronous Message Processing Time...........10 84 2.3.1.3. Asynchronous Message Processing Rate...........11 85 2.3.1.4. Reactive Path Provisioning Time................12 86 2.3.1.5. Proactive Path Provisioning Time...............12 87 2.3.1.6. Reactive Path Provisioning Rate................13 88 2.3.1.7. Proactive Path Provisioning Rate...............14 89 2.3.1.8. Network Topology Change Detection Time.........14 91 2.3.2. Scalability.........................................15 92 2.3.2.1. Control Sessions Capacity......................15 93 2.3.2.2. Network Discovery Size.........................15 94 2.3.2.3. Forwarding Table Capacity......................16 95 2.3.3. Security............................................16 96 2.3.3.1. 
Exception Handling.............................16 97 2.3.3.2. Denial of Service Handling.....................17 98 2.3.4. Reliability.........................................17 99 2.3.4.1. Controller Failover Time.......................17 100 2.3.4.2. Network Re-Provisioning Time...................18 101 3. Test Setup....................................................18 102 3.1. Test setup - Controller working in Standalone Mode.......18 103 3.2. Test setup - Controller working in Cluster Mode..........19 104 4. Test Coverage.................................................20 105 5. References....................................................21 106 5.1. Normative References.....................................21 107 5.2. Informative References...................................22 108 6. IANA Considerations...........................................22 109 7. Security Considerations.......................................22 110 8. Acknowledgements..............................................22 111 9. Authors' Addresses............................................22 113 1. Introduction 115 Software Defined Networking (SDN) is a networking architecture in 116 which network control is decoupled from the underlying forwarding 117 function and is placed in a centralized location called the SDN 118 controller. The SDN controller abstracts the underlying network and 119 offers a global view of the overall network to applications and 120 business logic. Thus, an SDN controller provides the flexibility to 121 program, control, and manage network behaviour dynamically through 122 standard interfaces. Since the network controls are logically 123 centralized, the need to benchmark the SDN controller performance 124 becomes significant. This document defines terms to benchmark 125 various controller designs for performance, scalability, reliability 126 and security, independent of northbound and southbound protocols. 
128 Conventions used in this document 130 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 131 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 132 document are to be interpreted as described in RFC 2119. 134 2. Term Definitions 136 2.1. SDN Terms 138 The terms defined in this section are extensions to the terms 139 defined in [RFC7426] "Software-Defined Networking (SDN): Layers and 140 Architecture Terminology". That RFC should be consulted before 141 attempting to make use of this document. 143 2.1.1. Flow 145 Definition: 146 The definition of Flow is the same as the microflow defined in [RFC4689] 147 Section 3.1.5. 149 Discussion: 150 A flow can be a set of packets having the same source address, destination 151 address, source port, and destination port, or any 152 combination of these. 154 Measurement Units: 155 N/A 157 See Also: 158 None 160 2.1.2. Northbound Interface 162 Definition: 163 The definition of northbound interface is the same as the Service Interface 164 defined in [RFC7426]. 166 Discussion: 167 The northbound interface allows SDN applications and orchestration 168 systems to program and retrieve the network information through the 169 SDN controller. 171 Measurement Units: 172 N/A 174 See Also: 175 None 177 2.1.3. Controller Forwarding Table 179 Definition: 180 A controller forwarding table contains flow entries learned in one 181 of two ways: first, entries could be learned from traffic received 182 through the data plane, or, second, these entries could be 183 statically provisioned on the controller, and distributed to devices 184 via the southbound interface. 186 Discussion: 187 The controller forwarding table has an aging mechanism which is 188 applied only to dynamically learned entries. 190 Measurement Units: 191 N/A 193 See Also: 194 None 196 2.1.4.
Proactive Flow Provisioning Mode 198 Definition: 199 Controller programming flows in Network Devices based on the flow 200 entries provisioned through the controller's northbound interface. 202 Discussion: 203 Orchestration systems and SDN applications can define the network 204 forwarding behaviour by programming the controller using proactive 205 flow provisioning. The controller can then program the Network 206 Devices with the pre-provisioned entries. 208 Measurement Units: 209 N/A 211 See Also: 212 None 214 2.1.5. Reactive Flow Provisioning Mode 216 Definition: 217 Controller programming flows in Network Devices based on the traffic 218 received from Network Devices through the controller's southbound 219 interface. 221 Discussion: 222 The SDN controller dynamically decides the forwarding behaviour 223 based on the incoming traffic from the Network Devices. The 224 controller then programs the Network Devices using Reactive Flow 225 Provisioning. 227 Measurement Units: 228 N/A 230 See Also: 231 None 233 2.1.6. Path 235 Definition: 236 Refer to Section 5 in [RFC2330]. 238 Discussion: 239 None 241 Measurement Units: 242 N/A 244 See Also: 245 None 247 2.1.7. Standalone Mode 249 Definition: 251 A single controller handling all control plane functionalities without 252 redundancy, or the ability to provide high availability and/or 253 automatic failover. 255 Discussion: 256 In standalone mode, one controller manages one or more network 257 domains. 259 Measurement Units: 260 N/A 262 See Also: 263 None 265 2.1.8. Cluster/Redundancy Mode 267 Definition: 268 A group of two or more controllers handling all control plane 269 functionalities. 271 Discussion: 272 In cluster mode, multiple controllers are teamed together for the 273 purpose of load sharing and/or high availability. The controllers in 274 the group may work in active/standby (master/slave) or active/active 275 (equal) mode depending on the intended purpose. 277 Measurement Units: 278 N/A 280 See Also: 281 None 283 2.1.9.
Asynchronous Message 285 Definition: 286 Any message from the Network Device that is generated for network 287 events. 289 Discussion: 290 Control messages such as flow setup request and response messages are 291 classified as asynchronous messages. The controller has to return a 292 response message. Note that the Network Device will not be in 293 blocking mode and continues to send/receive other control messages. 295 Measurement Units: 297 N/A 299 See Also: 300 None 302 2.1.10. Test Traffic Generator 304 Definition: 305 Test Traffic Generator is an entity that generates/receives network 306 traffic. 308 Discussion: 309 Test Traffic Generator can be an entity that interfaces with Network 310 Devices to send/receive real-time network traffic. 312 Measurement Units: 313 N/A 315 See Also: 316 None 318 2.2. Test Configuration/Setup Terms 320 2.2.1. Number of Network Devices 322 Definition: 323 The number of Network Devices present in the defined test topology. 325 Discussion: 326 The Network Devices defined in the test topology can be deployed 327 using real hardware or emulated in hardware platforms. 329 Measurement Units: 330 N/A 332 See Also: 333 None 335 2.2.2. Trials 337 Definition: 338 The number of times the test needs to be repeated. 340 Discussion: 342 The test needs to be repeated for multiple iterations to obtain a 343 reliable metric. It is recommended that this test SHOULD be 344 performed for at least 10 iterations to increase the confidence in 345 the measured results. 347 Measurement Units: 348 N/A 350 See Also: 351 None 353 2.2.3. Trial Duration 355 Definition: 356 Defines the duration of test trials for each iteration. 358 Discussion: 359 Trial duration forms the basis for the stop criteria for benchmarking 360 tests. A trial not completed within this time interval is considered 361 incomplete. 363 Measurement Units: 364 seconds 366 See Also: 367 None 369 2.2.4.
Number of Cluster nodes 371 Definition: 372 Defines the number of controllers present in the controller cluster. 374 Discussion: 375 This parameter is relevant when testing the controller performance 376 in clustering/teaming mode. The number of nodes in the cluster MUST 377 be greater than 1. 379 Measurement Units: 380 N/A 382 See Also: 383 None 385 2.3. Benchmarking Terms 387 This section defines metrics for benchmarking the SDN controller. 388 The procedure to measure the defined metrics is described in the 389 accompanying methodology document [I-D.sdn-controller-benchmark-meth]. 391 2.3.1. Performance 393 2.3.1.1. Network Topology Discovery Time 395 Definition: 396 The time taken by controller(s) to determine the complete network 397 topology, defined as the interval starting with the first discovery 398 message from the controller(s) at its Southbound interface, ending 399 with all features of the static topology determined. 401 Discussion: 402 Network topology discovery is key for the SDN controller to 403 provision and manage the network. So it is important to measure how 404 quickly the controller discovers the topology to learn the current 405 network state. This benchmark is obtained by presenting a network 406 topology (Tree, Mesh or Linear) with the given number of nodes to 407 the controller and waiting for the discovery process to complete. It is 408 expected that the controller supports a network discovery mechanism 409 and uses protocol messages for its discovery process. 411 Measurement Units: 412 milliseconds 414 See Also: 415 None 417 2.3.1.2. Asynchronous Message Processing Time 419 Definition: 420 The time taken by controller(s) to process an asynchronous message, 421 defined as the interval starting with an asynchronous message from a 422 network device after the discovery of all the devices by the 423 controller(s), ending with a response message from the controller(s) 424 at its Southbound interface.
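The interval defined above can be illustrated with a simple measurement loop. This is an informative sketch only, not part of the benchmark specification; send_async_message and wait_for_response are hypothetical hooks into a forwarding-plane test emulator and are not defined by this document or its companion methodology.

```python
import time

def async_message_processing_time(send_async_message, wait_for_response,
                                  num_samples=100):
    """Illustrative sketch: mean interval, in milliseconds, between an
    asynchronous message sent to the controller and the controller's
    response, averaged over num_samples messages.

    send_async_message() and wait_for_response() are hypothetical hooks
    into the test emulator's southbound interface (assumptions, not
    normative procedure).
    """
    samples_ms = []
    for _ in range(num_samples):
        start = time.monotonic()
        send_async_message()   # e.g., a new-flow arrival notification
        wait_for_response()    # block until the controller's reply arrives
        samples_ms.append((time.monotonic() - start) * 1000.0)
    return sum(samples_ms) / len(samples_ms)
```

Per the definition, samples would be taken only after the controller has discovered all devices, and each message must elicit an individual response for the interval to be well defined.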
426 Discussion: 427 For SDN to support dynamic network provisioning, it is important to 428 measure how quickly the controller responds to an event triggered 429 from the network. The event could be any notification message 430 generated by a Network Device upon arrival of a new flow, a link going 431 down, etc. This benchmark is obtained by sending asynchronous messages 432 from every connected Network Device one at a time for the defined 433 trial duration. This test assumes that the controller will respond 434 to the received asynchronous message. 436 Measurement Units: 437 milliseconds 439 See Also: 440 None 442 2.3.1.3. Asynchronous Message Processing Rate 444 Definition: 445 The number of responses to asynchronous messages (such as new flow 446 arrival notification message, etc.) for which the controller(s) 447 performed processing and replied with a valid and productive (non- 448 trivial) response message. 450 Discussion: 451 As SDN assures flexible network and agile provisioning, it is 452 important to measure how many network events the controller can 453 handle at a time. This benchmark is obtained by sending asynchronous 454 messages from every connected Network Device at the rate that the 455 controller can process without dropping. This test assumes that the 456 controller responds to all the received asynchronous messages (the 457 messages can be designed to elicit individual responses). 459 When sending asynchronous messages to the controller(s) at high 460 rates, some messages or responses may be discarded or corrupted and 461 require retransmission to controller(s). Therefore, a useful 462 qualification on Asynchronous Message Processing Rate is whether the 463 incoming message count equals the response count in each trial. 464 This is called the Loss-free Asynchronous Message Processing Rate. 466 Note that several of the early controller benchmarking tools did not 467 consider lost messages, and instead reported the maximum response 468 rate.
This is called the Maximum Asynchronous Message Processing 469 Rate. 471 To characterize both the Loss-free and Maximum Rates, a test could 472 begin the first trial by sending asynchronous messages to the 473 controller(s) at the maximum possible rate and record the message 474 reply rate and the message loss rate. The message sending rate is 475 then decreased by the step-size. The message reply rate and the 476 message loss rate are recorded. The test ends with a trial where the 477 controller(s) processes all the asynchronous messages sent without 478 loss. This is the Loss-free Asynchronous Message Processing Rate. 480 The trial where the controller(s) produced the maximum response rate 481 is the Maximum Asynchronous Message Processing Rate. Of course, the 482 first trial could begin at a low sending rate with zero lost 483 responses, and increase until the Loss-free and Maximum Rates are 484 discovered. 486 Measurement Units: 487 Messages processed per second. 489 See Also: 490 None 492 2.3.1.4. Reactive Path Provisioning Time 494 Definition: 495 The time taken by the controller to set up a path reactively between 496 source and destination node, defined as the interval starting with 497 the first flow provisioning request message received by the 498 controller(s), ending with the last flow provisioning response 499 message sent from the controller(s) at its Southbound interface. 501 Discussion: 502 As SDN supports agile provisioning, it is important to measure how 503 fast the controller provisions an end-to-end flow in the 504 dataplane. The benchmark is obtained by sending traffic from a 505 source endpoint to the destination endpoint, and finding the time 506 difference between the first and the last flow provisioning message 507 exchanged between the controller and the Network Devices for the 508 traffic path. 510 Measurement Units: 511 milliseconds. 513 See Also: 514 None 516 2.3.1.5.
Proactive Path Provisioning Time 518 Definition: 519 The time taken by the controller to set up a path proactively between 520 source and destination node, defined as the interval starting with 521 the first proactive flow provisioned in the controller(s) at its 522 Northbound interface, ending with the last flow provisioning 523 response message sent from the controller(s) at its Southbound 524 interface. 526 Discussion: 527 For SDN to support pre-provisioning of a traffic path from an 528 application, it is important to measure how fast the controller 529 provisions an end-to-end flow in the dataplane. The benchmark is 530 obtained by provisioning a flow on the controller's northbound interface 531 for the traffic to reach from a source to a destination endpoint, 532 and finding the time difference between the first and the last flow 533 provisioning message exchanged between the controller and the 534 Network Devices for the traffic path. 536 Measurement Units: 537 milliseconds. 539 See Also: 540 None 542 2.3.1.6. Reactive Path Provisioning Rate 544 Definition: 545 The maximum number of independent paths a controller can 546 concurrently establish between source and destination nodes 547 reactively, defined as the number of paths provisioned by the 548 controller(s) at its Southbound interface for the flow provisioning 549 requests received for path provisioning at its Southbound interface 550 between the start of the trial and the expiry of the given trial 551 duration. 553 Discussion: 554 For SDN to support agile traffic forwarding, it is important to 555 measure how many end-to-end flows the controller can set up in 556 the dataplane. This benchmark is obtained by sending traffic flows, each 557 with a unique source and destination pair, from the source Network 558 Device and determining the number of frames received at the 559 destination Network Device. 561 Measurement Units: 562 Paths provisioned per second. 564 See Also: 565 None 567 2.3.1.7.
Proactive Path Provisioning Rate 569 Definition: 570 Measure the maximum number of independent paths a controller can 571 concurrently establish between source and destination nodes 572 proactively, defined as the number of paths provisioned by the 573 controller(s) at its Southbound interface for the paths provisioned 574 in its Northbound interface between the start of the trial and the 575 expiry of the given trial duration. 577 Discussion: 578 For SDN to support pre-provisioning of traffic paths for a larger 579 network from the application, it is important to measure how many 580 end-to-end flows the controller can set up in the dataplane. 581 This benchmark is obtained by sending traffic flows, each with a unique 582 source and destination pair, from the source Network Device. Program 583 the flows on the controller's northbound interface for traffic to reach 584 from each of the unique source and destination pairs, and determine 585 the number of frames received at the destination Network Device. 587 Measurement Units: 588 Paths provisioned per second. 590 See Also: 591 None 593 2.3.1.8. Network Topology Change Detection Time 595 Definition: 596 The amount of time required for the controller to detect any changes 597 in the network topology, defined as the interval starting with the 598 notification message received by the controller(s) at its Southbound 599 interface, ending with the first topology rediscovery messages sent 600 from the controller(s) at its Southbound interface. 602 Discussion: 603 In order for the controller to support fast network failure 604 recovery, it is critical to measure how fast the controller is able 605 to detect any network-state change events. This benchmark is 606 obtained by triggering a topology change event and measuring the 607 time the controller takes to detect and initiate a topology re-discovery 608 process. 610 Measurement Units: 611 milliseconds 613 See Also: 615 None 617 2.3.2. Scalability 619 2.3.2.1.
Control Sessions Capacity 621 Definition: 622 Measure the maximum number of control sessions the controller can 623 maintain, defined as the number of sessions that the controller can 624 accept from network devices, starting with the first control 625 session, ending with the last control session that the controller(s) 626 accepts at its Southbound interface. 628 Discussion: 629 Measuring the controller's control sessions capacity is important to 630 determine the controller's system and bandwidth resource 631 requirements. This benchmark is obtained by establishing a control 632 session with the controller from each of the Network Devices until 633 session establishment fails. The number of sessions that were successfully established 634 will provide the Control Sessions Capacity. 636 Measurement Units: 637 N/A 639 See Also: 640 None 642 2.3.2.2. Network Discovery Size 644 Definition: 645 Measure the network size (number of nodes, links and hosts) that a 646 controller can discover, defined as the size of a network that the 647 controller(s) can discover, starting from a network topology given 648 by the user for discovery, ending with the topology that the 649 controller(s) could successfully discover. 651 Discussion: 652 For optimal network planning, it is key to measure the maximum 653 network size that the controller can discover. This benchmark is 654 obtained by presenting an initial set of Network Devices for 655 discovery to the controller. Based on the initial discovery, the 656 number of Network Devices is increased or decreased to determine the 657 maximum number of nodes that the controller can discover. 659 Measurement Units: 661 N/A 663 See Also: 664 None 666 2.3.2.3. Forwarding Table Capacity 668 Definition: 669 The maximum number of flow entries that a controller can manage in 670 its Forwarding Table.
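The capacity defined above can be sketched as a minimal probing loop; this is an informative illustration, not the normative procedure (which is given in the companion methodology document). install_flow_entry is a hypothetical hook into the test setup and is not defined by this document.

```python
def forwarding_table_capacity(install_flow_entry, max_attempts=1_000_000):
    """Illustrative sketch: count how many new flow entries the
    controller accepts before its forwarding table becomes full.

    install_flow_entry(i) is a hypothetical hook (an assumption, not
    part of this document) that pushes entry i via reactive or
    proactive flow provisioning and returns True while the controller
    still accepts entries.
    """
    accepted = 0
    for i in range(max_attempts):
        if not install_flow_entry(i):
            break  # forwarding table is full; stop probing
        accepted += 1
    return accepted  # maximum number of flow entries managed
```

The returned count corresponds to the measurement unit for this benchmark, the maximum number of flow entries managed.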
672 Discussion: 673 It is significant to measure the capacity of the controller's Forwarding 674 Table to determine the number of flows that the controller could forward 675 without flooding/dropping. This benchmark is obtained by 676 continuously presenting the controller with new flow entries through 677 reactive or proactive flow provisioning mode until the forwarding 678 table becomes full. The maximum number of flow entries that the controller 679 can hold in its Forwarding Table will provide the Forwarding Table 680 Capacity. 682 Measurement Units: 683 Maximum number of flow entries managed. 685 See Also: 686 None 688 2.3.3. Security 690 2.3.3.1. Exception Handling 692 Definition: 693 To determine the effect of handling error packets and notifications 694 on performance tests. 696 Discussion: 697 This benchmark test is to be performed after obtaining the baseline 698 performance of the performance tests defined in Section 2.3.1. This 699 benchmark determines the deviation from the baseline performance due 700 to the handling of error or failure messages from the connected 701 Network Devices. 703 Measurement Units: 704 N/A 706 See Also: 708 None 710 2.3.3.2. Denial of Service Handling 712 Definition: 713 To determine the effect of handling denial of service (DoS) attacks 714 on performance and scalability tests. 716 Discussion: 717 This benchmark test is to be performed after obtaining the baseline 718 performance of the performance and scalability tests defined in 719 Section 2.3.1 and Section 2.3.2. This benchmark determines the 720 deviation from the baseline performance due to the handling of 721 denial of service attacks on the controller. 723 Measurement Units: 724 Deviation of baseline metrics while handling Denial of Service 725 Attacks. 727 See Also: 728 None 730 2.3.4. Reliability 732 2.3.4.1.
Controller Failover Time 734 Definition: 735 The time taken to switch from an active controller to the backup 736 controller, when the controllers work in redundancy mode and the 737 active controller fails, defined as the interval starting with the 738 active controller being brought down, ending with the first re-discovery 739 message received from the new controller at its Southbound 740 interface. 742 Discussion: 743 This benchmark determines the impact of provisioning new flows when 744 controllers are teamed and the active controller fails. 746 Measurement Units: 747 milliseconds. 749 See Also: 750 None 752 2.3.4.2. Network Re-Provisioning Time 754 Definition: 755 The time taken to re-route the traffic by the Controller, when there 756 is a failure in existing traffic paths, defined as the interval 757 starting from the first failure notification message received by the 758 controller, ending with the last flow re-provisioning message sent 759 by the controller at its Southbound interface. 761 Discussion: 762 This benchmark determines the controller's re-provisioning ability 763 upon network failures. This benchmark test assumes the following: 764 i. The network topology supports a redundant path between the 765 source and destination endpoints. 766 ii. The controller does not pre-provision the redundant path. 768 Measurement Units: 769 milliseconds. 771 See Also: 772 None 774 3. Test Setup 776 This section provides common reference topologies that are later 777 referred to in individual tests defined in the companion methodology 778 document. 780 3.1.
Test setup - Controller working in Standalone Mode 782 +-----------------------------------------------------------+ 783 | Application Plane Test Emulator | 784 | | 785 | +-----------------+ +-------------+ | 786 | | Application | | Service | | 787 | +-----------------+ +-------------+ | 788 | | 789 +-----------------------------+(I2)-------------------------+ 790 | 791 | 792 | (Northbound interface) 793 +-------------------------------+ 794 | +----------------+ | 795 | | SDN Controller | | 796 | +----------------+ | 797 | | 798 | Device Under Test (DUT) | 799 +-------------------------------+ 800 | (Southbound interface) 801 | 802 | 803 +-----------------------------+(I1)-------------------------+ 804 | | 805 | +-----------+ +-----------+ | 806 | | Network |l1 ln-1| Network | | 807 | | Device 1 |---- .... ----| Device n | | 808 | +-----------+ +-----------+ | 809 | |l0 |ln | 810 | | | | 811 | | | | 812 | +---------------+ +---------------+ | 813 | | Test Traffic | | Test Traffic | | 814 | | Generator | | Generator | | 815 | | (TP1) | | (TP2) | | 816 | +---------------+ +---------------+ | 817 | | 818 | Forwarding Plane Test Emulator | 819 +-----------------------------------------------------------+ 821 Figure 1 823 3.2. 
Test setup - Controller working in Cluster Mode 825 +-----------------------------------------------------------+ 826 | Application Plane Test Emulator | 827 | | 828 | +-----------------+ +-------------+ | 829 | | Application | | Service | | 830 | +-----------------+ +-------------+ | 831 | | 832 +-----------------------------+(I2)-------------------------+ 833 | 834 | 835 | (Northbound interface) 836 +---------------------------------------------------------+ 837 | | 838 | ------------------ ------------------ | 839 | | SDN Controller 1 | <--E/W--> | SDN Controller n | | 840 | ------------------ ------------------ | 841 | | 842 | Device Under Test (DUT) | 843 +---------------------------------------------------------+ 844 | (Southbound interface) 845 | 846 | 847 +-----------------------------+(I1)-------------------------+ 848 | | 849 | +-----------+ +-----------+ | 850 | | Network |l1 ln-1| Network | | 851 | | Device 1 |---- .... ----| Device n | | 852 | +-----------+ +-----------+ | 853 | |l0 |ln | 854 | | | | 855 | | | | 856 | +---------------+ +---------------+ | 857 | | Test Traffic | | Test Traffic | | 858 | | Generator | | Generator | | 859 | | (TP1) | | (TP2) | | 860 | +---------------+ +---------------+ | 861 | | 862 | Forwarding Plane Test Emulator | 863 +-----------------------------------------------------------+ 865 Figure 2 867 4. Test Coverage 869 + -----------------------------------------------------------------+ 870 | | Speed | Scalability | Reliability | 871 + -----------+-------------------+---------------+-----------------+ 872 | | 1. Network Topolo-|1. Network | | 873 | | -gy Discovery | Discovery | | 874 | | | Size | | 875 | | 2. Reactive Path | | | 876 | | Provisioning | | | 877 | | Time | | | 878 | | | | | 879 | | 3. Proactive Path | | | 880 | | Provisioning | | | 881 | Setup | Time | | | 882 | | | | | 883 | | 4. Reactive Path | | | 884 | | Provisioning | | | 885 | | Rate | | | 886 | | | | | 887 | | 5. 
Proactive Path | | | 888 | | Provisioning | | | 889 | | Rate | | | 890 | | | | | 891 +------------+-------------------+---------------+-----------------+ 892 | | 1. Maximum |1. Control |1. Network | 893 | | Asynchronous | Sessions | Topology | 894 | | Message Proces-| Capacity | Change | 895 | | -sing Rate | | Detection Time| 896 | | |2. Forwarding | | 897 | | 2. Loss-Free | Table |2. Exception | 898 | | Asynchronous | Capacity | Handling | 899 | | Message Proces-| | | 900 | Operational| -sing Rate | | | 901 | | | |3. Denial of | 902 | | 3. Asynchronous | | Service | 903 | | Message Proces-| | Handling | 904 | | -sing Time | | | 905 | | | |4. Network Re- | 906 | | | | Provisioning | 907 | | | | Time | 908 | | | | | 909 +------------+-------------------+---------------+-----------------+ 910 | | | | | 911 | Tear Down | | |1. Controller | 912 | | | | Failover Time | 913 +------------+-------------------+---------------+-----------------+ 915 5. References 917 5.1. Normative References 919 [RFC7426] E. Haleplidis, K. Pentikousis, S. Denazis, J. Hadi Salim, 920 D. Meyer, O. Koufopavlou "Software-Defined Networking 921 (SDN): Layers and Architecture Terminology", RFC 7426, 922 January 2015. 924 [RFC4689] S. Poretsky, J. Perser, S. Erramilli, S. Khurana 925 "Terminology for Benchmarking Network-layer Traffic 926 Control Mechanisms", RFC 4689, October 2006. 928 [RFC2330] V. Paxson, G. Almes, J. Mahdavi, M. Mathis, 929 "Framework for IP Performance Metrics", RFC 2330, 930 May 1998. 932 [I-D.sdn-controller-benchmark-meth] Bhuvaneswaran.V, Anton Basil, 933 Mark.T, Vishwas Manral, Sarah Banks "Benchmarking 934 Methodology for SDN Controller Performance", 935 draft-ietf-bmwg-sdn-controller-benchmark-meth-06 936 (Work in progress), November 16, 2017 938 5.2. Informative References 940 [OpenFlow Switch Specification] ONF, "OpenFlow Switch Specification", 941 Version 1.4.0 (Wire Protocol 0x05), October 14, 2013. 943 6.
IANA Considerations 945 This document does not have any IANA requests. 947 7. Security Considerations 949 Security issues are not discussed in this memo. 951 8. Acknowledgements 953 The authors would like to acknowledge Al Morton (AT&T) for the 954 significant contributions to the earlier versions of this document. 955 The authors would like to thank the following individuals for 956 providing their valuable comments to the earlier versions of this 957 document: Sandeep Gangadharan (HP), M. Georgescu (NAIST), Andrew 958 McGregor (Google), Scott Bradner (Harvard University), Jay Karthik 959 (Cisco), Ramakrishnan (Dell), Khasanov Boris (Huawei). 961 9. Authors' Addresses 963 Bhuvaneswaran Vengainathan 964 Veryx Technologies Inc. 965 1 International Plaza, Suite 550 966 Philadelphia 967 PA 19113 969 Email: bhuvaneswaran.vengainathan@veryxtech.com 971 Anton Basil 972 Veryx Technologies Inc. 973 1 International Plaza, Suite 550 974 Philadelphia 975 PA 19113 977 Email: anton.basil@veryxtech.com 979 Mark Tassinari 980 Hewlett-Packard, 981 8000 Foothills Blvd, 982 Roseville, CA 95747 984 Email: mark.tassinari@hpe.com 986 Vishwas Manral 987 Nano Sec, CA 989 Email: vishwas.manral@gmail.com 991 Sarah Banks 992 VSS Monitoring 994 Email: sbanks@encrypted.net