Internet-Draft                                Bhuvaneswaran Vengainathan
Network Working Group                                        Anton Basil
Intended Status: Informational                        Veryx Technologies
Expires: November 25, 2018                                Mark Tassinari
                                                         Hewlett-Packard
                                                          Vishwas Manral
                                                                Nano Sec
                                                             Sarah Banks
                                                          VSS Monitoring
                                                            May 25, 2018

        Terminology for Benchmarking SDN Controller Performance
            draft-ietf-bmwg-sdn-controller-benchmark-term-10

Abstract

   This document defines terminology for benchmarking an SDN
   controller's control plane performance.  It extends the terminology
   already defined in RFC 7426 for the purpose of benchmarking SDN
   controllers.  The terms provided in this document help to benchmark
   an SDN controller's performance independent of the controller's
   supported protocols and/or network services.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on November 25, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents
   1. Introduction
   2. Term Definitions
      2.1. SDN Terms
         2.1.1. Flow
         2.1.2. Northbound Interface
         2.1.3. Southbound Interface
         2.1.4. Controller Forwarding Table
         2.1.5. Proactive Flow Provisioning Mode
         2.1.6. Reactive Flow Provisioning Mode
         2.1.7. Path
         2.1.8. Standalone Mode
         2.1.9. Cluster/Redundancy Mode
         2.1.10. Asynchronous Message
         2.1.11. Test Traffic Generator
         2.1.12. Leaf-Spine Topology
      2.2. Test Configuration/Setup Terms
         2.2.1. Number of Network Devices
         2.2.2. Trial Repetition
         2.2.3. Trial Duration
         2.2.4. Number of Cluster Nodes
      2.3. Benchmarking Terms
         2.3.1. Performance
            2.3.1.1. Network Topology Discovery Time
            2.3.1.2. Asynchronous Message Processing Time
            2.3.1.3. Asynchronous Message Processing Rate
            2.3.1.4. Reactive Path Provisioning Time
            2.3.1.5. Proactive Path Provisioning Time
            2.3.1.6. Reactive Path Provisioning Rate
            2.3.1.7. Proactive Path Provisioning Rate
            2.3.1.8. Network Topology Change Detection Time
         2.3.2. Scalability
            2.3.2.1. Control Sessions Capacity
            2.3.2.2. Network Discovery Size
            2.3.2.3. Forwarding Table Capacity
         2.3.3. Security
            2.3.3.1. Exception Handling
            2.3.3.2. Denial of Service Handling
         2.3.4. Reliability
            2.3.4.1. Controller Failover Time
            2.3.4.2. Network Re-Provisioning Time
   3. Test Setup
      3.1. Test setup - Controller working in Standalone Mode
      3.2. Test setup - Controller working in Cluster Mode
   4. Test Coverage
   5. References
      5.1. Normative References
   6. IANA Considerations
   7. Security Considerations
   8. Acknowledgements
   9. Authors' Addresses
1. Introduction

   Software Defined Networking (SDN) is a networking architecture in
   which network control is decoupled from the underlying forwarding
   function and is placed in a centralized location called the SDN
   controller.  The SDN controller provides an abstraction of the
   underlying network and offers a global view of the overall network
   to applications and business logic.  Thus, an SDN controller
   provides the flexibility to program, control, and manage network
   behaviour dynamically through northbound and southbound interfaces.
   Since the network controls are logically centralized, the need to
   benchmark SDN controller performance becomes significant.  This
   document defines terms to benchmark various controller designs for
   performance, scalability, reliability, and security, independent of
   northbound and southbound protocols.  A mechanism for benchmarking
   the performance of SDN controllers is defined in the companion
   methodology document [I-D.sdn-controller-benchmark-meth].  These two
   documents provide a method to measure and evaluate the performance
   of various controller implementations.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

2. Term Definitions

2.1. SDN Terms

   The terms defined in this section are extensions to the terms
   defined in [RFC7426], "Software-Defined Networking (SDN): Layers and
   Architecture Terminology".  That RFC should be consulted before
   attempting to make use of this document.

2.1.1. Flow

   Definition:
   The definition of Flow is the same as that of microflows, defined in
   [RFC4689], Section 3.1.5.

   Discussion:
   A flow can be a set of packets having the same source address,
   destination address, source port, and destination port, or any
   combination of these.

   Measurement Units:
   N/A

   See Also:
   None

2.1.2. Northbound Interface

   Definition:
   The definition of the northbound interface is the same as that of
   the Service Interface defined in [RFC7426].

   Discussion:
   The northbound interface allows SDN applications and orchestration
   systems to program and retrieve network information through the
   SDN controller.

   Measurement Units:
   N/A

   See Also:
   None

2.1.3. Southbound Interface

   Definition:
   The southbound interface is the application programming interface
   provided by the SDN controller to interact with the SDN nodes.

   Discussion:
   The southbound interface enables the controller to interact with the
   SDN nodes in the network to dynamically define the traffic
   forwarding behaviour.

   Measurement Units:
   N/A

   See Also:
   None

2.1.4. Controller Forwarding Table

   Definition:
   A controller forwarding table contains flow entries learned in one
   of two ways: first, entries can be learned from traffic received
   through the data plane, or second, entries can be statically
   provisioned on the controller and distributed to devices via the
   southbound interface.

   Discussion:
   The controller forwarding table has an aging mechanism which is
   applied only to dynamically learned entries.

   Measurement Units:
   N/A

   See Also:
   None
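   The aging behaviour described above can be illustrated with a short
   sketch.  The following Python fragment is a minimal illustration
   only, not part of any controller implementation; the entry format
   and the AGE_LIMIT value are assumptions made for the example.

      import time

      AGE_LIMIT = 30  # assumed aging interval in seconds

      # Each entry maps a flow key (here, a 5-tuple) to its origin and
      # activity timestamp.  'dynamic' entries were learned from data
      # plane traffic; 'static' entries were provisioned via the
      # northbound interface and are never aged out.
      forwarding_table = {
          ("10.0.0.1", "10.0.0.2", 6, 5000, 80):
              {"origin": "dynamic", "last_seen": time.time()},
          ("10.0.0.3", "10.0.0.4", 17, 53, 53):
              {"origin": "static", "last_seen": None},
      }

      def age_out(table, now):
          """Remove only dynamically learned entries that expired."""
          for key in list(table):
              entry = table[key]
              if (entry["origin"] == "dynamic"
                      and now - entry["last_seen"] > AGE_LIMIT):
                  del table[key]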
2.1.5. Proactive Flow Provisioning Mode

   Definition:
   A mode in which the controller programs flows in Network Devices
   based on the flow entries provisioned through the controller's
   northbound interface.

   Discussion:
   Network orchestration systems and SDN applications can define the
   network forwarding behaviour by programming the controller using
   proactive flow provisioning.  The controller can then program the
   Network Devices with the pre-provisioned entries.

   Measurement Units:
   N/A

   See Also:
   None

2.1.6. Reactive Flow Provisioning Mode

   Definition:
   A mode in which the controller programs flows in Network Devices
   based on the traffic received from Network Devices through the
   controller's southbound interface.

   Discussion:
   The SDN controller dynamically decides the forwarding behaviour
   based on the incoming traffic from the Network Devices.  The
   controller then programs the Network Devices using Reactive Flow
   Provisioning.

   Measurement Units:
   N/A

   See Also:
   None

2.1.7. Path

   Definition:
   Refer to Section 5 in [RFC2330].

   Discussion:
   None

   Measurement Units:
   N/A

   See Also:
   None

2.1.8. Standalone Mode

   Definition:
   A mode in which a single controller handles all control plane
   functionality, without redundancy or the ability to provide high
   availability and/or automatic failover.

   Discussion:
   In standalone mode, one controller manages one or more network
   domains.

   Measurement Units:
   N/A

   See Also:
   None

2.1.9. Cluster/Redundancy Mode

   Definition:
   A mode in which a group of two or more controllers handles all
   control plane functionality.

   Discussion:
   In cluster mode, multiple controllers are teamed together for the
   purpose of load sharing and/or high availability.  The controllers
   in the group may work in active/standby (master/slave) or
   active/active (equal) mode, depending on the intended purpose.

   Measurement Units:
   N/A

   See Also:
   None

2.1.10. Asynchronous Message

   Definition:
   Any message from the Network Device that is generated for network
   events.

   Discussion:
   Control messages like flow setup request and response messages are
   classified as asynchronous messages.  The controller has to return
   a response message.  Note that the Network Device will not be in
   blocking mode and continues to send/receive other control messages.

   Measurement Units:
   N/A

   See Also:
   None

2.1.11. Test Traffic Generator

   Definition:
   A Test Traffic Generator is an entity that generates/receives
   network traffic.

   Discussion:
   A Test Traffic Generator typically connects with Network Devices to
   send/receive real-time network traffic.

   Measurement Units:
   N/A

   See Also:
   None

2.1.12. Leaf-Spine Topology

   Definition:
   Leaf-Spine is a two-layered network topology, where a series of
   leaf switches that form the access layer are fully meshed to a
   series of spine switches that form the backbone layer.

   Discussion:
   In a Leaf-Spine topology, every leaf switch is connected to each of
   the spine switches in the topology.

   Measurement Units:
   N/A

   See Also:
   None

2.2. Test Configuration/Setup Terms

2.2.1. Number of Network Devices

   Definition:
   The number of Network Devices present in the defined test topology.

   Discussion:
   The Network Devices defined in the test topology can be deployed
   using real hardware or emulated in software.

   Measurement Units:
   Number of network devices

   See Also:
   None
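   For instance, a Leaf-Spine test topology (Section 2.1.12) with a
   given number of Network Devices could be emulated in software with
   a tool such as Mininet.  The Python sketch below is one possible
   way to build such a topology; the leaf and spine counts are
   arbitrary example values, and a real test would size them to the
   benchmark's needs.

      from mininet.topo import Topo

      class LeafSpine(Topo):
          """Two-layer Leaf-Spine topology: every leaf switch is
          connected to every spine switch (Section 2.1.12)."""
          def build(self, spines=2, leaves=4):
              spine_sw = [self.addSwitch('s%d' % i)
                          for i in range(1, spines + 1)]
              leaf_sw = [self.addSwitch('l%d' % i)
                         for i in range(1, leaves + 1)]
              # Full mesh between the access and backbone layers
              for leaf in leaf_sw:
                  for spine in spine_sw:
                      self.addLink(leaf, spine)
              # One emulated host per leaf to source/sink test traffic
              for i, leaf in enumerate(leaf_sw, start=1):
                  self.addLink(self.addHost('h%d' % i), leaf)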
2.2.2. Trial Repetition

   Definition:
   The number of times the test needs to be repeated.

   Discussion:
   The test needs to be repeated for multiple iterations to obtain a
   reliable metric.  This test SHOULD be performed for at least 10
   iterations to increase confidence in the measured results.

   Measurement Units:
   Number of trials

   See Also:
   None

2.2.3. Trial Duration

   Definition:
   Defines the duration of test trials for each iteration.

   Discussion:
   Trial duration forms the basis of the stop criteria for
   benchmarking tests.  A trial not completed within this time
   interval is considered incomplete.

   Measurement Units:
   Seconds

   See Also:
   None

2.2.4. Number of Cluster Nodes

   Definition:
   Defines the number of controllers present in the controller
   cluster.

   Discussion:
   This parameter is relevant when testing the controller performance
   in clustering/teaming mode.  The number of nodes in the cluster
   MUST be greater than 1.

   Measurement Units:
   Number of controller nodes

   See Also:
   None

2.3. Benchmarking Terms

   This section defines metrics for benchmarking the SDN controller.
   The procedures to measure the defined metrics are described in the
   accompanying methodology document
   [I-D.sdn-controller-benchmark-meth].

2.3.1. Performance

2.3.1.1. Network Topology Discovery Time

   Definition:
   The time taken by the controller(s) to determine the complete
   network topology, defined as the interval starting with the first
   discovery message from the controller(s) at its Southbound
   interface, ending with all features of the static topology
   determined.

   Discussion:
   Network topology discovery is key for the SDN controller to
   provision and manage the network, so it is important to measure how
   quickly the controller discovers the topology to learn the current
   network state.  This benchmark is obtained by presenting a network
   topology (Tree, Mesh, or Linear) with a given number of nodes to
   the controller and waiting for the discovery process to complete.
   It is expected that the controller supports a network discovery
   mechanism and uses protocol messages for its discovery process.

   Measurement Units:
   Milliseconds

   See Also:
   None

2.3.1.2. Asynchronous Message Processing Time

   Definition:
   The time taken by the controller(s) to process an asynchronous
   message, defined as the interval starting with an asynchronous
   message from a network device after the discovery of all the
   devices by the controller(s), ending with a response message from
   the controller(s) at its Southbound interface.

   Discussion:
   For SDN to support dynamic network provisioning, it is important to
   measure how quickly the controller responds to an event triggered
   from the network.  The event could be any notification message
   generated by a Network Device upon the arrival of a new flow, a
   link going down, etc.  This benchmark is obtained by sending
   asynchronous messages from every connected Network Device, one at a
   time, for the defined trial duration.  This test assumes that the
   controller will respond to the received asynchronous messages.

   Measurement Units:
   Milliseconds

   See Also:
   None
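   As an illustration, the interval defined above can be computed from
   timestamps taken on the test emulator when each asynchronous
   message is sent and when its response arrives.  The Python sketch
   below only shows the bookkeeping; the send and receive primitives
   are assumed to be supplied by the test traffic generator or
   southbound emulator.

      import time

      def mean_processing_time_ms(send_async_message,
                                  wait_for_response, count):
          """Mean Asynchronous Message Processing Time over 'count'
          request/response exchanges, in milliseconds.  The two
          callables are placeholders for test-tool primitives."""
          samples = []
          for _ in range(count):
              t_sent = time.monotonic()
              send_async_message()   # e.g., a flow setup request
              wait_for_response()    # blocks until the reply arrives
              samples.append((time.monotonic() - t_sent) * 1000.0)
          return sum(samples) / len(samples)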
2.3.1.3. Asynchronous Message Processing Rate

   Definition:
   The number of responses to asynchronous messages per second (such
   as new flow arrival notification messages, link down notifications,
   etc.) for which the controller(s) performed processing and replied
   with a valid and productive (non-trivial) response message.

   Discussion:
   As SDN assures a flexible network and agile provisioning, it is
   important to measure how many network events (such as new flow
   arrival notification messages, link down notifications, etc.) the
   controller can handle at a time.  This benchmark is measured by
   sending asynchronous messages from every connected Network Device
   at the rate that the controller processes without dropping them.
   This test assumes that the controller responds to all the received
   asynchronous messages (the messages can be designed to elicit
   individual responses).

   When sending asynchronous messages to the controller(s) at high
   rates, some messages or responses may be discarded or corrupted and
   require retransmission to the controller(s).  Therefore, a useful
   qualification on the Asynchronous Message Processing Rate is
   whether the incoming message count equals the response count in
   each trial.  This is called the Loss-free Asynchronous Message
   Processing Rate.

   Note that several of the early controller benchmarking tools did
   not consider lost messages and instead report the maximum response
   rate.  This is called the Maximum Asynchronous Message Processing
   Rate.

   To characterize both the Loss-free and Maximum Rates, a test could
   begin the first trial by sending asynchronous messages to the
   controller(s) at the maximum possible rate and record the message
   reply rate and the message loss rate.  The message sending rate is
   then decreased by the step-size, and the message reply rate and the
   message loss rate are recorded again.  The test ends with a trial
   where the controller(s) processes all the asynchronous messages
   sent without loss.  This is the Loss-free Asynchronous Message
   Processing Rate.

   The trial where the controller(s) produced the maximum response
   rate is the Maximum Asynchronous Message Processing Rate.  Of
   course, the first trial could instead begin at a low sending rate
   with zero lost responses, and increase the rate until the Loss-free
   and Maximum Rates are discovered.

   Measurement Units:
   Messages processed per second

   See Also:
   None
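   The downward rate search described above can be sketched as
   follows.  This is an illustration only: run_trial() is a
   placeholder for a routine that offers asynchronous messages at the
   given rate for one trial duration and reports the observed reply
   rate and message loss.

      def find_rates(run_trial, max_rate, step):
          """Search downward from max_rate for the Loss-free
          Asynchronous Message Processing Rate, recording the best
          reply rate seen (the Maximum rate) along the way."""
          best_reply_rate = 0.0
          rate = max_rate
          while rate > 0:
              reply_rate, loss_count = run_trial(rate)
              best_reply_rate = max(best_reply_rate, reply_rate)
              if loss_count == 0:
                  # First trial with no loss: loss-free rate found
                  return reply_rate, best_reply_rate
              rate -= step
          return None, best_reply_rate  # no loss-free rate in range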
2.3.1.4. Reactive Path Provisioning Time

   Definition:
   The time taken by the controller to set up a path reactively
   between a source and a destination node, defined as the interval
   starting with the first flow provisioning request message received
   by the controller(s), ending with the last flow provisioning
   response message sent from the controller(s) at its Southbound
   interface.

   Discussion:
   As SDN supports agile provisioning, it is important to measure how
   quickly the controller provisions an end-to-end flow in the data
   plane.  The benchmark is obtained by sending traffic from a source
   endpoint to the destination endpoint and finding the time
   difference between the first and the last flow provisioning message
   exchanged between the controller and the Network Devices for the
   traffic path.

   Measurement Units:
   Milliseconds

   See Also:
   None

2.3.1.5. Proactive Path Provisioning Time

   Definition:
   The time taken by the controller to proactively set up a path
   between a source and a destination node, defined as the interval
   starting with the first proactive flow provisioned in the
   controller(s) at its Northbound interface, ending with the last
   flow provisioning command message sent from the controller(s) at
   its Southbound interface.

   Discussion:
   For SDN to support pre-provisioning of a traffic path from an
   application, it is important to measure how quickly the controller
   provisions an end-to-end flow in the data plane.  The benchmark is
   obtained by provisioning a flow on the controller's northbound
   interface for the traffic to reach from a source to a destination
   endpoint, and finding the time difference between the first and the
   last flow provisioning message exchanged between the controller and
   the Network Devices for the traffic path.

   Measurement Units:
   Milliseconds

   See Also:
   None

2.3.1.6. Reactive Path Provisioning Rate

   Definition:
   The maximum number of independent paths a controller can
   concurrently establish per second between source and destination
   nodes reactively, defined as the number of paths provisioned per
   second by the controller(s) at its Southbound interface for the
   flow provisioning requests received for path provisioning at its
   Southbound interface between the start of the trial and the expiry
   of the given trial duration.

   Discussion:
   For SDN to support agile traffic forwarding, it is important to
   measure how many end-to-end flows the controller can set up in the
   data plane.  This benchmark is obtained by sending traffic flows,
   each with a unique source and destination pair, from the source
   Network Device and determining the number of frames received at the
   destination Network Device.

   Measurement Units:
   Paths provisioned per second

   See Also:
   None

2.3.1.7. Proactive Path Provisioning Rate

   Definition:
   The maximum number of independent paths a controller can
   concurrently establish per second between source and destination
   nodes proactively, defined as the number of paths provisioned per
   second by the controller(s) at its Southbound interface for the
   paths provisioned in its Northbound interface between the start of
   the trial and the expiry of the given trial duration.

   Discussion:
   For SDN to support pre-provisioning of traffic paths for a larger
   network from the application, it is important to measure how many
   end-to-end flows the controller can set up in the data plane.  This
   benchmark is obtained by sending traffic flows, each with a unique
   source and destination pair, from the source Network Device,
   programming the flows on the controller's northbound interface for
   traffic to reach from each of the unique source and destination
   pairs, and determining the number of frames received at the
   destination Network Device.

   Measurement Units:
   Paths provisioned per second

   See Also:
   None
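   As an illustration of driving this benchmark from the northbound
   interface, the Python fragment below pushes pre-provisioned flows
   to a controller over REST and reports the resulting provisioning
   rate.  The endpoint URL, payload format, and controller address are
   hypothetical placeholders; an actual test would use the documented
   northbound API of the controller under test.

      import time
      import requests  # third-party HTTP client library

      FLOW_URL = "https://203.0.113.10:8443/flows"  # hypothetical

      def provision_paths(pairs, trial_duration):
          """Push one flow per (src, dst) pair until the trial expires
          and return the rate in paths provisioned per second."""
          provisioned = 0
          deadline = time.monotonic() + trial_duration
          for src, dst in pairs:
              if time.monotonic() >= deadline:
                  break
              body = {"match": {"src": src, "dst": dst},
                      "action": "forward"}   # hypothetical payload
              resp = requests.put(FLOW_URL, json=body, verify=False)
              if resp.status_code in (200, 201):
                  provisioned += 1
          return provisioned / trial_duration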
2.3.1.8. Network Topology Change Detection Time

   Definition:
   The amount of time taken by the controller to detect any changes in
   the network topology, defined as the interval starting with the
   notification message received by the controller(s) at its
   Southbound interface, ending with the first topology rediscovery
   message sent from the controller(s) at its Southbound interface.

   Discussion:
   For the controller to support fast network failure recovery, it is
   critical to measure how quickly the controller detects any
   network-state change events.  This benchmark is obtained by
   triggering a topology change event and measuring the time the
   controller takes to detect it and initiate a topology re-discovery
   process.

   Measurement Units:
   Milliseconds

   See Also:
   None

2.3.2. Scalability

2.3.2.1. Control Sessions Capacity

   Definition:
   The maximum number of control sessions the controller can maintain,
   defined as the number of sessions that the controller can accept
   from network devices, starting with the first control session,
   ending with the last control session that the controller(s) accepts
   at its Southbound interface.

   Discussion:
   Measuring the controller's control sessions capacity is important
   for determining the controller's system and bandwidth resource
   requirements.  This benchmark is obtained by establishing a control
   session with the controller from each of the Network Devices until
   the controller fails to accept a new session.  The number of
   sessions that were successfully established provides the Control
   Sessions Capacity.

   Measurement Units:
   Maximum number of control sessions

   See Also:
   None

2.3.2.2. Network Discovery Size

   Definition:
   The network size (number of nodes and links) that a controller can
   discover, defined as the size of the network that the controller(s)
   can discover, starting from a network topology given by the user
   for discovery, ending with the topology that the controller(s)
   could successfully discover.

   Discussion:
   For optimal network planning, it is key to measure the maximum
   network size that the controller can discover.  This benchmark is
   obtained by presenting an initial set of Network Devices for
   discovery to the controller.  Based on the initial discovery, the
   number of Network Devices is increased or decreased to determine
   the maximum number of nodes that the controller can discover.

   Measurement Units:
   Maximum number of network nodes and links

   See Also:
   None

2.3.2.3. Forwarding Table Capacity

   Definition:
   The maximum number of flow entries that a controller can manage in
   its Forwarding Table.

   Discussion:
   It is significant to measure the capacity of the controller's
   Forwarding Table to determine the number of flows that the
   controller can forward without flooding or dropping.  This
   benchmark is obtained by continuously presenting the controller
   with new flow entries through the reactive or proactive flow
   provisioning mode until the Forwarding Table becomes full.  The
   maximum number of flow entries that the controller can hold in its
   Forwarding Table provides the Forwarding Table Capacity.

   Measurement Units:
   Maximum number of flow entries managed

   See Also:
   None

2.3.3. Security

2.3.3.1. Exception Handling

   Definition:
   To determine the effect of handling error packets and notifications
   on performance tests.

   Discussion:
   This benchmark test is to be performed after obtaining the baseline
   performance of the performance tests defined in Section 2.3.1.
   This benchmark determines the deviation from the baseline
   performance due to the handling of error or failure messages from
   the connected Network Devices.

   Measurement Units:
   Deviation of baseline metrics while handling Exceptions

   See Also:
   None
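   For the security benchmarks in this section, the reported value is
   a deviation from a baseline metric.  One simple way to express that
   deviation, shown below as a sketch rather than a mandated formula,
   is as a percentage of the baseline.

      def deviation_pct(baseline, observed):
          """Percentage deviation of an observed metric from its
          baseline, e.g., Asynchronous Message Processing Time
          measured while the controller is also handling error
          messages (Section 2.3.3.1)."""
          return (observed - baseline) / baseline * 100.0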
2.3.3.2. Denial of Service Handling

   Definition:
   To determine the effect of handling denial-of-service (DoS) attacks
   on performance and scalability tests.

   Discussion:
   This benchmark test is to be performed after obtaining the baseline
   performance of the performance and scalability tests defined in
   Sections 2.3.1 and 2.3.2.  This benchmark determines the deviation
   from the baseline performance due to the handling of
   denial-of-service attacks on the controller.

   Measurement Units:
   Deviation of baseline metrics while handling Denial of Service
   Attacks

   See Also:
   None

2.3.4. Reliability

2.3.4.1. Controller Failover Time

   Definition:
   The time taken to switch from an active controller to a backup
   controller when the controllers work in redundancy mode and the
   active controller fails, defined as the interval starting when the
   active controller is brought down, ending with the first
   re-discovery message received from the new controller at its
   Southbound interface.

   Discussion:
   This benchmark determines the impact on provisioning new flows when
   controllers are teamed and the active controller fails.

   Measurement Units:
   Milliseconds

   See Also:
   None

2.3.4.2. Network Re-Provisioning Time

   Definition:
   The time taken by the controller to re-route traffic when there is
   a failure in the existing traffic paths, defined as the interval
   starting from the first failure notification message received by
   the controller, ending with the last flow re-provisioning message
   sent by the controller at its Southbound interface.

   Discussion:
   This benchmark determines the controller's re-provisioning ability
   upon network failures.  This benchmark test assumes the following:
   1. The network topology supports a redundant path between the
      source and destination endpoints.
   2. The controller does not pre-provision the redundant path.

   Measurement Units:
   Milliseconds

   See Also:
   None
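   The interval defined above can be derived from a timestamped
   capture of southbound messages.  The sketch below computes the
   Network Re-Provisioning Time from such a capture; the message kind
   labels are example placeholders, not protocol names.

      def re_provisioning_time_ms(messages):
          """Network Re-Provisioning Time in milliseconds from a list
          of (timestamp_seconds, kind) tuples captured at the
          controller's Southbound interface."""
          t_failure = min(t for t, kind in messages
                          if kind == "failure_notification")
          t_last_mod = max(t for t, kind in messages
                           if kind == "flow_mod" and t >= t_failure)
          return (t_last_mod - t_failure) * 1000.0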
3. Test Setup

   This section provides common reference topologies that are later
   referred to in individual tests defined in the companion
   methodology document.

3.1. Test setup - Controller working in Standalone Mode

   +-----------------------------------------------------------+
   |              Application Plane Test Emulator              |
   |                                                           |
   |  +-----------------+     +-------------+                  |
   |  |   Application   |     |   Service   |                  |
   |  +-----------------+     +-------------+                  |
   |                                                           |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 | (Northbound interface)
                  +-------------------------------+
                  |  +----------------+           |
                  |  | SDN Controller |           |
                  |  +----------------+           |
                  |                               |
                  |   Device Under Test (DUT)     |
                  +-------------------------------+
                                 | (Southbound interface)
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                           |
   |  +-----------+     +-----------+                          |
   |  |  Network  |     |  Network  |                          |
   |  | Device 2  |--..-| Device n-1|                          |
   |  +-----------+     +-----------+                          |
   |       /  \            /  \                                |
   |      /    \          /    \                               |
   |  l0 /      \        /      \ ln                           |
   |    /        X      /        \                             |
   |  +-----------+     +-----------+                          |
   |  |  Network  |     |  Network  |                          |
   |  | Device 1  |.....| Device n  |                          |
   |  +-----------+     +-----------+                          |
   |        |                 |                                |
   |  +---------------+  +---------------+                     |
   |  | Test Traffic  |  | Test Traffic  |                     |
   |  |  Generator    |  |  Generator    |                     |
   |  |    (TP1)      |  |    (TP2)      |                     |
   |  +---------------+  +---------------+                     |
   |                                                           |
   |             Forwarding Plane Test Emulator                |
   +-----------------------------------------------------------+

                            Figure 1

3.2. Test setup - Controller working in Cluster Mode

   +-----------------------------------------------------------+
   |              Application Plane Test Emulator              |
   |                                                           |
   |  +-----------------+     +-------------+                  |
   |  |   Application   |     |   Service   |                  |
   |  +-----------------+     +-------------+                  |
   |                                                           |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 | (Northbound interface)
    +---------------------------------------------------------+
    |                                                         |
    |   ------------------           ------------------      |
    |  | SDN Controller 1 | <--E/W--> | SDN Controller n |    |
    |   ------------------           ------------------      |
    |                                                         |
    |                 Device Under Test (DUT)                 |
    +---------------------------------------------------------+
                                 | (Southbound interface)
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                           |
   |  +-----------+     +-----------+                          |
   |  |  Network  |     |  Network  |                          |
   |  | Device 2  |--..-| Device n-1|                          |
   |  +-----------+     +-----------+                          |
   |       /  \            /  \                                |
   |      /    \          /    \                               |
   |  l0 /      \        /      \ ln                           |
   |    /        X      /        \                             |
   |  +-----------+     +-----------+                          |
   |  |  Network  |     |  Network  |                          |
   |  | Device 1  |.....| Device n  |                          |
   |  +-----------+     +-----------+                          |
   |        |                 |                                |
   |  +---------------+  +---------------+                     |
   |  | Test Traffic  |  | Test Traffic  |                     |
   |  |  Generator    |  |  Generator    |                     |
   |  |    (TP1)      |  |    (TP2)      |                     |
   |  +---------------+  +---------------+                     |
   |                                                           |
   |             Forwarding Plane Test Emulator                |
   +-----------------------------------------------------------+

                            Figure 2
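   As one way to realize the Forwarding Plane Test Emulator in the
   setups above, the sketch below starts an emulated network in
   Mininet and attaches it to the controller DUT through the
   southbound interface.  The DUT address and port are placeholders,
   and Mininet is only one possible choice of emulator.

      from mininet.net import Mininet
      from mininet.node import RemoteController

      def start_forwarding_plane(topo):
          """Attach an emulated forwarding plane (e.g., the LeafSpine
          topology sketched in Section 2.2.1) to the controller DUT."""
          net = Mininet(topo=topo, build=False)
          # 192.0.2.10:6653 stands in for the DUT southbound address
          net.addController('c0', controller=RemoteController,
                            ip='192.0.2.10', port=6653)
          net.build()
          net.start()
          return net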
4. Test Coverage

   +------------+-------------------+----------------+----------------+
   | Lifecycle  |       Speed       |  Scalability   |  Reliability   |
   +------------+-------------------+----------------+----------------+
   |            | 1. Network        | 1. Network     |                |
   |            |    Topology       |    Discovery   |                |
   |            |    Discovery Time |    Size        |                |
   |            |                   |                |                |
   |            | 2. Reactive Path  |                |                |
   |            |    Provisioning   |                |                |
   |            |    Time           |                |                |
   |            |                   |                |                |
   |   Setup    | 3. Proactive Path |                |                |
   |            |    Provisioning   |                |                |
   |            |    Time           |                |                |
   |            |                   |                |                |
   |            | 4. Reactive Path  |                |                |
   |            |    Provisioning   |                |                |
   |            |    Rate           |                |                |
   |            |                   |                |                |
   |            | 5. Proactive Path |                |                |
   |            |    Provisioning   |                |                |
   |            |    Rate           |                |                |
   +------------+-------------------+----------------+----------------+
   |            | 1. Maximum        | 1. Control     | 1. Network     |
   |            |    Asynchronous   |    Sessions    |    Topology    |
   |            |    Message        |    Capacity    |    Change      |
   |            |    Processing     |                |    Detection   |
   |            |    Rate           | 2. Forwarding  |    Time        |
   |            |                   |    Table       |                |
   |            | 2. Loss-Free      |    Capacity    | 2. Exception   |
   | Operational|    Asynchronous   |                |    Handling    |
   |            |    Message        |                |                |
   |            |    Processing     |                | 3. Denial of   |
   |            |    Rate           |                |    Service     |
   |            |                   |                |    Handling    |
   |            | 3. Asynchronous   |                |                |
   |            |    Message        |                | 4. Network Re- |
   |            |    Processing     |                |    Provisioning|
   |            |    Time           |                |    Time        |
   +------------+-------------------+----------------+----------------+
   |            |                   |                | 1. Controller  |
   | Tear Down  |                   |                |    Failover    |
   |            |                   |                |    Time        |
   +------------+-------------------+----------------+----------------+

5. References

5.1. Normative References

   [RFC7426]  Haleplidis, E., Pentikousis, K., Denazis, S., Hadi
              Salim, J., Meyer, D., and O. Koufopavlou,
              "Software-Defined Networking (SDN): Layers and
              Architecture Terminology", RFC 7426, January 2015.

   [RFC4689]  Poretsky, S., Perser, J., Erramilli, S., and S. Khurana,
              "Terminology for Benchmarking Network-layer Traffic
              Control Mechanisms", RFC 4689, October 2006.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", RFC 2119, March 1997.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", RFC 8174, May 2017.

   [I-D.sdn-controller-benchmark-meth]
              Vengainathan, B., Basil, A., Tassinari, M., Manral, V.,
              and S. Banks, "Benchmarking Methodology for SDN
              Controller Performance",
              draft-ietf-bmwg-sdn-controller-benchmark-meth-09
              (Work in Progress), May 25, 2018.

6. IANA Considerations

   This document does not have any IANA requests.

7. Security Considerations

   Security issues are not discussed in this memo.

8. Acknowledgements

   The authors would like to acknowledge Al Morton (AT&T) for his
   significant contributions to the earlier versions of this document.
   The authors would like to thank the following individuals for
   providing their valuable comments on the earlier versions of this
   document: Sandeep Gangadharan (HP), M. Georgescu (NAIST), Andrew
   McGregor (Google), Scott Bradner, Jay Karthik (Cisco), Ramakrishnan
   (Dell), and Khasanov Boris (Huawei).

9. Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett-Packard
   8000 Foothills Blvd
   Roseville, CA 95747

   Email: mark.tassinari@hpe.com

   Vishwas Manral
   Nano Sec, CA

   Email: vishwas.manral@gmail.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive
   Sunnyvale, CA

   Email: sbanks@encrypted.net