1 Internet-Draft Bhuvaneswaran Vengainathan 2 Network Working Group Anton Basil 3 Intended Status: Informational Veryx Technologies 4 Expires: June 8, 2017 Mark Tassinari 5 Hewlett-Packard 6 Vishwas Manral 7 Nano Sec 8 Sarah Banks 9 VSS Monitoring 10 January 8, 2017 12 Terminology for Benchmarking SDN Controller Performance 13 draft-ietf-bmwg-sdn-controller-benchmark-term-03 15 Abstract 17 This document defines terminology for benchmarking an SDN 18 controller's control plane performance. It extends the terminology 19 already defined in RFC 7426 for the purpose of benchmarking SDN 20 controllers. The terms provided in this document help to benchmark an 21 SDN controller's performance independent of the controller's 22 supported protocols and/or network services. A mechanism for 23 benchmarking the performance of SDN controllers is defined in the 24 companion methodology document. These two documents provide a 25 standard mechanism to measure and evaluate the performance of 26 various controller implementations. 28 Status of this Memo 30 This Internet-Draft is submitted in full conformance with the 31 provisions of BCP 78 and BCP 79. 33 Internet-Drafts are working documents of the Internet Engineering 34 Task Force (IETF). Note that other groups may also distribute 35 working documents as Internet-Drafts. The list of current Internet- 36 Drafts is at http://datatracker.ietf.org/drafts/current. 38 Internet-Drafts are draft documents valid for a maximum of six 39 months and may be updated, replaced, or obsoleted by other documents 40 at any time. It is inappropriate to use Internet-Drafts as reference 41 material or to cite them other than as "work in progress." 43 This Internet-Draft will expire on June 8, 2017. 45 Copyright Notice 47 Copyright (c) 2017 IETF Trust and the persons identified as the 48 document authors. All rights reserved.
50 This document is subject to BCP 78 and the IETF Trust's Legal 51 Provisions Relating to IETF Documents 52 (http://trustee.ietf.org/license-info) in effect on the date of 53 publication of this document. Please review these documents 54 carefully, as they describe your rights and restrictions with 55 respect to this document. Code Components extracted from this 56 document must include Simplified BSD License text as described in 57 Section 4.e of the Trust Legal Provisions and are provided without 58 warranty as described in the Simplified BSD License. 60 Table of Contents 62 1. Introduction...................................................4 63 2. Term Definitions...............................................4 64 2.1. SDN Terms.................................................4 65 2.1.1. Flow.................................................4 66 2.1.2. Northbound Interface.................................5 67 2.1.3. Controller Forwarding Table..........................5 68 2.1.4. Proactive Flow Provisioning Mode.....................5 69 2.1.5. Reactive Flow Provisioning Mode......................6 70 2.1.6. Path.................................................6 71 2.1.7. Standalone Mode......................................6 72 2.1.8. Cluster/Redundancy Mode..............................7 73 2.1.9. Asynchronous Message.................................7 74 2.1.10. Test Traffic Generator..............................8 75 2.2. Test Configuration/Setup Terms............................8 76 2.2.1. Number of Network Devices............................8 77 2.2.2. Test Iterations......................................8 78 2.2.3. Test Duration........................................9 79 2.2.4. Number of Cluster nodes..............................9 80 2.3. Benchmarking Terms.......................................10 81 2.3.1. Performance.........................................10 82 2.3.1.1. Network Topology Discovery Time................10 83 2.3.1.2. 
Asynchronous Message Processing Time...........10 84 2.3.1.3. Asynchronous Message Processing Rate...........11 85 2.3.1.4. Reactive Path Provisioning Time................11 86 2.3.1.5. Proactive Path Provisioning Time...............12 87 2.3.1.6. Reactive Path Provisioning Rate................12 88 2.3.1.7. Proactive Path Provisioning Rate...............13 89 2.3.1.8. Network Topology Change Detection Time.........13 90 2.3.2. Scalability.........................................14 91 2.3.2.1. Control Sessions Capacity......................14 92 2.3.2.2. Network Discovery Size.........................15 93 2.3.2.3. Forwarding Table Capacity......................15 94 2.3.3. Security............................................16 95 2.3.3.1. Exception Handling.............................16 96 2.3.3.2. Denial of Service Handling.....................16 97 2.3.4. Reliability.........................................17 98 2.3.4.1. Controller Failover Time.......................17 99 2.3.4.2. Network Re-Provisioning Time...................17 100 3. Test Setup....................................................18 101 3.1. Test setup - Controller working in Standalone Mode.......18 102 3.2. Test setup - Controller working in Cluster Mode..........19 103 4. Test Coverage.................................................20 104 5. References....................................................21 105 5.1. Normative References.....................................21 106 5.2. Informative References...................................21 107 6. IANA Considerations...........................................21 108 7. Security Considerations.......................................22 109 8. Acknowledgements..............................................22 110 9. Authors' Addresses............................................22 112 1. 
Introduction 114 Software Defined Networking (SDN) is a networking architecture in 115 which network control is decoupled from the underlying forwarding 116 function and is placed in a centralized location called the SDN 117 controller. The SDN controller abstracts the underlying network and 118 offers a global view of the overall network to applications and 119 business logic. Thus, an SDN controller provides the flexibility to 120 program, control, and manage network behaviour dynamically through 121 standard interfaces. Since the network controls are logically 122 centralized, the need to benchmark SDN controller performance 123 becomes significant. This document defines terms to benchmark 124 various controller designs for performance, scalability, reliability, 125 and security, independent of northbound and southbound protocols. 127 Conventions used in this document 129 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 130 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 131 document are to be interpreted as described in RFC 2119. 133 2. Term Definitions 135 2.1. SDN Terms 137 The terms defined in this section are extensions to the terms 138 defined in RFC 7426 "Software-Defined Networking (SDN): Layers and 139 Architecture Terminology". That RFC should be consulted before 140 attempting to make use of this document. 142 2.1.1. Flow 144 Definition: 145 The definition of Flow is the same as that of microflows defined in RFC 4689, 146 Section 3.1.5. 148 Discussion: 149 A flow can be a set of packets having the same source address, destination 150 address, source port, and destination port, or any 151 combination of these. 153 Measurement Units: 154 N/A 156 See Also: 157 None 159 2.1.2. Northbound Interface 161 Definition: 162 The definition of northbound interface is the same as that of the Service Interface 163 defined in RFC 7426.
165 Discussion: 166 The northbound interface allows SDN applications and orchestration 167 systems to program and retrieve the network information through the 168 SDN controller. 170 Measurement Units: 171 N/A 173 See Also: 174 None 176 2.1.3. Controller Forwarding Table 178 Definition: 179 A controller forwarding table contains flow entries learned in one 180 of two ways: first, entries could be learned from traffic received 181 through the data plane, or, second, these entries could be 182 statically provisioned on the controller, and distributed to devices 183 via the southbound interface. 185 Discussion: 186 The controller forwarding table has an aging mechanism, which is 187 applied only to dynamically learned entries. 189 Measurement Units: 190 N/A 192 See Also: 193 None 195 2.1.4. Proactive Flow Provisioning Mode 197 Definition: 198 Controller programming flows in Network Devices based on the flow 199 entries provisioned through the controller's northbound interface. 201 Discussion: 202 Orchestration systems and SDN applications can define the network 203 forwarding behaviour by programming the controller using proactive 204 flow provisioning. The controller can then program the Network 205 Devices with the pre-provisioned entries. 207 Measurement Units: 208 N/A 210 See Also: 211 None 213 2.1.5. Reactive Flow Provisioning Mode 215 Definition: 216 Controller programming flows in Network Devices based on the traffic 217 received from Network Devices through the controller's southbound 218 interface. 220 Discussion: 221 The SDN controller dynamically decides the forwarding behaviour 222 based on the incoming traffic from the Network Devices. The 223 controller then programs the Network Devices using Reactive Flow 224 Provisioning. 226 Measurement Units: 227 N/A 229 See Also: 230 None 232 2.1.6. Path 234 Definition: 235 Refer to Section 5 in RFC 2330. 237 Discussion: 238 None 240 Measurement Units: 241 N/A 243 See Also: 244 None 246 2.1.7.
Standalone Mode 248 Definition: 250 A single controller handling all control plane functionalities without 251 redundancy, or the ability to provide high availability and/or 252 automatic failover. 254 Discussion: 255 In standalone mode, one controller manages one or more network 256 domains. 258 Measurement Units: 259 N/A 261 See Also: 262 None 264 2.1.8. Cluster/Redundancy Mode 266 Definition: 267 A group of two or more controllers handling all control plane 268 functionalities. 270 Discussion: 271 In cluster mode, multiple controllers are teamed together for the 272 purpose of load sharing and/or high availability. The controllers in 273 the group may work in active/standby (master/slave) or active/active 274 (equal) mode depending on the intended purpose. 276 Measurement Units: 277 N/A 279 See Also: 280 None 282 2.1.9. Asynchronous Message 284 Definition: 285 Any message from the Network Device that is generated for network 286 events. 288 Discussion: 289 Control messages such as flow setup request and response messages are 290 classified as asynchronous messages. The controller has to return a 291 response message. Note that the Network Device will not be in 292 blocking mode and continues to send/receive other control messages. 294 Measurement Units: 296 N/A 298 See Also: 299 None 301 2.1.10. Test Traffic Generator 303 Definition: 304 A Test Traffic Generator is an entity that generates/receives network 305 traffic. 307 Discussion: 308 A Test Traffic Generator can be an entity that interfaces with Network 309 Devices to send/receive real-time network traffic. 311 Measurement Units: 312 N/A 314 See Also: 315 None 317 2.2. Test Configuration/Setup Terms 319 2.2.1. Number of Network Devices 321 Definition: 322 The number of Network Devices present in the defined test topology. 324 Discussion: 325 The Network Devices defined in the test topology can be deployed 326 using real hardware or emulated in hardware platforms.
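For illustration, a test topology parameterized by the Number of Network Devices can be generated programmatically. The sketch below is illustrative only and not part of this terminology; the linear arrangement and device naming mirror Figures 1 and 2, but the helper itself is hypothetical.

```python
def linear_topology(num_devices):
    """Build a linear test topology of emulated Network Devices.

    Returns the device names and the list of inter-device links
    (pairs of names) connecting Device 1 -- Device 2 -- ... -- Device n,
    as drawn in Figures 1 and 2. The naming scheme is hypothetical.
    """
    devices = ["Device %d" % i for i in range(1, num_devices + 1)]
    # Links l1 .. ln-1 join each device to its neighbour.
    links = list(zip(devices, devices[1:]))
    return devices, links

# Example: a 4-device linear topology has 3 inter-device links.
devices, links = linear_topology(4)
```

An emulated or hardware testbed would then instantiate one Network Device per name and one link per pair.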
328 Measurement Units: 329 N/A 331 See Also: 332 None 334 2.2.2. Test Iterations 336 Definition: 337 The number of times the test needs to be repeated. 339 Discussion: 341 The test needs to be repeated for multiple iterations to obtain a 342 reliable metric. It is recommended that this test SHOULD be 343 performed for at least 10 iterations to increase the confidence in the 344 measured result. 346 Measurement Units: 347 N/A 349 See Also: 350 None 352 2.2.3. Test Duration 354 Definition: 355 Defines the duration of test trials for each iteration. 357 Discussion: 358 Test duration forms the basis for the stop criteria for benchmarking 359 tests. A test not completed within this time interval is considered 360 incomplete. 362 Measurement Units: 363 seconds 365 See Also: 366 None 368 2.2.4. Number of Cluster nodes 370 Definition: 371 Defines the number of controllers present in the controller cluster. 373 Discussion: 374 This parameter is relevant when testing the controller performance 375 in clustering/teaming mode. The number of nodes in the cluster MUST 376 be greater than 1. 378 Measurement Units: 379 N/A 381 See Also: 382 None 384 2.3. Benchmarking Terms 386 This section defines metrics for benchmarking the SDN controller. 387 The procedures for measuring the defined metrics are described in the 388 accompanying methodology document. 390 2.3.1. Performance 392 2.3.1.1. Network Topology Discovery Time 394 Definition: 395 The time taken by controller(s) to determine the complete network 396 topology, defined as the interval starting with the first discovery 397 message from the controller(s) at its Southbound interface, ending 398 with all features of the static topology determined. 400 Discussion: 401 Network topology discovery is key for the SDN controller to 402 provision and manage the network, so it is important to measure how 403 quickly the controller discovers the topology to learn the current 404 network state.
This benchmark is obtained by presenting a network 405 topology (Tree, Mesh, or Linear) with the given number of nodes to 406 the controller and waiting for the discovery process to complete. It is 407 expected that the controller supports a network discovery mechanism 408 and uses protocol messages for its discovery process. 410 Measurement Units: 411 milliseconds 413 See Also: 414 None 416 2.3.1.2. Asynchronous Message Processing Time 418 Definition: 419 The time taken by controller(s) to process an asynchronous message, 420 defined as the interval starting with an asynchronous message from a 421 network device after the discovery of all the devices by the 422 controller(s), ending with a response message from the controller(s) 423 at its Southbound interface. 425 Discussion: 426 For SDN to support dynamic network provisioning, it is important to 427 measure how quickly the controller responds to an event triggered 428 from the network. The event could be any notification message 429 generated by a Network Device upon arrival of a new flow, a link going 430 down, etc. This benchmark is obtained by sending asynchronous messages 431 from every connected Network Device, one at a time, for the defined 432 test duration. This test assumes that the controller will respond to 433 the received asynchronous message. 435 Measurement Units: 436 milliseconds 438 See Also: 439 None 441 2.3.1.3. Asynchronous Message Processing Rate 443 Definition: 444 The maximum number of asynchronous messages (session aliveness check 445 messages, new flow arrival notification messages, etc.) that the 446 controller(s) can process, defined as the iteration starting with 447 sending asynchronous messages to the controller(s) at the maximum 448 possible rate and ending with the iteration at which the controller(s) 449 processes all received asynchronous messages without dropping any.
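The iteration in this definition can be pictured as a rate search that backs off until the controller drops nothing. The sketch below is a hypothetical illustration, not the procedure from the methodology document; `send_burst` stands in for a test-harness callback that offers a burst at a given rate and reports drops.

```python
def find_processing_rate(send_burst, start_rate, step=100):
    """Lower the offered asynchronous-message rate until the controller
    processes a full burst without dropping any message.

    send_burst(rate) -> number of messages dropped at that rate
    (a hypothetical test-harness callback).
    Returns the highest observed zero-drop rate, in messages/s.
    """
    rate = start_rate
    while rate > 0:
        dropped = send_burst(rate)
        if dropped == 0:
            return rate      # controller kept up: benchmark value
        rate -= step         # back off and run the next iteration
    return 0

# Simulated controller that can process at most 1500 messages/s:
result = find_processing_rate(lambda r: max(0, r - 1500), 5000)
```

A real harness would replace the lambda with bursts of actual asynchronous messages and count unanswered ones as drops.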
451 Discussion: 452 As SDN assures a flexible network and agile provisioning, it is 453 important to measure how many network events the controller can 454 handle at a time. This benchmark is obtained by sending asynchronous 455 messages from every connected Network Device at the maximum rate that the 456 controller can process without dropping. This test assumes that the 457 controller will respond to all the received asynchronous messages. 459 Measurement Units: 460 Messages processed per second. 462 See Also: 463 None 465 2.3.1.4. Reactive Path Provisioning Time 467 Definition: 468 The time taken by the controller to set up a path reactively between 469 source and destination nodes, defined as the interval starting with 470 the first flow provisioning request message received by the 471 controller(s), ending with the last flow provisioning response 472 message sent from the controller(s) at its Southbound interface. 474 Discussion: 476 As SDN supports agile provisioning, it is important to measure how 477 fast the controller provisions an end-to-end flow in the 478 dataplane. The benchmark is obtained by sending traffic from a 479 source endpoint to the destination endpoint, finding the time 480 difference between the first and the last flow provisioning message 481 exchanged between the controller and the Network Devices for the 482 traffic path. 484 Measurement Units: 485 milliseconds. 487 See Also: 488 None 490 2.3.1.5. Proactive Path Provisioning Time 492 Definition: 493 The time taken by the controller to set up a path proactively between 494 source and destination nodes, defined as the interval starting with 495 the first proactive flow provisioned in the controller(s) at its 496 Northbound interface, ending with the last flow provisioning 497 response message sent from the controller(s) at its Southbound 498 interface.
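Given timestamped captures of the Northbound provisioning events and the Southbound flow provisioning responses, the interval in this definition reduces to a simple difference. A minimal sketch, assuming a hypothetical capture format of millisecond timestamps:

```python
def provisioning_time_ms(northbound_events, southbound_responses):
    """Proactive Path Provisioning Time: interval from the first flow
    provisioned on the Northbound interface to the last flow
    provisioning response sent on the Southbound interface.

    Both arguments are lists of timestamps in milliseconds; this
    capture format is an assumption of the sketch, not of the draft.
    """
    start = min(northbound_events)    # first Northbound provisioning
    end = max(southbound_responses)   # last Southbound response
    return end - start

# Example capture: provisioning begins at t=100 ms; the last
# Southbound response is observed at t=142 ms.
elapsed = provisioning_time_ms([100, 105, 110], [120, 131, 142])
```

The same difference, taken over Southbound request and response captures, yields the Reactive Path Provisioning Time.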
500 Discussion: 501 For SDN to support pre-provisioning of traffic paths from 502 applications, it is important to measure how fast the controller 503 provisions an end-to-end flow in the dataplane. The benchmark is 504 obtained by provisioning a flow on the controller's northbound interface 505 for the traffic to reach from a source to a destination endpoint, 506 finding the time difference between the first and the last flow 507 provisioning message exchanged between the controller and the 508 Network Devices for the traffic path. 510 Measurement Units: 511 milliseconds. 513 See Also: 514 None 516 2.3.1.6. Reactive Path Provisioning Rate 518 Definition: 519 The maximum number of independent paths a controller can 520 concurrently establish between source and destination nodes 521 reactively, defined as the number of paths provisioned by the 522 controller(s) at its Southbound interface for the flow provisioning 523 requests received for path provisioning at its Southbound interface 524 between the start of the test and the expiry of the given test duration. 526 Discussion: 527 For SDN to support agile traffic forwarding, it is important to 528 measure how many end-to-end flows the controller can set up in 529 the dataplane. This benchmark is obtained by sending traffic flows, each 530 with a unique source and destination pair, from the source Network 531 Device and determining the number of frames received at the 532 destination Network Device. 534 Measurement Units: 535 Paths provisioned per second. 537 See Also: 538 None 540 2.3.1.7.
Proactive Path Provisioning Rate 542 Definition: 543 The maximum number of independent paths a controller can 544 concurrently establish between source and destination nodes 545 proactively, defined as the number of paths provisioned by the 546 controller(s) at its Southbound interface for the paths provisioned 547 via its Northbound interface between the start of the test and the 548 expiry of the given test duration. 550 Discussion: 551 For SDN to support pre-provisioning of traffic paths for a larger 552 network from the application, it is important to measure how many 553 end-to-end flows the controller can set up in the dataplane. 554 This benchmark is obtained by sending traffic flows, each with a unique 555 source and destination pair, from the source Network Device. Program 556 the flows on the controller's northbound interface for traffic to reach 557 from each of the unique source and destination pairs and determine 558 the number of frames received at the destination Network Device. 560 Measurement Units: 561 Paths provisioned per second. 563 See Also: 564 None 566 2.3.1.8. Network Topology Change Detection Time 568 Definition: 570 The amount of time required for the controller to detect any changes 571 in the network topology, defined as the interval starting with the 572 notification message received by the controller(s) at its Southbound 573 interface, ending with the first topology rediscovery message sent 574 from the controller(s) at its Southbound interface. 576 Discussion: 577 In order for the controller to support fast network failure 578 recovery, it is critical to measure how fast the controller is able 579 to detect any network-state change events. This benchmark is 580 obtained by triggering a topology change event and measuring the 581 time the controller takes to detect it and initiate a topology re-discovery 582 process. 584 Measurement Units: 585 milliseconds 587 See Also: 588 None 590 2.3.2. Scalability 592 2.3.2.1.
Control Sessions Capacity 594 Definition: 595 The maximum number of control sessions the controller can 596 maintain, defined as the number of sessions that the controller can 597 accept from network devices, starting with the first control 598 session, ending with the last control session that the controller(s) 599 accepts at its Southbound interface. 601 Discussion: 602 Measuring the controller's control sessions capacity is important to 603 determine the controller's system and bandwidth resource 604 requirements. This benchmark is obtained by establishing control 605 sessions with the controller from each of the Network Devices until 606 session establishment fails. The number of sessions that were successfully established 607 will provide the Control Sessions Capacity. 609 Measurement Units: 610 N/A 612 See Also: 613 None 615 2.3.2.2. Network Discovery Size 617 Definition: 618 The network size (number of nodes, links, and hosts) that a 619 controller can discover, defined as the size of a network that the 620 controller(s) can discover, starting from a network topology given 621 by the user for discovery, ending with the topology that the 622 controller(s) could successfully discover. 624 Discussion: 625 For optimal network planning, it is key to measure the maximum 626 network size that the controller can discover. This benchmark is 627 obtained by presenting an initial set of Network Devices for 628 discovery to the controller. Based on the initial discovery, the 629 number of Network Devices is increased or decreased to determine the 630 maximum number of nodes that the controller can discover. 632 Measurement Units: 633 N/A 635 See Also: 636 None 638 2.3.2.3. Forwarding Table Capacity 640 Definition: 641 The maximum number of flow entries that a controller can manage in 642 its Forwarding table.
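This capacity can be pictured as offering new flow entries until installation fails. The sketch below is illustrative only; the `install_flow` hook that reports whether the controller accepted an entry, and the simulated 1000-entry table, are hypothetical.

```python
def forwarding_table_capacity(install_flow):
    """Offer new flow entries until installation fails; the count of
    successfully installed entries is the Forwarding Table Capacity.

    install_flow(flow_id) -> True while the controller accepts the
    entry, False once its forwarding table is full (hypothetical hook).
    """
    count = 0
    while install_flow(count):
        count += 1
    return count

# Simulated controller whose forwarding table holds 1000 entries:
table = set()
def install(flow_id, limit=1000):
    if len(table) >= limit:
        return False       # table full: stop offering entries
    table.add(flow_id)
    return True

capacity = forwarding_table_capacity(install)
```

The same search loop, driven by reactive or proactive provisioning of real flows, yields the benchmark defined above.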
644 Discussion: 645 It is important to measure the capacity of the controller's Forwarding 646 Table to determine the number of flows that the controller could forward 647 without flooding/dropping. This benchmark is obtained by 648 continuously presenting the controller with new flow entries through 649 reactive or proactive flow provisioning mode until the forwarding 650 table becomes full. The maximum number of flow entries that the controller 651 can hold in its Forwarding Table will provide the Forwarding Table 652 Capacity. 654 Measurement Units: 655 Maximum number of flow entries managed. 657 See Also: 658 None 660 2.3.3. Security 662 2.3.3.1. Exception Handling 664 Definition: 665 To determine the effect of handling error packets and notifications 666 on performance tests. 668 Discussion: 669 This benchmark test is to be performed after obtaining the baseline 670 performance of the performance tests defined in Section 2.3.1. This 671 benchmark determines the deviation from the baseline performance due 672 to the handling of error or failure messages from the connected 673 Network Devices. 675 Measurement Units: 676 N/A 678 See Also: 679 None 681 2.3.3.2. Denial of Service Handling 683 Definition: 684 To determine the effect of handling denial of service (DoS) attacks 685 on performance and scalability tests. 687 Discussion: 688 This benchmark test is to be performed after obtaining the baseline 689 performance of the performance and scalability tests defined in 690 Section 2.3.1 and Section 2.3.2. This benchmark determines the 691 deviation from the baseline performance due to the handling of 692 denial of service attacks on the controller. 694 Measurement Units: 695 Deviation of baseline metrics while handling Denial of Service 696 Attacks. 698 See Also: 699 None 701 2.3.4. Reliability 703 2.3.4.1.
Controller Failover Time 705 Definition: 706 The time taken to switch from an active controller to the backup 707 controller, when the controllers work in redundancy mode and the 708 active controller fails, defined as the interval starting with the 709 active controller being brought down, ending with the first re-discovery 710 message received from the new controller at its Southbound 711 interface. 713 Discussion: 714 This benchmark determines the impact of provisioning new flows when 715 controllers are teamed and the active controller fails. 717 Measurement Units: 718 milliseconds. 720 See Also: 721 None 723 2.3.4.2. Network Re-Provisioning Time 725 Definition: 726 The time taken by the Controller to re-route the traffic when there 727 is a failure in existing traffic paths, defined as the interval 728 starting from the first failure notification message received by the 729 controller, ending with the last flow re-provisioning message sent 730 by the controller at its Southbound interface. 732 Discussion: 733 This benchmark determines the controller's re-provisioning ability 734 upon network failures. This benchmark test assumes the following: 735 i. The network topology supports a redundant path between the 736 source and destination endpoints. 737 ii. The controller does not pre-provision the redundant path. 739 Measurement Units: 740 milliseconds. 742 See Also: 743 None 745 3. Test Setup 747 This section provides common reference topologies that are later 748 referred to in individual tests defined in the companion methodology 749 document. 751 3.1.
Test setup - Controller working in Standalone Mode 753 +-----------------------------------------------------------+ 754 | Application Plane Test Emulator | 755 | | 756 | +-----------------+ +-------------+ | 757 | | Application | | Service | | 758 | +-----------------+ +-------------+ | 759 | | 760 +-----------------------------+(I2)-------------------------+ 761 | 762 | 763 | (Northbound interface) 764 +-------------------------------+ 765 | +----------------+ | 766 | | SDN Controller | | 767 | +----------------+ | 768 | | 769 | Device Under Test (DUT) | 770 +-------------------------------+ 771 | (Southbound interface) 772 | 773 | 774 +-----------------------------+(I1)-------------------------+ 775 | | 776 | +-----------+ +-----------+ | 777 | | Network |l1 ln-1| Network | | 778 | | Device 1 |---- .... ----| Device n | | 779 | +-----------+ +-----------+ | 780 | |l0 |ln | 781 | | | | 782 | | | | 783 | +---------------+ +---------------+ | 784 | | Test Traffic | | Test Traffic | | 785 | | Generator | | Generator | | 786 | | (TP1) | | (TP2) | | 787 | +---------------+ +---------------+ | 788 | | 789 | Forwarding Plane Test Emulator | 790 +-----------------------------------------------------------+ 791 Figure 1 793 3.2. 
Test setup - Controller working in Cluster Mode 795 +-----------------------------------------------------------+ 796 | Application Plane Test Emulator | 797 | | 798 | +-----------------+ +-------------+ | 799 | | Application | | Service | | 800 | +-----------------+ +-------------+ | 801 | | 802 +-----------------------------+(I2)-------------------------+ 803 | 804 | 805 | (Northbound interface) 806 +---------------------------------------------------------+ 807 | | 808 | ------------------ ------------------ | 809 | | SDN Controller 1 | <--E/W--> | SDN Controller n | | 810 | ------------------ ------------------ | 811 | | 812 | Device Under Test (DUT) | 813 +---------------------------------------------------------+ 814 | (Southbound interface) 815 | 816 | 817 +-----------------------------+(I1)-------------------------+ 818 | | 819 | +-----------+ +-----------+ | 820 | | Network |l1 ln-1| Network | | 821 | | Device 1 |---- .... ----| Device n | | 822 | +-----------+ +-----------+ | 823 | |l0 |ln | 824 | | | | 825 | | | | 826 | +---------------+ +---------------+ | 827 | | Test Traffic | | Test Traffic | | 828 | | Generator | | Generator | | 829 | | (TP1) | | (TP2) | | 830 | +---------------+ +---------------+ | 831 | | 832 | Forwarding Plane Test Emulator | 833 +-----------------------------------------------------------+ 834 Figure 2 836 4. Test Coverage 838 + -----------------------------------------------------------------+ 839 | | Speed | Scalability | Reliability | 840 + -----------+-------------------+---------------+-----------------+ 841 | | 1. Network Topolo-|1. Network | | 842 | | -gy Discovery | Discovery | | 843 | | | Size | | 844 | | 2. Reactive Path | | | 845 | | Provisioning | | | 846 | | Time | | | 847 | | | | | 848 | | 3. Proactive Path | | | 849 | | Provisioning | | | 850 | Setup | Time | | | 851 | | | | | 852 | | 4. Reactive Path | | | 853 | | Provisioning | | | 854 | | Rate | | | 855 | | | | | 856 | | 5. 
Proactive Path | | | 857 | | Provisioning | | | 858 | | Rate | | | 859 | | | | | 860 +------------+-------------------+---------------+-----------------+ 861 | | 1. Asynchronous |1. Control |1. Network | 862 | | Message Proces-| Sessions | Topology | 863 | | -sing Rate | Capacity | Change | 864 | | | | Detection Time| 865 | | 2. Asynchronous |2. Forwarding | | 866 | | Message Proces-| Table |2. Exception | 867 | | -sing Time | Capacity | Handling | 868 | Operational| | | | 869 | | | |3. Denial of | 870 | | | | Service | 871 | | | | Handling | 872 | | | | | 873 | | | |4. Network Re- | 874 | | | | Provisioning | 875 | | | | Time | 876 | | | | | 877 +------------+-------------------+---------------+-----------------+ 878 | | | | | 879 | Tear Down | | |1. Controller | 880 | | | | Failover Time | 881 +------------+-------------------+---------------+-----------------+ 883 5. References 885 5.1. Normative References 887 [RFC7426] E. Haleplidis, K. Pentikousis, S. Denazis, J. Hadi Salim, 888 D. Meyer, O. Koufopavlou "Software-Defined Networking 889 (SDN): Layers and Architecture Terminology", RFC 7426, 890 January 2015. 892 [RFC4689] S. Poretsky, J. Perser, S. Erramilli, S. Khurana 893 "Terminology for Benchmarking Network-layer Traffic 894 Control Mechanisms", RFC 4689, October 2006. 896 [RFC2330] V. Paxson, G. Almes, J. Mahdavi, M. Mathis, 897 "Framework for IP Performance Metrics", RFC 2330, 898 May 1998. 900 [OpenFlow Switch Specification] ONF,"OpenFlow Switch Specification" 901 Version 1.4.0 (Wire Protocol 0x05), October 14, 2013. 903 [I-D.sdn-controller-benchmark-meth] Bhuvaneswaran.V, Anton Basil, 904 Mark.T, Vishwas Manral, Sarah Banks "Benchmarking 905 Methodology for SDN Controller Performance", 906 draft-ietf-bmwg-sdn-controller-benchmark-meth-03 907 (Work in progress), January 8, 2017 909 5.2. 
Informative References 911 [OpenContrail] Ankur Singla, Bruno Rijsman, "OpenContrail 912 Architecture Documentation", 913 http://opencontrail.org/opencontrail-architecture-documentation 915 [OpenDaylight] OpenDaylight Controller:Architectural Framework, 916 https://wiki.opendaylight.org/view/OpenDaylight_Controller 918 6. IANA Considerations 920 This document does not have any IANA requests. 922 7. Security Considerations 924 Security issues are not discussed in this memo. 926 8. Acknowledgements 928 The authors would like to acknowledge Al Morton (AT&T) for the 929 significant contributions to the earlier versions of this document. 930 The authors would like to thank the following individuals for 931 providing their valuable comments to the earlier versions of this 932 document: Sandeep Gangadharan (HP), M. Georgescu (NAIST), Andrew 933 McGregor (Google), Scott Bradner (Harvard University), Jay Karthik 934 (Cisco), Ramakrishnan (Dell), Khasanov Boris (Huawei). 936 9. Authors' Addresses 938 Bhuvaneswaran Vengainathan 939 Veryx Technologies Inc. 940 1 International Plaza, Suite 550 941 Philadelphia 942 PA 19113 944 Email: bhuvaneswaran.vengainathan@veryxtech.com 946 Anton Basil 947 Veryx Technologies Inc. 948 1 International Plaza, Suite 550 949 Philadelphia 950 PA 19113 952 Email: anton.basil@veryxtech.com 954 Mark Tassinari 955 Hewlett-Packard, 956 8000 Foothills Blvd, 957 Roseville, CA 95747 959 Email: mark.tassinari@hpe.com 961 Vishwas Manral 962 Nano Sec, 963 CA 965 Email: vishwas.manral@gmail.com 967 Sarah Banks 968 VSS Monitoring 969 Email: sbanks@encrypted.net