1 Internet-Draft Bhuvaneswaran Vengainathan 2 Network Working Group Anton Basil 3 Intended Status: Informational Veryx Technologies 4 Expires: December 29, 2017 Mark Tassinari 5 Hewlett-Packard 6 Vishwas Manral 7 Nano Sec 8 Sarah Banks 9 VSS Monitoring 10 June 29, 2017 12 Terminology for Benchmarking SDN Controller Performance 13 draft-ietf-bmwg-sdn-controller-benchmark-term-04 15 Abstract 17 This document defines terminology for benchmarking an SDN 18 controller's control plane performance. It extends the terminology 19 already defined in RFC 7426 for the purpose of benchmarking SDN 20 controllers.
The terms provided in this document help to benchmark an 21 SDN controller's performance independent of the controller's 22 supported protocols and/or network services. A mechanism for 23 benchmarking the performance of SDN controllers is defined in the 24 companion methodology document. These two documents provide a 25 standard mechanism to measure and evaluate the performance of 26 various controller implementations. 28 Status of this Memo 30 This Internet-Draft is submitted in full conformance with the 31 provisions of BCP 78 and BCP 79. 33 Internet-Drafts are working documents of the Internet Engineering 34 Task Force (IETF). Note that other groups may also distribute 35 working documents as Internet-Drafts. The list of current Internet- 36 Drafts is at http://datatracker.ietf.org/drafts/current. 38 Internet-Drafts are draft documents valid for a maximum of six 39 months and may be updated, replaced, or obsoleted by other documents 40 at any time. It is inappropriate to use Internet-Drafts as reference 41 material or to cite them other than as "work in progress." 43 This Internet-Draft will expire on December 29, 2017. 45 Copyright Notice 47 Copyright (c) 2017 IETF Trust and the persons identified as the 48 document authors. All rights reserved. 50 This document is subject to BCP 78 and the IETF Trust's Legal 51 Provisions Relating to IETF Documents 52 (http://trustee.ietf.org/license-info) in effect on the date of 53 publication of this document. Please review these documents 54 carefully, as they describe your rights and restrictions with 55 respect to this document. Code Components extracted from this 56 document must include Simplified BSD License text as described in 57 Section 4.e of the Trust Legal Provisions and are provided without 58 warranty as described in the Simplified BSD License. 60 Table of Contents 62 1. Introduction...................................................4 63 2. Term Definitions...............................................4 64 2.1.
SDN Terms.................................................4 65 2.1.1. Flow.................................................4 66 2.1.2. Northbound Interface.................................5 67 2.1.3. Controller Forwarding Table..........................5 68 2.1.4. Proactive Flow Provisioning Mode.....................5 69 2.1.5. Reactive Flow Provisioning Mode......................6 70 2.1.6. Path.................................................6 71 2.1.7. Standalone Mode......................................6 72 2.1.8. Cluster/Redundancy Mode..............................7 73 2.1.9. Asynchronous Message.................................7 74 2.1.10. Test Traffic Generator..............................8 75 2.2. Test Configuration/Setup Terms............................8 76 2.2.1. Number of Network Devices............................8 77 2.2.2. Test Iterations......................................8 78 2.2.3. Test Duration........................................9 79 2.2.4. Number of Cluster nodes..............................9 80 2.3. Benchmarking Terms.......................................10 81 2.3.1. Performance.........................................10 82 2.3.1.1. Network Topology Discovery Time................10 83 2.3.1.2. Asynchronous Message Processing Time...........10 84 2.3.1.3. Asynchronous Message Processing Rate...........11 85 2.3.1.4. Reactive Path Provisioning Time................11 86 2.3.1.5. Proactive Path Provisioning Time...............12 87 2.3.1.6. Reactive Path Provisioning Rate................12 88 2.3.1.7. Proactive Path Provisioning Rate...............13 89 2.3.1.8. Network Topology Change Detection Time.........13 90 2.3.2. Scalability.........................................14 91 2.3.2.1. Control Sessions Capacity......................14 92 2.3.2.2. Network Discovery Size.........................15 93 2.3.2.3. Forwarding Table Capacity......................15 94 2.3.3. Security............................................16 95 2.3.3.1. 
Exception Handling.............................16 96 2.3.3.2. Denial of Service Handling.....................16 97 2.3.4. Reliability.........................................17 98 2.3.4.1. Controller Failover Time.......................17 99 2.3.4.2. Network Re-Provisioning Time...................17 100 3. Test Setup....................................................18 101 3.1. Test setup - Controller working in Standalone Mode.......18 102 3.2. Test setup - Controller working in Cluster Mode..........19 103 4. Test Coverage.................................................20 104 5. References....................................................21 105 5.1. Normative References.....................................21 106 5.2. Informative References...................................21 107 6. IANA Considerations...........................................21 108 7. Security Considerations.......................................22 109 8. Acknowledgements..............................................22 110 9. Authors' Addresses............................................22 112 1. Introduction 114 Software Defined Networking (SDN) is a networking architecture in 115 which network control is decoupled from the underlying forwarding 116 function and is placed in a centralized location called the SDN 117 controller. The SDN controller abstracts the underlying network and 118 offers a global view of the overall network to applications and 119 business logic. Thus, an SDN controller provides the flexibility to 120 program, control, and manage network behaviour dynamically through 121 standard interfaces. Since the network controls are logically 122 centralized, the need to benchmark the SDN controller performance 123 becomes significant. This document defines terms to benchmark 124 various controller designs for performance, scalability, reliability 125 and security, independent of northbound and southbound protocols. 
126 The methodologies are defined in [I-D.sdn-controller-benchmark-meth]. 128 Conventions used in this document 130 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 131 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 132 document are to be interpreted as described in RFC 2119. 134 2. Term Definitions 136 2.1. SDN Terms 138 The terms defined in this section are extensions to the terms 139 defined in [RFC7426] "Software-Defined Networking (SDN): Layers and 140 Architecture Terminology". This RFC should be consulted before 141 attempting to make use of this document. 143 2.1.1. Flow 145 Definition: 146 The definition of Flow is the same as the microflow defined in [RFC4689] 147 Section 3.1.5. 149 Discussion: 150 A flow can be a set of packets having the same source address, destination 151 address, source port, and destination port, or any 152 combination of these. 154 Measurement Units: 155 N/A 157 See Also: 158 None 160 2.1.2. Northbound Interface 162 Definition: 163 The definition of northbound interface is the same as the Service Interface 164 defined in [RFC7426]. 166 Discussion: 167 The northbound interface allows SDN applications and orchestration 168 systems to program and retrieve the network information through the 169 SDN controller. 171 Measurement Units: 172 N/A 174 See Also: 175 None 177 2.1.3. Controller Forwarding Table 179 Definition: 180 A controller forwarding table contains flow entries learned in one 181 of two ways: first, entries could be learned from traffic received 182 through the data plane, or, second, these entries could be 183 statically provisioned on the controller, and distributed to devices 184 via the southbound interface. 186 Discussion: 187 The controller forwarding table has an aging mechanism which will be 188 applied only to dynamically learnt entries. 190 Measurement Units: 191 N/A 193 See Also: 194 None 196 2.1.4.
Proactive Flow Provisioning Mode 198 Definition: 199 Controller programming flows in Network Devices based on the flow 200 entries provisioned through the controller's northbound interface. 202 Discussion: 203 Orchestration systems and SDN applications can define the network 204 forwarding behaviour by programming the controller using proactive 205 flow provisioning. The controller can then program the Network 206 Devices with the pre-provisioned entries. 208 Measurement Units: 209 N/A 211 See Also: 212 None 214 2.1.5. Reactive Flow Provisioning Mode 216 Definition: 217 Controller programming flows in Network Devices based on the traffic 218 received from Network Devices through the controller's southbound 219 interface. 221 Discussion: 222 The SDN controller dynamically decides the forwarding behaviour 223 based on the incoming traffic from the Network Devices. The 224 controller then programs the Network Devices using Reactive Flow 225 Provisioning. 227 Measurement Units: 228 N/A 230 See Also: 231 None 233 2.1.6. Path 235 Definition: 236 Refer to Section 5 in [RFC2330]. 238 Discussion: 239 None 241 Measurement Units: 242 N/A 244 See Also: 245 None 247 2.1.7. Standalone Mode 249 Definition: 251 A single controller handling all control plane functionalities without 252 redundancy, or the ability to provide high availability and/or 253 automatic failover. 255 Discussion: 256 In standalone mode, one controller manages one or more network 257 domains. 259 Measurement Units: 260 N/A 262 See Also: 263 None 265 2.1.8. Cluster/Redundancy Mode 267 Definition: 268 A group of two or more controllers handling all control plane 269 functionalities. 271 Discussion: 272 In cluster mode, multiple controllers are teamed together for the 273 purpose of load sharing and/or high availability. The controllers in 274 the group may work in active/standby (master/slave) or active/active 275 (equal) mode depending on the intended purpose.
277 Measurement Units: 278 N/A 280 See Also: 281 None 283 2.1.9. Asynchronous Message 285 Definition: 286 Any message from the Network Device that is generated for network 287 events. 289 Discussion: 290 Control messages such as flow setup request and response messages are 291 classified as asynchronous messages. The controller has to return a 292 response message. Note that the Network Device will not be in 293 blocking mode and continues to send/receive other control messages. 295 Measurement Units: 297 N/A 299 See Also: 300 None 302 2.1.10. Test Traffic Generator 304 Definition: 305 Test Traffic Generator is an entity that generates/receives network 306 traffic. 308 Discussion: 309 Test Traffic Generator can be an entity that interfaces with Network 310 Devices to send/receive real-time network traffic. 312 Measurement Units: 313 N/A 315 See Also: 316 None 318 2.2. Test Configuration/Setup Terms 320 2.2.1. Number of Network Devices 322 Definition: 323 The number of Network Devices present in the defined test topology. 325 Discussion: 326 The Network Devices defined in the test topology can be deployed 327 using real hardware or emulated in hardware platforms. 329 Measurement Units: 330 N/A 332 See Also: 333 None 335 2.2.2. Test Iterations 337 Definition: 338 The number of times the test needs to be repeated. 340 Discussion: 342 The test needs to be repeated for multiple iterations to obtain a 343 reliable metric. It is recommended that this test SHOULD be 344 performed for at least 10 iterations to increase the confidence in the 345 measured result. 347 Measurement Units: 348 N/A 350 See Also: 351 None 353 2.2.3. Test Duration 355 Definition: 356 Defines the duration of test trials for each iteration. 358 Discussion: 359 Test duration forms the basis for stop criteria for benchmarking 360 tests. A test that is not completed within this time interval is considered 361 incomplete. 363 Measurement Units: 364 seconds 366 See Also: 367 None 369 2.2.4.
Number of Cluster nodes 371 Definition: 372 Defines the number of controllers present in the controller cluster. 374 Discussion: 375 This parameter is relevant when testing the controller performance 376 in clustering/teaming mode. The number of nodes in the cluster MUST 377 be greater than 1. 379 Measurement Units: 380 N/A 382 See Also: 383 None 385 2.3. Benchmarking Terms 387 This section defines metrics for benchmarking the SDN controller. 388 The procedures for measuring the defined metrics are specified in the 389 accompanying methodology document. 391 2.3.1. Performance 393 2.3.1.1. Network Topology Discovery Time 395 Definition: 396 The time taken by controller(s) to determine the complete network 397 topology, defined as the interval starting with the first discovery 398 message from the controller(s) at its Southbound interface, ending 399 with all features of the static topology determined. 401 Discussion: 402 Network topology discovery is key for the SDN controller to 403 provision and manage the network. So it is important to measure how 404 quickly the controller discovers the topology to learn the current 405 network state. This benchmark is obtained by presenting a network 406 topology (Tree, Mesh or Linear) with the given number of nodes to 407 the controller and waiting for the discovery process to complete. It is 408 expected that the controller supports a network discovery mechanism 409 and uses protocol messages for its discovery process. 411 Measurement Units: 412 milliseconds 414 See Also: 415 None 417 2.3.1.2. Asynchronous Message Processing Time 419 Definition: 420 The time taken by controller(s) to process an asynchronous message, 421 defined as the interval starting with an asynchronous message from a 422 network device after the discovery of all the devices by the 423 controller(s), ending with a response message from the controller(s) 424 at its Southbound interface.
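As an informal illustration only (not part of this specification), the interval defined above could be computed from captured timestamps along the following lines. The timestamp pairs below are hypothetical; in practice they would come from a capture at the controller's Southbound interface.

```python
# Illustrative sketch: compute Asynchronous Message Processing Time (ms)
# from (t_async_msg, t_response) timestamp pairs, where t_async_msg is
# when the Network Device sent the asynchronous message and t_response
# is when the controller's response was seen at its Southbound
# interface. All values here are hypothetical, in seconds.

def processing_times_ms(samples):
    """Return the per-message processing time in milliseconds."""
    return [(resp - req) * 1000.0 for req, resp in samples]

def average_ms(samples):
    """Average processing time across all sampled messages."""
    times = processing_times_ms(samples)
    return sum(times) / len(times)

# Hypothetical capture: three messages, processed in ~2 ms, ~4 ms, ~3 ms.
capture = [(10.000, 10.002), (10.010, 10.014), (10.020, 10.023)]
print(round(average_ms(capture), 1))  # -> 3.0
```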
426 Discussion: 427 For SDN to support dynamic network provisioning, it is important to 428 measure how quickly the controller responds to an event triggered 429 from the network. The event could be any notification message 430 generated by a Network Device upon the arrival of a new flow, a link 431 going down, etc. This benchmark is obtained by sending asynchronous messages 432 from every connected Network Device one at a time for the defined 433 test duration. This test assumes that the controller will respond to 434 the received asynchronous message. 436 Measurement Units: 437 milliseconds 439 See Also: 440 None 442 2.3.1.3. Asynchronous Message Processing Rate 444 Definition: 445 The maximum number of asynchronous messages (session aliveness check 446 message, new flow arrival notification message etc.) that the 447 controller(s) can process, defined as the iteration starting with 448 sending asynchronous messages to the controller(s) at the maximum 449 possible rate and ending with the iteration at which the controller(s) 450 processes the received asynchronous messages without dropping any. 452 Discussion: 453 As SDN assures a flexible network and agile provisioning, it is 454 important to measure how many network events the controller can 455 handle at a time. This benchmark is obtained by sending asynchronous 456 messages from every connected Network Device at the maximum rate that the 457 controller can process without dropping messages. This test assumes that the 458 controller will respond to all the received asynchronous messages. 460 Measurement Units: 461 Messages processed per second. 463 See Also: 464 None 466 2.3.1.4.
Reactive Path Provisioning Time 468 Definition: 469 The time taken by the controller to set up a path reactively between 470 source and destination nodes, defined as the interval starting with 471 the first flow provisioning request message received by the 472 controller(s), ending with the last flow provisioning response 473 message sent from the controller(s) at its Southbound interface. 475 Discussion: 477 As SDN supports agile provisioning, it is important to measure how 478 fast the controller provisions an end-to-end flow in the 479 dataplane. The benchmark is obtained by sending traffic from a 480 source endpoint to the destination endpoint, and finding the time 481 difference between the first and the last flow provisioning message 482 exchanged between the controller and the Network Devices for the 483 traffic path. 485 Measurement Units: 486 milliseconds. 488 See Also: 489 None 491 2.3.1.5. Proactive Path Provisioning Time 493 Definition: 494 The time taken by the controller to set up a path proactively between 495 source and destination nodes, defined as the interval starting with 496 the first proactive flow provisioned in the controller(s) at its 497 Northbound interface, ending with the last flow provisioning 498 response message sent from the controller(s) at its Southbound 499 interface. 501 Discussion: 502 For SDN to support pre-provisioning of a traffic path from an 503 application, it is important to measure how fast the controller 504 provisions an end-to-end flow in the dataplane. The benchmark is 505 obtained by provisioning a flow on the controller's northbound interface 506 for the traffic to reach from a source to a destination endpoint, 507 and finding the time difference between the first and the last flow 508 provisioning message exchanged between the controller and the 509 Network Devices for the traffic path. 511 Measurement Units: 512 milliseconds. 514 See Also: 515 None 517 2.3.1.6.
Reactive Path Provisioning Rate 519 Definition: 520 The maximum number of independent paths a controller can 521 concurrently establish between source and destination nodes 522 reactively, defined as the number of paths provisioned by the 523 controller(s) at its Southbound interface for the flow provisioning 524 requests received for path provisioning at its Southbound interface 525 between the start of the test and the expiry of the given test duration. 527 Discussion: 528 For SDN to support agile traffic forwarding, it is important to 529 measure how many end-to-end flows the controller can set up in 530 the dataplane. This benchmark is obtained by sending traffic flows, each 531 with a unique source and destination pair, from the source Network 532 Device and determining the number of frames received at the 533 destination Network Device. 535 Measurement Units: 536 Paths provisioned per second. 538 See Also: 539 None 541 2.3.1.7. Proactive Path Provisioning Rate 543 Definition: 544 The maximum number of independent paths a controller can 545 concurrently establish between source and destination nodes 546 proactively, defined as the number of paths provisioned by the 547 controller(s) at its Southbound interface for the paths provisioned 548 in its Northbound interface between the start of the test and the 549 expiry of the given test duration. 551 Discussion: 552 For SDN to support pre-provisioning of traffic paths for a larger 553 network from the application, it is important to measure how many 554 end-to-end flows the controller can set up in the dataplane. 555 This benchmark is obtained by sending traffic flows, each with a unique 556 source and destination pair, from the source Network Device. The flows are programmed 557 on the controller's northbound interface for traffic to reach 558 from each of the unique source and destination pairs, and the number 559 of frames received at the destination Network Device is determined. 561 Measurement Units: 562 Paths provisioned per second.
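As an informal illustration only (not part of this specification), both provisioning-rate metrics reduce to a simple ratio once the number of successfully provisioned paths and the test duration are known. The figures below are hypothetical.

```python
# Illustrative sketch: derive a path provisioning rate (paths per
# second) from the count of paths successfully provisioned (e.g.,
# verified by frames arriving at the destination Network Device)
# over the given test duration. All values are hypothetical.

def provisioning_rate(paths_provisioned, test_duration_seconds):
    """Paths provisioned per second over the test duration."""
    if test_duration_seconds <= 0:
        raise ValueError("test duration must be positive")
    return paths_provisioned / test_duration_seconds

# Hypothetical run: 6000 unique paths verified over a 60-second test.
print(provisioning_rate(6000, 60))  # -> 100.0
```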
564 See Also: 565 None 567 2.3.1.8. Network Topology Change Detection Time 569 Definition: 571 The amount of time required for the controller to detect any changes 572 in the network topology, defined as the interval starting with the 573 notification message received by the controller(s) at its Southbound 574 interface, ending with the first topology rediscovery message sent 575 from the controller(s) at its Southbound interface. 577 Discussion: 578 For the controller to support fast network failure 579 recovery, it is critical to measure how fast the controller is able 580 to detect any network-state change events. This benchmark is 581 obtained by triggering a topology change event and measuring the 582 time the controller takes to detect and initiate a topology re-discovery 583 process. 585 Measurement Units: 586 milliseconds 588 See Also: 589 None 591 2.3.2. Scalability 593 2.3.2.1. Control Sessions Capacity 595 Definition: 596 The maximum number of control sessions the controller can 597 maintain, defined as the number of sessions that the controller can 598 accept from network devices, starting with the first control 599 session, ending with the last control session that the controller(s) 600 accepts at its Southbound interface. 602 Discussion: 603 Measuring the controller's control sessions capacity is important to 604 determine the controller's system and bandwidth resource 605 requirements. This benchmark is obtained by establishing control 606 sessions with the controller from each of the Network Devices until 607 session establishment fails. The number of sessions that were successfully established 608 will provide the Control Sessions Capacity. 610 Measurement Units: 611 N/A 613 See Also: 614 None 616 2.3.2.2.
Network Discovery Size 618 Definition: 619 The network size (number of nodes, links, and hosts) that a 620 controller can discover, defined as the size of a network that the 621 controller(s) can discover, starting from a network topology given 622 by the user for discovery, ending with the topology that the 623 controller(s) could successfully discover. 625 Discussion: 626 For optimal network planning, it is key to measure the maximum 627 network size that the controller can discover. This benchmark is 628 obtained by presenting an initial set of Network Devices for 629 discovery to the controller. Based on the initial discovery, the 630 number of Network Devices is increased or decreased to determine the 631 maximum number of nodes that the controller can discover. 633 Measurement Units: 634 N/A 636 See Also: 637 None 639 2.3.2.3. Forwarding Table Capacity 641 Definition: 642 The maximum number of flow entries that a controller can manage in 643 its Forwarding table. 645 Discussion: 646 It is important to measure the capacity of the controller's Forwarding 647 Table to determine the number of flows that the controller can forward 648 without flooding/dropping. This benchmark is obtained by 649 continuously presenting the controller with new flow entries through 650 reactive or proactive flow provisioning mode until the forwarding 651 table becomes full. The maximum number of flow entries that the controller 652 can hold in its Forwarding Table will provide the Forwarding Table 653 Capacity. 655 Measurement Units: 656 Maximum number of flow entries managed. 658 See Also: 659 None 661 2.3.3. Security 663 2.3.3.1. Exception Handling 665 Definition: 666 To determine the effect of handling error packets and notifications 667 on performance tests. 669 Discussion: 670 This benchmark test is to be performed after obtaining the baseline 671 performance of the performance tests defined in Section 2.3.1.
This 672 benchmark determines the deviation from the baseline performance due 673 to the handling of error or failure messages from the connected 674 Network Devices. 676 Measurement Units: 677 N/A 679 See Also: 680 None 682 2.3.3.2. Denial of Service Handling 684 Definition: 685 To determine the effect of handling denial of service (DoS) attacks 686 on performance and scalability tests. 688 Discussion: 689 This benchmark test is to be performed after obtaining the baseline 690 performance of the performance and scalability tests defined in 691 Section 2.3.1 and Section 2.3.2. This benchmark determines the 692 deviation from the baseline performance due to the handling of 693 denial of service attacks on the controller. 695 Measurement Units: 696 Deviation of baseline metrics while handling Denial of Service 697 Attacks. 699 See Also: 700 None 702 2.3.4. Reliability 704 2.3.4.1. Controller Failover Time 706 Definition: 707 The time taken to switch from an active controller to the backup 708 controller, when the controllers work in redundancy mode and the 709 active controller fails, defined as the interval starting with the 710 active controller being brought down, ending with the first re-discovery 711 message received from the new controller at its Southbound 712 interface. 714 Discussion: 715 This benchmark determines the impact of provisioning new flows when 716 controllers are teamed and the active controller fails. 718 Measurement Units: 719 milliseconds. 721 See Also: 722 None 724 2.3.4.2. Network Re-Provisioning Time 726 Definition: 727 The time taken by the Controller to re-route the traffic when there 728 is a failure in existing traffic paths, defined as the interval 729 starting from the first failure notification message received by the 730 controller, ending with the last flow re-provisioning message sent 731 by the controller at its Southbound interface.
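As an informal illustration only (not part of this specification), the Network Re-Provisioning Time interval could be derived from an event capture as sketched below. The event timestamps are hypothetical.

```python
# Illustrative sketch: compute Network Re-Provisioning Time (ms) as the
# interval from the first failure notification received by the
# controller to the last flow re-provisioning message it sends at its
# Southbound interface. Timestamps below are hypothetical, in seconds.

def re_provisioning_time_ms(failure_notifications, reprovision_messages):
    """Interval (ms) from first failure notification to last
    re-provisioning message, per the definition above."""
    start = min(failure_notifications)  # first failure notification
    end = max(reprovision_messages)     # last re-provisioning message
    return (end - start) * 1000.0

# Hypothetical capture: failure notification at t=5.000 s,
# re-provisioning messages sent at t=5.020 s and t=5.045 s.
print(round(re_provisioning_time_ms([5.000], [5.020, 5.045]), 1))  # -> 45.0
```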
733 Discussion: 734 This benchmark determines the controller's re-provisioning ability 735 upon network failures. This benchmark test assumes the following: 736 i. The network topology supports a redundant path between 737 the source and destination endpoints. 738 ii. The Controller does not pre-provision the redundant path. 740 Measurement Units: 741 milliseconds. 743 See Also: 744 None 746 3. Test Setup 748 This section provides common reference topologies that are later 749 referred to in individual tests defined in the companion methodology 750 document. 752 3.1. Test setup - Controller working in Standalone Mode 754 +-----------------------------------------------------------+ 755 | Application Plane Test Emulator | 756 | | 757 | +-----------------+ +-------------+ | 758 | | Application | | Service | | 759 | +-----------------+ +-------------+ | 760 | | 761 +-----------------------------+(I2)-------------------------+ 762 | 763 | 764 | (Northbound interface) 765 +-------------------------------+ 766 | +----------------+ | 767 | | SDN Controller | | 768 | +----------------+ | 769 | | 770 | Device Under Test (DUT) | 771 +-------------------------------+ 772 | (Southbound interface) 773 | 774 | 775 +-----------------------------+(I1)-------------------------+ 776 | | 777 | +-----------+ +-----------+ | 778 | | Network |l1 ln-1| Network | | 779 | | Device 1 |---- .... ----| Device n | | 780 | +-----------+ +-----------+ | 781 | |l0 |ln | 782 | | | | 783 | | | | 784 | +---------------+ +---------------+ | 785 | | Test Traffic | | Test Traffic | | 786 | | Generator | | Generator | | 787 | | (TP1) | | (TP2) | | 788 | +---------------+ +---------------+ | 789 | | 790 | Forwarding Plane Test Emulator | 791 +-----------------------------------------------------------+ 792 Figure 1 794 3.2.
Test setup - Controller working in Cluster Mode 796 +-----------------------------------------------------------+ 797 | Application Plane Test Emulator | 798 | | 799 | +-----------------+ +-------------+ | 800 | | Application | | Service | | 801 | +-----------------+ +-------------+ | 802 | | 803 +-----------------------------+(I2)-------------------------+ 804 | 805 | 806 | (Northbound interface) 807 +---------------------------------------------------------+ 808 | | 809 | ------------------ ------------------ | 810 | | SDN Controller 1 | <--E/W--> | SDN Controller n | | 811 | ------------------ ------------------ | 812 | | 813 | Device Under Test (DUT) | 814 +---------------------------------------------------------+ 815 | (Southbound interface) 816 | 817 | 818 +-----------------------------+(I1)-------------------------+ 819 | | 820 | +-----------+ +-----------+ | 821 | | Network |l1 ln-1| Network | | 822 | | Device 1 |---- .... ----| Device n | | 823 | +-----------+ +-----------+ | 824 | |l0 |ln | 825 | | | | 826 | | | | 827 | +---------------+ +---------------+ | 828 | | Test Traffic | | Test Traffic | | 829 | | Generator | | Generator | | 830 | | (TP1) | | (TP2) | | 831 | +---------------+ +---------------+ | 832 | | 833 | Forwarding Plane Test Emulator | 834 +-----------------------------------------------------------+ 835 Figure 2 837 4. Test Coverage 839 + -----------------------------------------------------------------+ 840 | | Speed | Scalability | Reliability | 841 + -----------+-------------------+---------------+-----------------+ 842 | | 1. Network Topolo-|1. Network | | 843 | | -gy Discovery | Discovery | | 844 | | | Size | | 845 | | 2. Reactive Path | | | 846 | | Provisioning | | | 847 | | Time | | | 848 | | | | | 849 | | 3. Proactive Path | | | 850 | | Provisioning | | | 851 | Setup | Time | | | 852 | | | | | 853 | | 4. Reactive Path | | | 854 | | Provisioning | | | 855 | | Rate | | | 856 | | | | | 857 | | 5. 
Proactive Path | | | 858 | | Provisioning | | | 859 | | Rate | | | 860 | | | | | 861 +------------+-------------------+---------------+-----------------+ 862 | | 1. Asynchronous |1. Control |1. Network | 863 | | Message Proces-| Sessions | Topology | 864 | | -sing Rate | Capacity | Change | 865 | | | | Detection Time| 866 | | 2. Asynchronous |2. Forwarding | | 867 | | Message Proces-| Table |2. Exception | 868 | | -sing Time | Capacity | Handling | 869 | Operational| | | | 870 | | | |3. Denial of | 871 | | | | Service | 872 | | | | Handling | 873 | | | | | 874 | | | |4. Network Re- | 875 | | | | Provisioning | 876 | | | | Time | 877 | | | | | 878 +------------+-------------------+---------------+-----------------+ 879 | | | | | 880 | Tear Down | | |1. Controller | 881 | | | | Failover Time | 882 +------------+-------------------+---------------+-----------------+ 884 5. References 886 5.1. Normative References 888 [RFC7426] E. Haleplidis, K. Pentikousis, S. Denazis, J. Hadi Salim, 889 D. Meyer, O. Koufopavlou "Software-Defined Networking 890 (SDN): Layers and Architecture Terminology", RFC 7426, 891 January 2015. 893 [RFC4689] S. Poretsky, J. Perser, S. Erramilli, S. Khurana 894 "Terminology for Benchmarking Network-layer Traffic 895 Control Mechanisms", RFC 4689, October 2006. 897 [RFC2330] V. Paxson, G. Almes, J. Mahdavi, M. Mathis, 898 "Framework for IP Performance Metrics", RFC 2330, 899 May 1998. 901 [I-D.sdn-controller-benchmark-meth] Bhuvaneswaran.V, Anton Basil, 902 Mark.T, Vishwas Manral, Sarah Banks "Benchmarking 903 Methodology for SDN Controller Performance", 904 draft-ietf-bmwg-sdn-controller-benchmark-meth-04 905 (Work in progress), June 8, 2017 907 5.2. Informative References 909 [OpenFlow Switch Specification] ONF,"OpenFlow Switch Specification" 910 Version 1.4.0 (Wire Protocol 0x05), October 14, 2013. 912 6. IANA Considerations 914 This document does not have any IANA requests. 916 7. 
Security Considerations 918 Security issues are not discussed in this memo. 920 8. Acknowledgements 922 The authors would like to acknowledge Al Morton (AT&T) for the 923 significant contributions to the earlier versions of this document. 924 The authors would like to thank the following individuals for 925 providing their valuable comments to the earlier versions of this 926 document: Sandeep Gangadharan (HP), M. Georgescu (NAIST), Andrew 927 McGregor (Google), Scott Bradner (Harvard University), Jay Karthik 928 (Cisco), Ramakrishnan (Dell), Khasanov Boris (Huawei). 930 9. Authors' Addresses 932 Bhuvaneswaran Vengainathan 933 Veryx Technologies Inc. 934 1 International Plaza, Suite 550 935 Philadelphia 936 PA 19113 938 Email: bhuvaneswaran.vengainathan@veryxtech.com 940 Anton Basil 941 Veryx Technologies Inc. 942 1 International Plaza, Suite 550 943 Philadelphia 944 PA 19113 946 Email: anton.basil@veryxtech.com 948 Mark Tassinari 949 Hewlett-Packard, 950 8000 Foothills Blvd, 951 Roseville, CA 95747 953 Email: mark.tassinari@hpe.com 955 Vishwas Manral 956 Nano Sec,CA 958 Email: vishwas.manral@gmail.com 959 Sarah Banks 960 VSS Monitoring 962 Email: sbanks@encrypted.net