Internet-Draft                                Bhuvaneswaran Vengainathan
Network Working Group                                        Anton Basil
Intended Status: Informational                        Veryx Technologies
Expires: April 18, 2016                                   Mark Tassinari
                                                         Hewlett-Packard
                                                          Vishwas Manral
                                                              Ionos Corp
                                                             Sarah Banks
                                                          VSS Monitoring
                                                        October 19, 2015

        Terminology for Benchmarking SDN Controller Performance
           draft-ietf-bmwg-sdn-controller-benchmark-term-00
Abstract

   This document defines terminology for benchmarking an SDN
   controller's performance.  The terms provided in this document help
   to benchmark an SDN controller's performance independent of the
   controller's supported protocols and/or network services.  A
   mechanism for benchmarking the performance of SDN controllers is
   defined in the companion methodology document.  Together, these two
   documents provide a standard mechanism to measure and evaluate the
   performance of various controller implementations.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 18, 2016.
Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction
   2. Term Definitions
      2.1. SDN Terms
           2.1.1. SDN Node
           2.1.2. SDN Application
           2.1.3. Flow
           2.1.4. Northbound Interface
           2.1.5. Southbound Interface
           2.1.6. Controller Forwarding Table
           2.1.7. Proactive Flow Provisioning Mode
           2.1.8. Reactive Flow Provisioning Mode
           2.1.9. Path
           2.1.10. Standalone Mode
           2.1.11. Cluster/Redundancy Mode
           2.1.12. Asynchronous Message
           2.1.13. Test Traffic Generator
      2.2. Test Configuration/Setup Terms
           2.2.1. Number of SDN Nodes
           2.2.2. Test Iterations
           2.2.3. Test Duration
           2.2.4. Number of Cluster Nodes
      2.3. Benchmarking Terms
           2.3.1. Performance
                  2.3.1.1. Network Topology Discovery Time
                  2.3.1.2. Asynchronous Message Processing Time
                  2.3.1.3. Asynchronous Message Processing Rate
                  2.3.1.4. Reactive Path Provisioning Time
                  2.3.1.5. Proactive Path Provisioning Time
                  2.3.1.6. Reactive Path Provisioning Rate
                  2.3.1.7. Proactive Path Provisioning Rate
                  2.3.1.8. Network Topology Change Detection Time
           2.3.2. Scalability
                  2.3.2.1. Control Sessions Capacity
                  2.3.2.2. Network Discovery Size
                  2.3.2.3. Forwarding Table Capacity
           2.3.3. Security
                  2.3.3.1. Exception Handling
                  2.3.3.2. Denial of Service Handling
           2.3.4. Reliability
                  2.3.4.1. Controller Failover Time
                  2.3.4.2. Network Re-Provisioning Time
   3. Test Coverage
   4. References
      4.1. Normative References
      4.2. Informative References
   5. IANA Considerations
   6. Security Considerations
   7. Acknowledgements
   8. Authors' Addresses
1. Introduction

   Software Defined Networking (SDN) is a networking architecture in
   which network control is decoupled from the underlying forwarding
   function and is placed in a centralized location called the SDN
   controller.  The SDN controller abstracts the underlying network and
   offers a global view of the overall network to applications and
   business logic.  Thus, an SDN controller provides the flexibility to
   program, control, and manage network behaviour dynamically through
   standard interfaces.  Since the network controls are logically
   centralized, the need to benchmark SDN controller performance
   becomes significant.  This document defines terms to benchmark
   various controller designs for performance, scalability,
   reliability, and security, independent of northbound and southbound
   protocols.
Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.
2. Term Definitions

2.1. SDN Terms

2.1.1. SDN Node

   Definition:
   An SDN node is a simulated/emulated entity that forwards data in a
   software defined environment.

   Discussion:
   An SDN node can be an emulated/simulated switch, router, gateway, or
   any network service appliance that supports a standardized or
   proprietary programmable interface.

   Measurement Units:
   N/A

   See Also:
   None
2.1.2. SDN Application

   Definition:
   Any business logic that alters the network behaviour dynamically
   through the controller's northbound interface.

   Discussion:
   An SDN application can be any business application, cloud
   orchestration system, network services orchestrator, etc.

   Measurement Units:
   N/A

   See Also:
   None
2.1.3. Flow

   Definition:
   A flow is a uni-directional sequence of packets having common
   properties derived from the data contained in the packets.

   Discussion:
   A flow can be a set of packets having the same source address,
   destination address, source port, and destination port, or any
   combination of these.

   Measurement Units:
   N/A

   See Also:
   None
2.1.4. Northbound Interface

   Definition:
   The northbound interface is the application programming interface
   provided by the SDN controller for SDN services and applications to
   interact with the SDN controller.

   Discussion:
   The northbound interface allows SDN applications and orchestration
   systems to program the network and retrieve network information
   through the SDN controller.

   Measurement Units:
   N/A

   See Also:
   None
2.1.5. Southbound Interface

   Definition:
   The southbound interface is the application programming interface
   provided by the SDN controller to interact with the SDN nodes.

   Discussion:
   The southbound interface enables the controller to interact with
   the SDN nodes in the infrastructure to dynamically define the
   traffic forwarding behaviour.

   Measurement Units:
   N/A

   See Also:
   None
2.1.6. Controller Forwarding Table

   Definition:
   A controller forwarding table contains flow entries learned in one
   of two ways: first, entries can be learned from traffic received
   through the data plane; second, entries can be statically
   provisioned on the controller and distributed to devices via the
   southbound interface.

   Discussion:
   The controller forwarding table has an aging mechanism that is
   applied only to dynamically learned entries.

   Measurement Units:
   N/A

   See Also:
   None
2.1.7. Proactive Flow Provisioning Mode

   Definition:
   A mode in which the controller programs flows in SDN nodes based on
   the flow entries provisioned through the controller's northbound
   interface.

   Discussion:
   Orchestration systems and SDN applications can define the network
   forwarding behaviour by programming the controller using proactive
   flow provisioning.  The controller can then program the SDN nodes
   with the pre-provisioned entries.

   Measurement Units:
   N/A

   See Also:
   None
2.1.8. Reactive Flow Provisioning Mode

   Definition:
   A mode in which the controller programs flows in SDN nodes based on
   the traffic received from SDN nodes through the controller's
   southbound interface.

   Discussion:
   The SDN controller dynamically decides the forwarding behaviour
   based on the incoming traffic from the SDN nodes.  The controller
   then programs the SDN nodes using reactive flow provisioning.

   Measurement Units:
   N/A

   See Also:
   None
2.1.9. Path

   Definition:
   A path is a sequence of SDN nodes and links traversed by a flow.

   Discussion:
   As defined in RFC 2330, a path is a sequence of the form < h0, l1,
   h1, ..., ln, hn >, where n >= 0, h0 and hn are Hosts, h1 ... hn-1
   are SDN Nodes, and each li is a link between hi-1 and hi.  A pair
   < li, hi > is termed a 'hop'.  Note that a path is a unidirectional
   concept.

   Measurement Units:
   N/A

   See Also:
   None
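The RFC 2330 notation above can be made concrete with a short sketch (the
Python representation and the node/link names below are purely illustrative
assumptions, not part of this terminology):

```python
# Illustrative sketch of the RFC 2330 path notation < h0, l1, h1, ..., ln, hn >:
# a path is a list alternating hosts/SDN nodes with links, and each pair
# < li, hi > (link li leading into node hi) forms a hop.
def hops(path):
    """Return the list of (previous node, link, next node) hops in a path."""
    return [(path[i], path[i + 1], path[i + 2])
            for i in range(0, len(path) - 2, 2)]

# Hypothetical path: host h0 -> link l1 -> SDN node s1 -> link l2 -> host h2
print(hops(["h0", "l1", "s1", "l2", "h2"]))
```

A two-hop path therefore yields two (node, link, node) triples, and reversing
the list describes a different path, matching the unidirectional definition.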
2.1.10. Standalone Mode

   Definition:
   A single controller handles all control plane functionality without
   redundancy or the ability to provide high availability and/or
   automatic failover.

   Discussion:
   In standalone mode, one controller manages one or more network
   domains.

   Measurement Units:
   N/A

   See Also:
   None
2.1.11. Cluster/Redundancy Mode

   Definition:
   A group of two or more controllers handles all control plane
   functionality.

   Discussion:
   In cluster mode, multiple controllers are teamed together for the
   purpose of load sharing and/or high availability.  The controllers
   in the group may work in active/standby (master/slave) or
   active/active (equal) mode depending on the intended purpose.

   Measurement Units:
   N/A

   See Also:
   None
2.1.12. Asynchronous Message

   Definition:
   Any message that an SDN node generates for network events.

   Discussion:
   Control messages such as flow setup request and response messages
   are classified as asynchronous messages.  The controller has to
   return a response message.  Note that the SDN node will not be in
   blocking mode and continues to send/receive other control messages.

   Measurement Units:
   N/A

   See Also:
   None
2.1.13. Test Traffic Generator

   Definition:
   A test traffic generator is an entity that generates and receives
   network traffic.

   Discussion:
   A test traffic generator can be an entity that interfaces with the
   SDN nodes to send and receive real-time network traffic.

   Measurement Units:
   N/A

   See Also:
   None
2.2. Test Configuration/Setup Terms

2.2.1. Number of SDN Nodes

   Definition:
   The number of SDN nodes present in the defined test topology.

   Discussion:
   The SDN nodes defined in the test topology can be deployed using
   real hardware or emulated in hardware platforms.

   Measurement Units:
   N/A

   See Also:
   None
2.2.2. Test Iterations

   Definition:
   The number of times the test needs to be repeated.

   Discussion:
   The test needs to be repeated for multiple iterations to obtain a
   reliable metric.  This test SHOULD be performed for at least 10
   iterations to increase confidence in the measured results.

   Measurement Units:
   N/A

   See Also:
   None
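The iteration recommendation above can be sketched as follows; the
`run_once` callable and the canned values stand in for a single benchmark
trial and are hypothetical, used here only to show the aggregation:

```python
import statistics

def run_iterations(run_once, iterations=10):
    """Repeat a benchmark trial and summarize the per-iteration results.
    run_once() must return one metric value (e.g. a time in milliseconds)."""
    results = [run_once() for _ in range(iterations)]
    return {"mean": statistics.mean(results),
            "stdev": statistics.stdev(results),
            "results": results}

# Hypothetical trial returning canned values, for illustration only.
canned = iter([10.0, 11.0, 9.0, 10.0, 10.0, 12.0, 8.0, 10.0, 11.0, 9.0])
summary = run_iterations(lambda: next(canned), iterations=10)
print(summary["mean"])
```

Reporting both the mean and the standard deviation is one way to show how
much confidence the 10+ iterations actually bought.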
2.2.3. Test Duration

   Definition:
   The duration of the test trial for each iteration.

   Discussion:
   The test duration forms the basis of the stop criteria for
   benchmarking tests.  A test not completed within this time interval
   is considered incomplete.

   Measurement Units:
   seconds

   See Also:
   None
2.2.4. Number of Cluster Nodes

   Definition:
   The number of controllers present in the controller cluster.

   Discussion:
   This parameter is relevant when testing controller performance in
   clustering/teaming mode.  The number of nodes in the cluster MUST
   be greater than 1.

   Measurement Units:
   N/A

   See Also:
   None
2.3. Benchmarking Terms

   This section defines metrics for benchmarking the SDN controller.
   The procedure to measure the defined metrics is described in the
   accompanying methodology document.
2.3.1. Performance

2.3.1.1. Network Topology Discovery Time

   Definition:
   The time taken by the controller to discover the network topology
   (nodes and links).

   Discussion:
   Network topology discovery is key for the SDN controller to
   provision and manage the network, so it is important to measure how
   quickly the controller discovers the topology to learn the current
   network state.  This benchmark is obtained by presenting a network
   topology (tree, mesh, or linear) with a given number of nodes to
   the controller and waiting for the discovery process to complete.
   It is expected that the controller supports a network discovery
   mechanism and uses protocol messages for its discovery process.

   Measurement Units:
   milliseconds

   See Also:
   None
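One way to compute this benchmark offline is to scan timestamped discovery
events until the discovered node and link counts match the presented
topology.  The event capture format below is an assumption for
illustration; this document does not prescribe one:

```python
def discovery_time(start_ts, events, expected_nodes, expected_links):
    """events: iterable of (timestamp, kind) with kind 'node' or 'link'.
    Returns seconds from start_ts until the full topology is seen, or
    None if discovery never completes within the captured events."""
    nodes = links = 0
    for ts, kind in sorted(events):
        if kind == "node":
            nodes += 1
        elif kind == "link":
            links += 1
        if nodes >= expected_nodes and links >= expected_links:
            return ts - start_ts
    return None

# Hypothetical capture: 2 nodes and 1 link discovered after test start.
events = [(0.10, "node"), (0.15, "node"), (0.32, "link")]
print(discovery_time(0.0, events, expected_nodes=2, expected_links=1))  # 0.32
```

Returning None for incomplete discovery matches the test-duration stop
criteria in Section 2.2.3.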
2.3.1.2. Asynchronous Message Processing Time

   Definition:
   The time taken by the controller to process an asynchronous
   message.

   Discussion:
   For SDN to support dynamic network provisioning, it is important to
   measure how quickly the controller responds to an event triggered
   from the network.  The event could be any notification message
   generated by an SDN node upon the arrival of a new flow, a link
   going down, etc.  This benchmark is obtained by sending
   asynchronous messages from every connected SDN node, one at a time,
   for the defined test duration.  This test assumes that the
   controller will respond to the received asynchronous messages.

   Measurement Units:
   milliseconds

   See Also:
   None
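Given captured (sent, response) timestamp pairs for each asynchronous
message, the metric reduces to a mean of per-message deltas; the sample
values here are hypothetical:

```python
def processing_time_ms(samples):
    """samples: list of (sent_ts, response_ts) pairs in seconds.
    Returns the mean per-message processing time in milliseconds."""
    deltas = [(rx - tx) * 1000.0 for tx, rx in samples]
    return sum(deltas) / len(deltas)

# Hypothetical capture: three messages taking 4 ms, 6 ms, and 5 ms.
samples = [(0.000, 0.004), (1.000, 1.006), (2.000, 2.005)]
print(processing_time_ms(samples))  # ~5.0 ms for these samples
```

Averaging over all connected SDN nodes, one message at a time, follows the
measurement procedure sketched in the discussion above.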
2.3.1.3. Asynchronous Message Processing Rate

   Definition:
   The maximum number of asynchronous messages that a controller can
   process within the test duration.

   Discussion:
   As SDN assures flexible networks and agile provisioning, it is
   important to measure how many network events the controller can
   handle at a time.  This benchmark is obtained by sending
   asynchronous messages from every connected SDN node at full
   connection capacity for the given test duration.  This test assumes
   that the controller will respond to all the received asynchronous
   messages.

   Measurement Units:
   Messages processed per second.

   See Also:
   None
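Counting only the responses that arrive within the test duration and
dividing by that duration gives the rate; the arrival times below are
hypothetical:

```python
def processing_rate(response_timestamps, duration_s):
    """Messages processed per second: count responses that arrived within
    the test duration and divide by the duration."""
    processed = sum(1 for ts in response_timestamps if ts <= duration_s)
    return processed / duration_s

# Hypothetical response arrival times (seconds from test start), 2 s test;
# the response at 2.6 s falls outside the duration and is not counted.
print(processing_rate([0.2, 0.5, 0.9, 1.4, 1.9, 2.6], duration_s=2.0))  # 2.5
```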
2.3.1.4. Reactive Path Provisioning Time

   Definition:
   The time taken by the controller to set up a path reactively
   between source and destination nodes, expressed in milliseconds.

   Discussion:
   As SDN supports agile provisioning, it is important to measure how
   fast the controller provisions an end-to-end flow in the dataplane.
   The benchmark is obtained by sending traffic from a source endpoint
   to a destination endpoint and finding the time difference between
   the first and the last flow provisioning message exchanged between
   the controller and the SDN nodes for the traffic path.

   Measurement Units:
   milliseconds.

   See Also:
   None
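The "first to last provisioning message" difference described above can be
computed directly from a capture of message timestamps; the capture format
is an assumption:

```python
def path_provisioning_time_ms(message_timestamps):
    """Time difference between the first and the last flow provisioning
    message exchanged for the path, in milliseconds."""
    return (max(message_timestamps) - min(message_timestamps)) * 1000.0

# Hypothetical timestamps (seconds) of provisioning messages for one path.
print(path_provisioning_time_ms([10.000, 10.002, 10.007]))
```

The same computation applies to the proactive case in Section 2.3.1.5; only
the trigger (northbound provisioning instead of dataplane traffic) differs.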
2.3.1.5. Proactive Path Provisioning Time

   Definition:
   The time taken by the controller to set up a path proactively
   between source and destination nodes, expressed in milliseconds.

   Discussion:
   For SDN to support pre-provisioning of a traffic path from an
   application, it is important to measure how fast the controller
   provisions an end-to-end flow in the dataplane.  The benchmark is
   obtained by provisioning a flow on the controller's northbound
   interface for the traffic to reach from a source to a destination
   endpoint, and finding the time difference between the first and the
   last flow provisioning message exchanged between the controller and
   the SDN nodes for the traffic path.

   Measurement Units:
   milliseconds.

   See Also:
   None
2.3.1.6. Reactive Path Provisioning Rate

   Definition:
   The maximum number of independent paths a controller can
   concurrently establish between source and destination nodes
   reactively within the test duration, expressed in paths per second.

   Discussion:
   For SDN to support agile traffic forwarding, it is important to
   measure how many end-to-end flows the controller can set up in the
   dataplane.  This benchmark is obtained by sending traffic flows,
   each with a unique source and destination pair, from the source SDN
   node and determining the number of frames received at the
   destination SDN node.

   Measurement Units:
   Paths provisioned per second.

   See Also:
   None
2.3.1.7. Proactive Path Provisioning Rate

   Definition:
   The maximum number of independent paths a controller can
   concurrently establish between source and destination nodes
   proactively within the test duration, expressed in paths per
   second.

   Discussion:
   For SDN to support pre-provisioning of traffic paths for a larger
   network from the application, it is important to measure how many
   end-to-end flows the controller can set up in the dataplane.  This
   benchmark is obtained by sending traffic flows, each with a unique
   source and destination pair, from the source SDN node.  The flows
   are programmed on the controller's northbound interface for traffic
   to reach from each of the unique source and destination pairs, and
   the number of frames received at the destination SDN node is
   determined.

   Measurement Units:
   Paths provisioned per second.

   See Also:
   None
2.3.1.8. Network Topology Change Detection Time

   Definition:
   The amount of time taken by the controller to detect any changes in
   the network topology.

   Discussion:
   In order for the controller to support fast network failure
   recovery, it is critical to measure how fast the controller is able
   to detect any network-state change events.  This benchmark is
   obtained by triggering a topology change event and measuring the
   time the controller takes to detect it and initiate a topology
   re-discovery process.

   Measurement Units:
   milliseconds

   See Also:
   None
2.3.2. Scalability

2.3.2.1. Control Sessions Capacity

   Definition:
   The maximum number of control sessions the controller can maintain.

   Discussion:
   Measuring the controller's control sessions capacity is important
   for determining the controller's system and bandwidth resource
   requirements.  This benchmark is obtained by establishing a control
   session with the controller from each of the SDN nodes until
   session establishment fails.  The number of sessions that were
   successfully established provides the control sessions capacity.

   Measurement Units:
   N/A

   See Also:
   None
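The "establish until failure" procedure above amounts to a simple counting
loop; `open_session` is a hypothetical callback standing in for whatever
actually establishes the n-th control session against the device under
test:

```python
def control_sessions_capacity(open_session, max_attempts=1_000_000):
    """Open control sessions one at a time until one fails; the number of
    successfully established sessions is the capacity.
    open_session(n) -> bool attempts to establish the n-th session."""
    capacity = 0
    for n in range(1, max_attempts + 1):
        if not open_session(n):
            break
        capacity += 1
    return capacity

# Stand-in for a controller that accepts at most 250 control sessions.
print(control_sessions_capacity(lambda n: n <= 250))  # 250
```

The max_attempts guard is only there to keep the sketch from looping
forever if the controller never refuses a session.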
2.3.2.2. Network Discovery Size

   Definition:
   The network size (number of nodes, links, and hosts) that a
   controller can discover.

   Discussion:
   For optimal network planning, it is key to measure the maximum
   network size that the controller can discover.  This benchmark is
   obtained by presenting an initial set of SDN nodes for discovery to
   the controller.  Based on the initial discovery, the number of SDN
   nodes is increased or decreased to determine the maximum number of
   nodes that the controller can discover.

   Measurement Units:
   N/A

   See Also:
   None
2.3.2.3. Forwarding Table Capacity

   Definition:
   The maximum number of flow entries that a controller can manage in
   its forwarding table.

   Discussion:
   It is important to measure the capacity of the controller's
   forwarding table to determine the number of flows that the
   controller can forward without flooding or dropping.  This
   benchmark is obtained by continuously presenting the controller
   with new flow entries through reactive or proactive flow
   provisioning mode until the forwarding table becomes full.  The
   maximum number of flow entries that the controller can hold in its
   forwarding table provides the forwarding table capacity.

   Measurement Units:
   Maximum number of flow entries managed.

   See Also:
   None
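The fill-until-full procedure can be sketched the same way as the session
capacity test; `install_flow` is a hypothetical callback that presents the
i-th unique flow entry and reports whether the controller accepted it:

```python
def forwarding_table_capacity(install_flow, limit=10_000_000):
    """Keep presenting unique flow entries until the controller rejects
    one; install_flow(i) -> bool installs the i-th entry."""
    installed = 0
    while installed < limit and install_flow(installed + 1):
        installed += 1
    return installed

# Stand-in for a controller whose forwarding table holds 1000 entries.
print(forwarding_table_capacity(lambda i: i <= 1000))  # 1000
```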
2.3.3. Security

2.3.3.1. Exception Handling

   Definition:
   To determine the effect of handling error packets and notifications
   on performance tests.

   Discussion:
   This benchmark test is to be performed after obtaining the baseline
   results of the performance tests defined in Section 2.3.1.  This
   benchmark determines the deviation from the baseline performance
   due to the handling of error or failure messages from the connected
   SDN nodes.

   Measurement Units:
   N/A

   See Also:
   None
2.3.3.2. Denial of Service Handling

   Definition:
   To determine the effect of handling denial of service (DoS) attacks
   on performance and scalability tests.

   Discussion:
   This benchmark test is to be performed after obtaining the baseline
   results of the performance and scalability tests defined in
   Section 2.3.1 and Section 2.3.2.  This benchmark determines the
   deviation from the baseline performance due to the handling of
   denial of service attacks on the controller.

   Measurement Units:
   Deviation of baseline metrics while handling denial of service
   attacks.

   See Also:
   None
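The deviation reported for this benchmark (and for exception handling in
Section 2.3.3.1) is just the change of each metric relative to its
baseline; the sample numbers below are hypothetical:

```python
def baseline_deviation_pct(baseline, measured):
    """Deviation of a metric from its baseline, as a percentage.
    Positive values mean the metric degraded (grew) under attack."""
    return (measured - baseline) / baseline * 100.0

# e.g. a hypothetical topology discovery time rising from 100 ms under
# normal conditions to 150 ms while a DoS attack is in progress.
print(baseline_deviation_pct(100.0, 150.0))  # 50.0
```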
2.3.4. Reliability

2.3.4.1. Controller Failover Time

   Definition:
   The time taken to switch from the active controller to the backup
   controller when the controllers work in redundancy mode and the
   active controller fails.

   Discussion:
   This benchmark determines the impact on the provisioning of new
   flows when controllers are teamed and the active controller fails.

   Measurement Units:
   milliseconds.

   See Also:
   None
2.3.4.2. Network Re-Provisioning Time

   Definition:
   The time taken by the controller to re-route traffic when there is
   a failure in the existing traffic paths.

   Discussion:
   This benchmark determines the controller's re-provisioning ability
   upon network failures.  This benchmark test assumes the following:
      i.  The network topology supports a redundant path between the
          source and destination endpoints.
      ii. The controller does not pre-provision the redundant path.

   Measurement Units:
   milliseconds.

   See Also:
   None
3. Test Coverage

   +------------+-------------------+---------------+-----------------+
   |            | Speed             | Scalability   | Reliability     |
   +------------+-------------------+---------------+-----------------+
   |            | 1. Network        | 1. Network    |                 |
   |            |    Topology       |    Discovery  |                 |
   |            |    Discovery      |    Size       |                 |
   |            |                   |               |                 |
   |            | 2. Reactive Path  |               |                 |
   |            |    Provisioning   |               |                 |
   |            |    Time           |               |                 |
   |            |                   |               |                 |
   | Setup      | 3. Proactive Path |               |                 |
   |            |    Provisioning   |               |                 |
   |            |    Time           |               |                 |
   |            |                   |               |                 |
   |            | 4. Reactive Path  |               |                 |
   |            |    Provisioning   |               |                 |
   |            |    Rate           |               |                 |
   |            |                   |               |                 |
   |            | 5. Proactive Path |               |                 |
   |            |    Provisioning   |               |                 |
   |            |    Rate           |               |                 |
   +------------+-------------------+---------------+-----------------+
   |            | 1. Asynchronous   | 1. Control    | 1. Network      |
   |            |    Message        |    Sessions   |    Topology     |
   |            |    Processing     |    Capacity   |    Change       |
   |            |    Rate           |               |    Detection    |
   |            |                   | 2. Forwarding |    Time         |
   |            | 2. Asynchronous   |    Table      |                 |
   | Operational|    Message        |    Capacity   | 2. Exception    |
   |            |    Processing     |               |    Handling     |
   |            |    Time           |               |                 |
   |            |                   |               | 3. Denial of    |
   |            |                   |               |    Service      |
   |            |                   |               |    Handling     |
   |            |                   |               |                 |
   |            |                   |               | 4. Network Re-  |
   |            |                   |               |    Provisioning |
   |            |                   |               |    Time         |
   +------------+-------------------+---------------+-----------------+
   |            |                   |               |                 |
   | Tear Down  |                   |               | 1. Controller   |
   |            |                   |               |    Failover Time|
   +------------+-------------------+---------------+-----------------+
4. References

4.1. Normative References

   [RFC2119]  S. Bradner, "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2330]  V. Paxson, G. Almes, J. Mahdavi, M. Mathis, "Framework
              for IP Performance Metrics", RFC 2330, May 1998.

   [RFC6241]  R. Enns, M. Bjorklund, J. Schoenwaelder, A. Bierman,
              "Network Configuration Protocol (NETCONF)", RFC 6241,
              June 2011.

   [RFC6020]  M. Bjorklund, "YANG - A Data Modeling Language for the
              Network Configuration Protocol (NETCONF)", RFC 6020,
              October 2010.

   [RFC5440]  JP. Vasseur, JL. Le Roux, "Path Computation Element
              (PCE) Communication Protocol (PCEP)", RFC 5440,
              March 2009.

   [OpenFlow Switch Specification] ONF, "OpenFlow Switch
              Specification", Version 1.4.0 (Wire Protocol 0x05),
              October 14, 2013.

   [I-D.sdn-controller-benchmark-meth] Bhuvaneswaran. V, Anton Basil,
              Mark. T, Vishwas Manral, Sarah Banks, "Benchmarking
              Methodology for SDN Controller Performance",
              draft-ietf-bmwg-sdn-controller-benchmark-meth-00
              (Work in progress), October 19, 2015.
4.2. Informative References

   [OpenContrail] Ankur Singla, Bruno Rijsman, "OpenContrail
              Architecture Documentation",
              http://opencontrail.org/opencontrail-architecture-documentation

   [OpenDaylight] OpenDaylight Controller: Architectural Framework,
              https://wiki.opendaylight.org/view/OpenDaylight_Controller
5. IANA Considerations

   This document does not have any IANA requests.

6. Security Considerations

   Security issues are not discussed in this memo.

7. Acknowledgements

   The authors would like to acknowledge Sandeep Gangadharan (HP) for
   his significant contributions to the earlier versions of this
   document.  The authors would like to thank the following individuals
   for providing their valuable comments on the earlier versions of
   this document: Al Morton (AT&T), M. Georgescu (NAIST), Andrew
   McGregor (Google), Scott Bradner (Harvard University), Jay Karthik
   (Cisco), Ramakrishnan (Dell), and Khasanov Boris (Huawei).
8. Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA 19113

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett-Packard
   8000 Foothills Blvd
   Roseville, CA 95747

   Email: mark.tassinari@hp.com

   Vishwas Manral
   Ionos Corp
   4100 Moorpark Ave
   San Jose, CA

   Email: vishwas@ionosnetworks.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive
   Sunnyvale, CA

   Email: sbanks@encrypted.net