2 Home Networking K. Jin
3 Internet-Draft Ecole Polytechnique / Cisco
4 Intended status: Informational Systems
5 Expires: January 7, 2016 P. Pfister
6 Cisco Systems
7 J. Yi
8 LIX, Ecole Polytechnique
9 July 6, 2015
11 Experience and Evaluation of the Distributed Node Consensus Protocol
12 draft-jin-homenet-dncp-experience-00
14 Abstract
16 The Distributed Node Consensus Protocol (DNCP) is a generic state
17 synchronization protocol that offers data synchronization and dynamic
18 network topology discovery within a network. This document reports
19 experience with the DNCP, which includes its implementation status
20 and performance evaluation in a simulated environment.
22 Status of this Memo
24 This Internet-Draft is submitted in full conformance with the
25 provisions of BCP 78 and BCP 79.
27 Internet-Drafts are working documents of the Internet Engineering
28 Task Force (IETF). Note that other groups may also distribute
29 working documents as Internet-Drafts. The list of current Internet-
30 Drafts is at http://datatracker.ietf.org/drafts/current/.
32 Internet-Drafts are draft documents valid for a maximum of six months
33 and may be updated, replaced, or obsoleted by other documents at any
34 time. It is inappropriate to use Internet-Drafts as reference
35 material or to cite them other than as "work in progress."
37 This Internet-Draft will expire on January 7, 2016.
39 Copyright Notice
41 Copyright (c) 2015 IETF Trust and the persons identified as the
42 document authors. All rights reserved.
44 This document is subject to BCP 78 and the IETF Trust's Legal
45 Provisions Relating to IETF Documents
46 (http://trustee.ietf.org/license-info) in effect on the date of
47 publication of this document. Please review these documents
48 carefully, as they describe your rights and restrictions with respect
49 to this document. Code Components extracted from this document must
50 include Simplified BSD License text as described in Section 4.e of
51 the Trust Legal Provisions and are provided without warranty as
52 described in the Simplified BSD License.
54 Table of Contents
56 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3
57 2. Implementation Status . . . . . . . . . . . . . . . . . . . . 3
58 2.1. hnetd Implementation (libdncp) . . . . . . . . . . . . . . 3
59 2.2. libdncp2 Implementation . . . . . . . . . . . . . . . . . 4
60 2.3. shncpd Implementation . . . . . . . . . . . . . . . . . . 4
61 3. Experimental Setup . . . . . . . . . . . . . . . . . . . . . . 5
62 3.1. Simulation Environment . . . . . . . . . . . . . . . . . . 5
63 3.2. Performance Metrics . . . . . . . . . . . . . . . . . . . 7
64     3.3.  Chosen Topologies  . . . . . . . . . . . . . . . . . . .   7
65 3.3.1. Single Link Topology . . . . . . . . . . . . . . . . . 7
66 3.3.2. String Topology . . . . . . . . . . . . . . . . . . . 8
67 3.3.3. Full Mesh Topology . . . . . . . . . . . . . . . . . . 8
68 3.3.4. Tree Topology . . . . . . . . . . . . . . . . . . . . 9
69 3.3.5. Double Tree Topology . . . . . . . . . . . . . . . . . 9
70 4. DNCP Performance Evaluation . . . . . . . . . . . . . . . . . 9
71 4.1. Results for the Single Link Topology . . . . . . . . . . . 10
72 4.2. Results for the String Topology . . . . . . . . . . . . . 11
73     4.3.  Results for the Full Mesh Topology . . . . . . . . . . .  12
74     4.4.  Results for the Tree Topology  . . . . . . . . . . . . .  13
75     4.5.  Results for the Double-Tree Topology . . . . . . . . . .  13
76 5. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 14
77 6. Security Considerations . . . . . . . . . . . . . . . . . . . 15
78 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 15
79 8. Informative References . . . . . . . . . . . . . . . . . . . . 15
80 Appendix A. Acknowledgments . . . . . . . . . . . . . . . . . . . 16
81 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 16
83 1. Introduction
85 The Distributed Node Consensus Protocol (DNCP)
86 [I-D.ietf-homenet-dncp] is a protocol providing node data
87 synchronization and dynamic network topology discovery within a
88 network. At the time of writing, DNCP is in the standardization
89 process of the IETF Homenet working group.
91 In DNCP, the information of a node and its view of the network is
92 encoded in a set of TLV (Type-Length-Value) tuples. By publishing
93 and exchanging the TLVs with its neighbors, each node in the network
94 eventually receives the set of TLVs published by every other node
95 within the network (in which case, the network is converged). The
96 Trickle algorithm [RFC6206] is used, instead of periodic signaling,
97 to reduce the overhead of synchronization. DNCP also provides an
98 optional "keep alive" message mechanism to detect when a neighbor is
99 no longer reachable.
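For readers unfamiliar with Trickle, its timer logic can be sketched as
follows. This is a simplified illustration of RFC 6206 (the parameter
names Imin, Imax and k follow that RFC); it is an assumption-laden
sketch for intuition, not DNCP's or libdncp's actual implementation:

```python
import random

class Trickle:
    """Minimal sketch of the Trickle timer (RFC 6206)."""

    def __init__(self, imin=0.2, imax_doublings=6, k=1):
        self.imin = imin                          # minimum interval size
        self.imax = imin * 2 ** imax_doublings    # maximum interval size
        self.k = k                                # redundancy constant
        self.i = imin                             # current interval size
        self.c = 0                                # consistency counter
        self.transmissions = 0

    def start_interval(self):
        self.c = 0
        # Pick the transmission point t in [I/2, I).
        self.t = random.uniform(self.i / 2, self.i)

    def interval_end(self):
        # Double the interval, capped at Imax, then restart.
        self.i = min(self.i * 2, self.imax)
        self.start_interval()

    def hear_consistent(self):
        self.c += 1

    def hear_inconsistent(self):
        # Shrink back to the minimum interval and restart.
        if self.i > self.imin:
            self.i = self.imin
            self.start_interval()

    def at_t(self):
        # Transmit only if fewer than k consistent messages were heard.
        if self.c < self.k:
            self.transmissions += 1

trickle = Trickle()
trickle.start_interval()
for _ in range(8):
    trickle.interval_end()    # nothing inconsistent heard: interval doubles
print(trickle.i == trickle.imax)  # True: interval has reached Imax
```

This captures why a quiet network becomes nearly silent (intervals grow
to Imax) while any inconsistency resets the timer to Imin, producing
the burst of traffic discussed in the evaluation below.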
101 As a generic protocol, DNCP can not only be applied in home networks,
102 but also in other networks that require node data synchronization
103 (such as configuration profiles, network topology, services, etc.).
104 Therefore, DNCP leaves some parameters to be specified by DNCP
105 profiles, which are actual implementable instances of DNCP. Nodes
106 implementing the same DNCP profile, and operating within the same
107 DNCP network, are able to share TLV tuples, discover the network
108 topology, and auto-detect arrival and departure of other nodes.
110 This document presents experience with, and a performance evaluation
111 of, the reference DNCP implementation in a simulated environment using
112 the Network Simulator 3 (NS3), a discrete event simulator widely used
113 in network research and recognized by the scientific community.
115 Note that for the purpose of this first study, DNCP was evaluated in
116 various, admittedly unrealistic and rather extreme, network topologies,
117 with the intent of finding the limits of the protocol.
119 2. Implementation Status
121 As of July 2015, there are two publicly known implementations of DNCP,
122 one of which was recently modified.
124 2.1. hnetd Implementation (libdncp)
126 'hnetd' is an open source implementation of the Homenet network
127 stack. As an implementation of the Home Network Control Protocol -
128 HNCP [I-D.ietf-homenet-hncp], it includes the DNCP, the prefix
129 assignment algorithm [I-D.ietf-homenet-prefix-assignment], as well as
130 other elements specific to Homenet. hnetd is available as a package
131 to be installed on OpenWrt, the platform it is best suited
132 for (hnetd also runs, to a lesser extent, on standard Linux). The code
133 is available on github [1] and is published under the GPLv2 license
134 (libdncp is available in the 'libdncp' git branch).
136 hnetd is based on a lightweight toolkit library containing useful
137 structures (e.g., lists, trees), functions (e.g., md5), as well as a
138 small event loop, which is widely used in OpenWrt. Thanks to the
139 quality of the code, libdncp, a separately buildable library
140 implementing the DNCP functions, was easily extracted from hnetd for the
141 purpose of this work.
143 hnetd, DNCP included, consists of 15651 lines of code (18220 when
144 including test files). The binary weighs 576KB when compiled for
145 Debian x86_64 with no optimization and 727KB when compiled for
146 OpenWrt MIPS. libdncp2 consists of roughly 2300 lines of code (2590
147 when including the security option). It weighs 193KB when compiled
148 with no optimization for Debian x86_64 and 192KB when compiled for
149 OpenWrt MIPS.
151 2.2. libdncp2 Implementation
153 As a result of the interest expressed in using DNCP outside
154 of Homenet (including this study), the DNCP code within hnetd was
155 partly rewritten and reorganized into a more independent
156 implementation of DNCP, with less coupling to HNCP
157 [I-D.ietf-homenet-hncp]. libdncp2 moves the DNCP profile specifics
158 from compilation time to run time. It is published under the same
159 license as hnetd and is now part of the mainline branch [1].
161 libdncp2 provides some improvements with respect to its predecessor
162 libdncp. For the purpose of this document, libdncp was used, but we
163 plan to use libdncp2 instead in the next iterations of this document.
165 2.3. shncpd Implementation
167 shncpd is an open source implementation of DNCP developed by Juliusz
168 Chroboczek. It uses the HNCP [I-D.ietf-homenet-hncp] profile. The
169 source code is available on github [2]. At the time of publication of
170 this document, shncpd implements:
172 o The HNCP profile of DNCP.
174 o Parsing of a significant subset of HNCP.
176 o Address allocation (not prefix allocation).
178 3. Experimental Setup
180 This section describes the environment, the parameters, and the
181 topologies used for measuring DNCP performance.
183 3.1. Simulation Environment
185 For the purpose of this work, the DNCP part (libdncp) of 'hnetd'
186 introduced in Section 2.1 was modified in order to provide a
187 statically linkable library containing the DNCP implementation. To
188 evaluate the performance of DNCP in large and complex networks, NS3
189 is employed as the simulation platform.
191 As shown in Figure 1, all the application-level actions (processing
192 packets, publishing TLVs...) are performed inside the libdncp
193 implementation. The packets are sent to or received from the lower
194 layers in ns3 through the redefined socket API. UDP is used for
195 layer 4 and IPv6 for layer 3. NS3 implements different types of
196 links at layers 1 and 2. For these measurements, the so-called
197 'CSMA' link type is used. The NS3 CSMA link is designed in the
198 spirit of Ethernet but implements a fail-proof carrier sense used for
199 collision avoidance. For this first study, where measuring DNCP link
200 usage was desired, links of virtually infinite throughput are used.
202 +--------------------------------------+
203 | dncp implementation (Libdncp) |
204 +--------------------------------------+
205 +--------------socket API--------------+
206 | +---------------------+ |
207 | | L4 (UDP) | |
208 | +---------------------+ |
209 | |
210 | +---------------------+ |
211 | | L3 (Ipv6) | NS3 |
212 | +---------------------+ |
213 | |
214 | +---------------------+ |
215 | |L1 and L2 (CsmaModel)| |
216 | +---------------------+ |
217 +--------------------------------------+
219 Figure 1: libdncp over NS3 network layering model
221 Listed below are several attributes of the CSMA model:
223 MTU: The link layer maximum transmission unit, set to 1500.
225 Encapsulation Mode: Type of link layer encapsulation. "Dix" mode
226 was used, which models an Ethernet II layer.
228 TxQueue: Type of the transmit queue used by the device. NS3
229 provides "Codel queue", "Drop tail" and "RED" (random early
230 detection) queues. "Drop tail" queue was used with a size of 100
231 packets.
233 Inter-frame Gap: The pause between two frames, set to 0.
235 Listed below are several attributes of the CSMA channel:
237 Data Rate: Physical layer bit rate, enforcing the time it takes for
238 a frame to be transmitted. It is set to 1Gbps, which is
239 significantly greater than what DNCP is expected to use.
241 Delay: Signal propagation time within the medium. It was set to 1
242 microsecond.
244 Assuming a constant frame size, the theoretical throughput of the
245 medium is given by the formula FrameSize/(FrameSize/DataRate +
246 Delay). For a frame size of 1500 bytes, the throughput is 923Mbps.
247 For a FrameSize of 100 bytes, the link throughput is 444Mbps.
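The two quoted figures can be checked numerically against the formula
above; the helper below is an illustrative sketch, not part of the
study's tooling:

```python
def throughput_mbps(frame_bytes, data_rate_bps=1e9, delay_s=1e-6):
    """Effective channel throughput: each frame takes
    FrameSize/DataRate to serialize plus the propagation Delay."""
    frame_bits = frame_bytes * 8
    return frame_bits / (frame_bits / data_rate_bps + delay_s) / 1e6

print(round(throughput_mbps(1500)))  # 923 (Mbps) for 1500-byte frames
print(round(throughput_mbps(100)))   # 444 (Mbps) for 100-byte frames
```

Both values match the throughputs stated in the text, confirming that
the 1 microsecond delay, not the 1Gbps bit rate, dominates for small
frames.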
249 Running DNCP in NS3 requires libdncp to be integrated and built with
250 NS3. DNCP runs on an event loop managed by libubox, which therefore
251 specifies how to set timers and listen to file descriptors.
252 Integration was quite straightforward. Event-loop and system calls
253 were identified and replaced with their NS3 equivalents. In addition,
254 an NS3 application called "DncpApplication" was created; this
255 application can be installed on a node and can be started and
256 stopped at a given time. Once the application is launched, DNCP
257 begins to run on that node by calling functions of the libdncp
258 static library from inside the NS3 application. It is expected that
259 integration with libdncp2 will be even simpler, as this new library
260 defines the DNCP profile at run time instead of compilation
261 time.
263 Running experiments in simulated environments offers multiple
264 advantages, such as the ability to run long-lived scenarios in a
265 short period of time, to simulate networks of hundreds of nodes
266 without requiring lots of resources, or to isolate tested components
267 from external interference. On the other hand, NS3 executes program
268 steps virtually instantaneously; it is therefore hard to take into
269 account hardware speed when measuring time-related performance
270 metrics. For these first experiments, virtually perfect links with
271 no delay were used, and a processing delay of 0.5ms for each received
272 packet was introduced in order to simulate the packet processing
273 time.
275 3.2. Performance Metrics
277 The goal of this study is to measure the performance of DNCP in some
278 extreme scenarios, and to see whether, and how fast, it converges.
279 Therefore, the following three performance metrics were observed:
281 Convergence time: The time it took for the network to converge.
283 Overall traffic sent: The amount of data that was sent on the links
284 before convergence.
286 Average traffic sent per node: The overall traffic sent divided by
287 the number of nodes.
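All three metrics can be derived from a per-packet trace; the helper
below is a hedged sketch (the trace format and the equation of
convergence time with the last pre-convergence packet are assumptions
of this sketch, not the actual NS3 instrumentation):

```python
def dncp_metrics(trace, n_nodes):
    """Compute (convergence time, overall traffic, traffic per node)
    from a trace of (time_s, sender, n_bytes) tuples covering every
    packet sent before the network converged."""
    convergence_time = max(t for t, _, _ in trace)   # last packet time
    overall_traffic = sum(b for _, _, b in trace)    # bytes on all links
    return convergence_time, overall_traffic, overall_traffic / n_nodes

# Tiny illustrative trace: 3 nodes, 4 packets.
trace = [(0.1, "n1", 120), (0.4, "n2", 80),
         (0.9, "n3", 200), (1.84, "n1", 100)]
print(dncp_metrics(trace, 3))
```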
289 3.3. Chosen Topologies
291 This section describes the different topologies used in this study.
292 The main criteria for selecting a topology were that it should be:
294 o Easily described and generated as a function of the number of
295 nodes (called N).
297 o Deterministic.
299 o Representative of different situations, testing different
300 scalability properties.
302 The rest of this section describes the five different topologies:
304 o Single link
306 o String
308 o Full mesh
310 o Tree topology
312 o Double tree topology
314 3.3.1. Single Link Topology
316 The single link topology puts all the nodes on the same link. Each
317 node has a single DNCP End-Point with N-1 neighbors. Such a topology
318 is well suited for evaluating DNCP scalability capabilities in terms
319 of the number of neighbors on a given link.
321 n1 n2 n3 n4
322 | | | |
323 +--------+--------+--------+
325 Figure 2: The single link topology for N = 4
327 3.3.2. String Topology
329 The string topology chains all nodes one after the other. Each node
330 has two DNCP End-Points with one neighbor on each (except for the two
331 extremities). Such a topology is well suited for evaluating the
332 convergence time as a function of the network diameter, as well as
333 the scalability of DNCP in terms of the overall number of nodes.
335 +-------+ +-------+
336 | | | | |
337 n1 n2 n3 n4 n5 n6
338 | | | | | |
339 +-------+ +-------+ +-------+
341 Figure 3: The string topology for N = 6
343 3.3.3. Full Mesh Topology
345 The mesh topology connects all nodes with distinct links. Each node
346 has N-1 DNCP End-Points with one neighbor on each. Such a topology
347 is well suited for evaluating DNCP scalability capabilities in terms
348 of the number of nodes and end-points.
350 n1----n2--+
351 | \ | |
352 | \ | |
353 | \ | |
354 | \| |
355 n3----n4 |
356 | |
357 +---------+
359 Figure 4: The full mesh topology for N = 4
361 3.3.4. Tree Topology
363 The tree topology forms a typical binary tree. Node 'i' is connected
364 with nodes '2*i' and '2*i + 1', unless those numbers are greater than
365 N. In such a topology, internal nodes other than the root have three
366 DNCP End-Points with one neighbor on each. This topology offers a
367 more realistic equilibrium between the diameter and the number of
368 nodes.
369 n1
370 / \
371 n2 n3
372 / | |
373 / | |
374 n4 n5 n6
376 Figure 5: The tree topology for N = 6
378 3.3.5. Double Tree Topology
380 The double tree topology is identical to the binary tree, but each
381 node is paired with a redundancy node. In such a topology, internal
382 nodes other than the two root nodes have six DNCP End-Points with one
383 neighbor on each. This topology also offers a more realistic
384 trade-off between the network diameter and the number of nodes, while
385 adding
385 redundancy and loops.
387 For example, for N = 9:
389 o n1 has point-to-point links with n3, n4, n5 and n6.
391 o n2 has point-to-point links with n3, n4, n5 and n6.
393 o n3 has additional point-to-point links with n7, n8 and n9.
395 o n4 has additional point-to-point links with n7, n8 and n9.
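Per the selection criteria, each point-to-point topology can be
generated deterministically from N. The sketch below follows the
draft's own definitions (function names are ours; the single link
topology, which is one shared medium rather than point-to-point links,
and the double tree are omitted for brevity):

```python
def string_links(n):
    # Chain: node i is linked to node i+1.
    return [(i, i + 1) for i in range(1, n)]

def full_mesh_links(n):
    # One distinct link per node pair: N(N-1)/2 links in total.
    return [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]

def tree_links(n):
    # Binary tree: node i is linked to nodes 2*i and 2*i+1 when <= N.
    return [(i, c) for i in range(1, n + 1)
            for c in (2 * i, 2 * i + 1) if c <= n]

print(len(full_mesh_links(4)))  # 6 links, i.e. N(N-1)/2 for N = 4
print(tree_links(6))            # matches Figure 5: n1-n2, n1-n3, ...
```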
397 4. DNCP Performance Evaluation
399 This section provides different performance metrics for different
400 network topologies of different sizes. Each value is the average
401 over 10 simulations. Unless stated otherwise, the 10 measurements
402 were consistently close to each other (small standard deviation).
404 Each simulation reflects the same scenario. All DNCP instances are
405 simultaneously initialized. The simulation is then run until the
406 network converges, and performance metrics introduced in Section 3.2
407 are computed.
409 Important: Note that, as DNCP uses Trickle, it is expected to be very
410 verbose in case of dramatic changes, but as a trade-off should
411 provide good convergence times and low verbosity in the absence of
412 changes. An instantaneous and complete bootstrap of the network,
413 which is the present scenario, is a particularly extreme state
414 change. This scenario was selected as a worst case to see whether
415 DNCP converges well, or at all. In that respect, DNCP behaved well, but
416 as expected, the overall amount of generated traffic sometimes
417 appeared to be significant.
419 4.1. Results for the Single Link Topology
421 +-----------------------+----------+----------+----------+----------+
422 | Number of nodes | 10 nodes | 20 nodes | 30 nodes | 40 nodes |
423 +-----------------------+----------+----------+----------+----------+
424 | Convergence time | 1.84s | 3.09s | *4.43s | 5.14s |
425 | ---- | ---- | ---- | ---- | ---- |
426 | Traffic sent | 85.3KB | 604.7KB | 2.3MB | 5.4MB |
427 | ---- | ---- | ---- | ---- | ---- |
428 | Traffic sent per node | 8.5KB | 30.2KB | 79.6KB | 140.7KB |
429 +-----------------------+----------+----------+----------+----------+
431 Table 1: Single Link Topology (1)
433 +-----------------------+----------+----------+----------+----------+
434 | Number of nodes | 50 nodes | 60 nodes | 70 nodes | 80 nodes |
435 +-----------------------+----------+----------+----------+----------+
436 | Convergence time | 6.53s | *8.61s | 11.57s | 14.05s |
437 | ---- | ---- | ---- | ---- | ---- |
438 | Traffic sent | 11.9MB | 23.7MB | 51.7MB | 88.1MB |
439 | ---- | ---- | ---- | ---- | ---- |
440 | Traffic sent per node | 245.KB | 404.8KB | 757.2KB | 1.1MB |
441 +-----------------------+----------+----------+----------+----------+
443 *: the average value was calculated over the results of 9 experiments
444 because one of the values was significantly greater.
446 Table 2: Single Link Topology (2)
448 Note that two anomalies were observed during the simulations. One
449 happened in one experiment among the 10 that we ran for the 30-node
450 network. The network first converged at 4.016s, which is very close
451 to the average convergence time, but at 25.949s this converged state
452 was broken, and the network finally reconverged at 26.12s. Similarly,
453 for the 60-node network, it first converged at 7.081s, then got
454 disturbed at 25.822s and converged again at 26.303s.
456 Although the convergence time seems to grow linearly with the number
457 of nodes, the overall traffic sent before convergence increases
458 dramatically. This is caused by the fact that not only does each node
459 synchronize with every other node on the link, but the published data
460 also grows as nodes discover their neighbors. The problem is
461 explained in [I-D.ietf-homenet-dncp] Section 6.2 (Support For Dense
462 Broadcast Links), where an optional solution is also proposed, but
463 that option *was not* implemented in libdncp, which explains the poor
464 performance.
466 4.2. Results for the String Topology
468 +-----------------------+----------+----------+----------+----------+
469 | Number of nodes | 10 nodes | 20 nodes | 30 nodes | 40 nodes |
470 +-----------------------+----------+----------+----------+----------+
471 | Convergence time | 1.84s | 3.65s | 5.24s | 7.09s |
472 | ---- | ---- | ---- | ---- | ---- |
473 | Traffic sent | 51.5KB | 243.4KB | 605KB | 1.2MB |
474 | ---- | ---- | ---- | ---- | ---- |
475 | Traffic sent per node | 5.1KB | 12.2KB | 20.1KB | 30.9KB |
476 +-----------------------+----------+----------+----------+----------+
478 Table 3: String topology (1)
480 +-----------------------+----------+----------+----------+----------+
481 | Number of nodes | 50 nodes | 60 nodes | 70 nodes | 80 nodes |
482 +-----------------------+----------+----------+----------+----------+
483 | Convergence time | 8.79s | 11.11s | 12.87s | 15.03s |
484 | ---- | ---- | ---- | ---- | ---- |
485 | Traffic sent | 2MB | 3MB | 4.1MB | 5.6MB |
486 | ---- | ---- | ---- | ---- | ---- |
487 | Traffic sent per node | 40.5KB | 50.4KB | 59.2KB | 70.1KB |
488 +-----------------------+----------+----------+----------+----------+
490 Table 4: String topology (2)
492 These results show that in a linear network, convergence time is
493 linear in the number of chained nodes, but the overall transmitted
494 traffic is not. This is easily explained: the convergence time
495 reflects the time for both extremities to discover each other
496 (updates have to traverse the whole string), but in the meantime,
497 other updates are sent. The slope of the convergence time line is
498 about 0.18 seconds per node, which is quite fast, but also comes at the
499 cost of transmitting multiple updates before all nodes are
500 discovered.
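The quoted slope can be recovered from Tables 3 and 4 with an ordinary
least-squares fit; the snippet below is a sanity check on the reported
numbers, not part of the study's tooling:

```python
# Convergence times from Tables 3 and 4 (string topology).
sizes = [10, 20, 30, 40, 50, 60, 70, 80]
times = [1.84, 3.65, 5.24, 7.09, 8.79, 11.11, 12.87, 15.03]

mean_x = sum(sizes) / len(sizes)
mean_y = sum(times) / len(times)
# Least-squares slope: sum((x - mx)(y - my)) / sum((x - mx)^2)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, times))
         / sum((x - mean_x) ** 2 for x in sizes))
print(round(slope, 3))  # ~0.188 seconds of convergence time per node
```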
502 4.3. Results for the Full Mesh Topology
504 +-----------------------+----------+----------+----------+----------+
505 | Number of nodes | 10 nodes | 20 nodes | 30 nodes | 40 nodes |
506 +-----------------------+----------+----------+----------+----------+
507 | Convergence time | 1.71s | 3.2s | 4.83s | *6.19s |
508 | ---- | ---- | ---- | ---- | ---- |
509 | Traffic sent | 202.7KB | 1.6MB | 6.6MB | 18.1MB |
510 | ---- | ---- | ---- | ---- | ---- |
511 | Traffic sent per node | 20.3KB | 83.5KB | 222.1KB | 453.8KB |
512 +-----------------------+----------+----------+----------+----------+
514 *: the average value was calculated over the results of 9 experiments
515 because one of the values was significantly greater.
517 Table 5: Full mesh topology (1)
519 +-----------------------+----------+----------+----------+----------+
520 | Number of nodes | 50 nodes | 60 nodes | 70 nodes | 80 nodes |
521 +-----------------------+----------+----------+----------+----------+
522 | Convergence time | 10.64s | 13.02s | 15.33s | 17.93s |
523 | ---- | ---- | ---- | ---- | ---- |
524 | Traffic sent | 49.1MB | 95.8MB | 167.4MB | 271.9MB |
525 | ---- | ---- | ---- | ---- | ---- |
526 | Traffic sent per node | 983.1KB | 1.6MB | 2.4MB | 3.4MB |
527 +-----------------------+----------+----------+----------+----------+
529 Table 6: Full mesh topology (2)
531 An anomaly similar to the one observed with the single link
532 topology occurred. Although the convergence time is low, the
533 amount of transmitted traffic is very high compared to other
534 topologies. It can be explained by the high number of distinct links
535 (equal to N(N-1)/2): Trickle has to converge on each individual link.
536 Unlike the single link topology, this situation is harder to detect
537 and resolve.
539 4.4. Results for the Tree Topology
541 +-----------------------+----------+----------+----------+----------+
542 | Number of nodes | 10 nodes | 20 nodes | 30 nodes | 40 nodes |
543 +-----------------------+----------+----------+----------+----------+
544 | Convergence time | 1.16s | 1.57s | 1.86s | 2s |
545 | ---- | ---- | ---- | ---- | ---- |
546 | Traffic sent | 40.7KB | 166.7KB | 374KB | 644.5KB |
547 | ---- | ---- | ---- | ---- | ---- |
548 | Traffic sent per node | 4.1KB | 8.3KB | 12.4KB | 16.1KB |
549 +-----------------------+----------+----------+----------+----------+
551 Table 7: Tree topology (1)
553 +-----------------------+----------+----------+----------+----------+
554 | Number of nodes | 50 nodes | 60 nodes | 70 nodes | 80 nodes |
555 +-----------------------+----------+----------+----------+----------+
556 | Convergence time | 2.33s | 2.42s | 2.56s | 2.6s |
557 | ---- | ---- | ---- | ---- | ---- |
558 | Traffic sent | 1MB | 1.3MB | 1.9MB | 2.4MB |
559 | ---- | ---- | ---- | ---- | ---- |
560 | Traffic sent per node | 20.2KB | 22.8KB | 26.7KB | 29.9KB |
561 +-----------------------+----------+----------+----------+----------+
563 Table 8: Tree topology (2)
565 As expected in a tree topology, the convergence time is sub-linear.
566 We also observe that the overall amount of traffic (and per node) is
567 relatively low compared to other topologies. This is quite
568 satisfying, as the tree and double-tree topologies are the most
569 realistic ones.
571 4.5. Results for the Double-Tree Topology
572 +-----------------------+----------+----------+----------+----------+
573 | Number of nodes | 10 nodes | 20 nodes | 30 nodes | 40 nodes |
574 +-----------------------+----------+----------+----------+----------+
575 | Convergence time | 1.04s | 1.44s | 1.5s | 1.7s |
576 | ---- | ---- | ---- | ---- | ---- |
577 | Traffic sent | 66.9KB | 265KB | 605.1KB | 1MB |
578 | ---- | ---- | ---- | ---- | ---- |
579 | Traffic sent per node | 6.7KB | 13.2KB | 20.2KB | 25.3KB |
580 +-----------------------+----------+----------+----------+----------+
582 Table 9: Double-tree topology (1)
584 +-----------------------+----------+----------+----------+----------+
585 | Number of nodes | 50 nodes | 60 nodes | 70 nodes | 80 nodes |
586 +-----------------------+----------+----------+----------+----------+
587 | Convergence time | 1.96s | 1.98s | 2.06s | 2.09s |
588 | ---- | ---- | ---- | ---- | ---- |
589 | Traffic sent | 1.5MB | 2MB | 2.8MB | 3.5MB |
590 | ---- | ---- | ---- | ---- | ---- |
591 | Traffic sent per node | 30.8KB | 33.2KB | 39.7KB | 44.7KB |
592 +-----------------------+----------+----------+----------+----------+
594 Table 10: Double-tree topology (2)
596 Results are quite similar to those of the simple tree topology. The
597 convergence delay is very satisfying, while the overall amount of
598 traffic is quite low compared to other topologies.
600 5. Conclusion
602 DNCP does converge in small networks as well as larger ones. It
603 converges fast, at the cost of a rather high overall amount of
604 transmitted data. It behaves particularly well in tree topologies as
605 well as
605 double tree topologies, which are the most realistic tested
606 topologies.
608 The first observed concern appears in the single link topology, where
609 the amount of traffic grows dramatically with the number of nodes.
610 The problem is known and DNCP actually proposes a solution to that
611 problem. But this solution is optional and was not part of the
612 tested implementation. Further attention may be necessary on these
613 particular mechanisms in order to make sure that DNCP behaves well in
614 such situations (if such topology is in scope of DNCP at all).
616 The overall amount of traffic was also significant in full mesh
617 topologies as well as string topologies. A possible approach to
618 improve these results might be to rate-limit the number of updates
619 that may be sent in short periods of time. Such an approach would
620 provide fast convergence after small changes and would reduce the
621 overall amount of traffic in reaction to dramatic changes.
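The rate-limiting suggested above could, for instance, take the form
of a token bucket applied to outgoing updates. The sketch below is
purely hypothetical (class name, rate, and burst values are arbitrary
illustrations, not part of DNCP or any implementation discussed here):

```python
class UpdateRateLimiter:
    """Token bucket: allows short bursts of updates while capping the
    sustained update rate. Purely illustrative parameters."""

    def __init__(self, rate_per_s, burst):
        self.rate = rate_per_s   # tokens replenished per second
        self.burst = burst       # maximum bucket size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Replenish tokens for the elapsed time, capped at burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # update deferred until a token is available

limiter = UpdateRateLimiter(rate_per_s=2, burst=2)
print([limiter.allow(t) for t in (0.0, 0.0, 0.0, 1.0)])
# [True, True, False, True]: the third back-to-back update is deferred
```

Such a scheme would leave small, isolated changes unthrottled while
smoothing the burst of updates that follows a dramatic change.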
623 Once again, it is important to note that DNCP is expected to be
624 verbose in case of regular or dramatic changes. DNCP users should
625 make sure that the network eventually stabilizes, such that DNCP can
626 take full advantage of the Trickle algorithm.
628 Next iterations of this document might include further results such
629 as:
631 o Measures with libdncp2 instead of libdncp(v1).
633 o Measures with different profiles (rather than the HNCP profile
634 alone).
636 o Other metrics (e.g., convergence ratio).
638 o Other scenarios (e.g., no updates at all, sporadic updates).
640 6. Security Considerations
642 This document does not specify a protocol or a procedure. DNCP's
643 security architecture is described in Section 8 of
644 [I-D.ietf-homenet-dncp].
646 7. IANA Considerations
648 This document contains no action for IANA.
650 8. Informative References
652 [I-D.ietf-homenet-dncp]
653 Stenberg, M. and S. Barth, "Distributed Node Consensus
654 Protocol", draft-ietf-homenet-dncp-07 (work in progress),
655 July 2015.
657 [I-D.ietf-homenet-hncp]
658 Stenberg, M., Barth, S., and P. Pfister, "Home Networking
659 Control Protocol", draft-ietf-homenet-hncp-07 (work in
660 progress), July 2015.
662 [I-D.ietf-homenet-prefix-assignment]
663 Pfister, P., Paterson, B., and J. Arkko, "Distributed
664 Prefix Assignment Algorithm",
665 draft-ietf-homenet-prefix-assignment-07 (work in
666 progress), June 2015.
668 [RFC6206] Levis, P., Clausen, T., Hui, J., Gnawali, O., and J. Ko,
669 "The Trickle Algorithm", RFC 6206, March 2011.
671 [1]
673 [2]
675 Appendix A. Acknowledgments
677 The authors would like to thank Markus Stenberg for his help with
678 hnetd and the remarkable quality of his code, as well as Juliusz
679 Chroboczek for the new (yet untested) DNCP implementation.
681 Authors' Addresses
683 Kaiwen Jin
684 Ecole Polytechnique / Cisco Systems
685 Palaiseau,
686 France
688 Email: kaiwen.jin@master.polytechnique.org
690 Pierre Pfister
691 Cisco Systems
692 Paris,
693 France
695 Email: pierre.pfister@darou.fr
697 Jiazi Yi
698 LIX, Ecole Polytechnique
699 Route de Saclay
700 Palaiseau 91128,
701 France
703 Phone: +33 1 77 57 80 85
704 Email: jiazi@jiaziyi.com
705 URI: http://www.jiaziyi.com/