Internet Engineering Task Force                                 M. Goyal
Internet-Draft                         University of Wisconsin Milwaukee
Intended status: Informational                               J. Martocci
Expires: September 3, 2010                                     Y. Bashir
                                                               T. Humpal
                                                        Johnson Controls
                                                             E. Baccelli
                                                                   INRIA
                                                           March 2, 2010

       A Performance Analysis of P2P Routing along a DAG in LLNs
                  draft-goyal-roll-p2p-performance-00

Abstract

   In this draft, we analyze the point-to-point (P2P) routing
   performance of RPL by comparing the shortest P2P route costs in a
   sample network topology with the corresponding costs when routing
   along a tree built to minimize the routing costs to the tree's root.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on September 3, 2010.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the BSD License.
Table of Contents

   1.  Introduction
   2.  The Network Topology
   3.  Comparing P2P Route Costs: Shortest Path Routing versus Routing
       Along the Tree/DAG
   4.  Comparing P2P Route Costs When Routing Along the Tree/DAG:
       Turning Around at First Common Ancestor Versus Turning Around
       at Root
   5.  Traffic Load on the Links: Shortest Path Routing Versus Routing
       Along the Tree/DAG
   6.  Conclusion
   7.  Security Considerations
   8.  Informative References
   Authors' Addresses

1.  Introduction

   In this draft, we analyze the point-to-point (P2P) routing
   performance of RPL [I-D.ietf-roll-rpl] by comparing the shortest P2P
   route costs in a sample network topology with the corresponding
   costs when routing along a tree built to minimize the routing costs
   to the tree's root.  Such a tree represents an RPL directed acyclic
   graph (DAG), which, in general, allows a node to have multiple
   parents as opposed to the single parent allowed in a tree.

   Point-to-point routing along a DAG, as described in the RPL
   specification, works in the following manner.  Two nodes in each
   other's radio range can send packets to each other directly.
   However, if two nodes are not in each other's radio range, they can
   send packets to each other only along the DAG links: a packet
   travels up the DAG until it reaches a node that knows of a
   downwards route to the destination and then travels down the DAG
   towards its destination.
   A node that belongs to a DAG must maintain one or more parents in
   the DAG that serve as the default next hops for packet forwarding.
   In addition, a node may optionally maintain downwards routing
   information for its descendants in the DAG.  The DAG root may need
   to maintain downwards routing information for all the nodes in the
   DAG.  In the best case point-to-point scenario in a DAG, the packet
   travels up the DAG only until it encounters the first common
   ancestor of the source and the destination.  In the worst case, the
   packet may need to travel all the way to the DAG's root before it
   starts its downward journey towards the destination.  The routing
   cost along a DAG for a source-destination pair refers to the sum of
   the link costs along the route between the source and the
   destination along the DAG (up the DAG and then down).  While
   routing along a DAG is constrained to the links in the DAG, the
   shortest route from a source to a destination is calculated without
   any restrictions.

2.  The Network Topology

   The performance analysis presented in this draft was done on a
   network of 1001 nodes distributed in a 632m x 632m region.  This
   number represents the expected upper limit on the number of nodes
   per DAG in future real deployments.  The node locations were
   determined one by one in the following manner.  The x and y
   coordinates of a new location were chosen uniformly at random in
   the range [0m, 632m] under the constraint that the minimum distance
   between the new location and the existing locations be no less than
   10m and no larger than 30m.  This was done to ensure that a new
   location is always in the radio range of at least one existing
   location.  The radio range for each node in these simulations was a
   circle with radius 31.45m.  Thus, we ensured that there were no
   partitions in the network topology.
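   The placement procedure above can be sketched as follows.  This is
   a hypothetical re-implementation, not the draft's simulator: the
   function name, the rejection-sampling loop, and the small node
   count used for illustration are our own assumptions.

```python
import math
import random

def generate_topology(n_nodes, side=632.0, d_min=10.0, d_max=30.0,
                      radio_range=31.45, seed=1):
    """Place nodes one by one; accept a candidate location only if its
    nearest existing node lies between d_min and d_max away."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(0, side), rng.uniform(0, side))]
    while len(nodes) < n_nodes:
        x, y = rng.uniform(0, side), rng.uniform(0, side)
        nearest = min(math.hypot(x - px, y - py) for px, py in nodes)
        if d_min <= nearest <= d_max:
            nodes.append((x, y))
    # Radio links: every pair of nodes within the radio range.
    links = [(i, j)
             for i in range(n_nodes) for j in range(i + 1, n_nodes)
             if math.hypot(nodes[i][0] - nodes[j][0],
                           nodes[i][1] - nodes[j][1]) <= radio_range]
    return nodes, links
```

   Since the maximum allowed nearest-neighbor distance (30m) is below
   the radio range (31.45m), every accepted node has at least one
   radio neighbor, which is how the construction avoids partitions.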
   Figure 1 illustrates various aspects of the network topology.
   Figure 1a gives a visual representation of the network topology and
   Figure 1b shows the connectivity in the topology in terms of the
   number of nodes with a given number of neighbors in their radio
   range.  Figure 1c shows the number of source-destination pairs with
   a given minimum hop distance between them.  The performance
   analysis presented in this draft is based on both unit link costs
   and link costs distributed between 1 and 16 in the following
   manner.  As per the RPL specification, the link cost values 1, 4
   and 16 signify a perfect, a normal and an almost unusable link
   respectively.  The link cost between two neighbor nodes in the
   network topology was determined as 2^x rounded to the closest
   integer, where x is a real number uniformly distributed over the
   range [0,4].  The same cost value was used for both directions of
   the link.  Figure 1d shows the distribution of link costs
   calculated in this manner.  Figures 1e and 1f show the shortest
   path tree (representing a DAG), minimizing each node's cost to
   reach the tree's root, based on the unit link costs and on the link
   costs calculated in the manner described above, respectively.

3.  Comparing P2P Route Costs: Shortest Path Routing versus Routing
    Along the Tree/DAG

   Figures 2 and 3 compare the costs of shortest routes with the costs
   when routing along a DAG.  In these figures, the DAG is represented
   by a shortest path tree, referred to as the common tree in the
   figures, minimizing each node's cost to reach the tree's root.
   Figure 2 shows the results when using unit link costs (i.e., the
   shortest routes are the minimum hop routes) and Figure 3 shows the
   results when link costs are distributed between 1 and 16 in the
   manner described earlier.

   Consider Figure 2a.
   The figure shows the cost comparison (when using unit link costs)
   for the source-destination pairs whose shortest routes are 2 to 5
   hops long.  Examining the cost comparison for these source-
   destination pairs is important because, in many LLN applications,
   point-to-point communication typically takes place between nodes
   that are located close to each other although not necessarily in
   each other's radio range.  Since the network topology used for the
   performance comparison consists of 1001 nodes, there are a total of
   1001000 different source-destination pairs.  Out of these, 160788
   source-destination pairs were observed to be 2 to 5 hops apart
   (19538 pairs 2 hops apart; 32634 pairs 3 hops apart; 47334 pairs 4
   hops apart; and 61282 pairs 5 hops apart).  Figure 2a shows the
   corresponding number of hops (in green) when routing between the
   source and destination takes place along the tree/DAG.  The figure
   also shows (in blue) the average number of hops traversed when
   routing along the tree/DAG.  For the nodes that are only 2, 3, 4
   and 5 hops away from each other, the average number of hops when
   routing along the tree/DAG becomes 7.41, 8.95, 10.11 and 11.1
   respectively.  The corresponding worst case hop distance when
   routing along the tree/DAG was observed to be 34, 33, 32 and 32
   respectively.  As demonstrated later, this significant increase in
   the hop count when routing along the tree/DAG translates into a
   significant increase in the traffic load on the links, which
   results in a serious deterioration in the packet loss rate and
   latency for LLN applications.  Figures 2b and 2c show the
   corresponding comparison for source-destination pairs that are 6 to
   10 and more hops away from each other, respectively.  These figures
   demonstrate that routing along the DAG results in packets traveling
   many more hops than they would when using shortest routes.
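   The gap between the two routing modes is easy to reproduce on any
   graph.  The sketch below is our own illustration (not the draft's
   simulator): it builds a minimum-hop tree rooted at one node and
   compares tree-based routes, which turn around at the first common
   ancestor, against plain BFS shortest paths.

```python
from collections import deque

def bfs_tree(adj, root):
    """Minimum-hop shortest-path tree: parent and depth per node."""
    parent, depth = {root: None}, {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v], depth[v] = u, depth[u] + 1
                q.append(v)
    return parent, depth

def tree_route_hops(parent, depth, s, d):
    """Hops along the tree: up to the first common ancestor, then down."""
    ancestors = set()
    u = s
    while u is not None:
        ancestors.add(u)
        u = parent[u]
    a = d
    while a not in ancestors:
        a = parent[a]
    return (depth[s] - depth[a]) + (depth[d] - depth[a])

def shortest_hops(adj, s, d):
    """Unconstrained minimum-hop distance (BFS from the source)."""
    _, depth = bfs_tree(adj, s)
    return depth[d]
```

   On a six-node ring rooted at node 0, for instance, nodes 2 and 4
   are 2 hops apart by shortest path but 4 hops apart along the tree,
   because the tree route has to climb to the root.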
   Figure 3 shows the cost comparison between shortest routes and
   routes along a DAG when link costs are distributed between 1 and
   16.  Figure 3a shows the comparison for source-destination pairs
   that have shortest route costs less than 10.  Figures 3b and 3c
   show the comparison for source-destination pairs that have shortest
   route costs from 10 to 30 and higher, respectively.  These results
   are similar to the results obtained with unit link costs and
   confirm that routing along the DAG may result in much higher route
   costs than the shortest (or minimum cost) routes.

4.  Comparing P2P Route Costs When Routing Along the Tree/DAG: Turning
    Around at First Common Ancestor Versus Turning Around at Root

   RPL allows a destination to advertise itself to its ancestors in
   the DAG via the destination advertisement option (DAO) mechanism.
   However, a node that receives routing information about a
   descendant in the DAG may choose not to store this information
   because of memory constraints.  In that case, the node simply adds
   itself to the reverse route stack stored in the DAO packet and
   forwards the packet to a parent.  Thus, the P2P route along the DAG
   from a source to a destination may not always turn around at the
   first common ancestor of the source and the destination.  In the
   worst case, a packet may have to travel all the way to the DAG's
   root before moving downwards towards its destination.  In Figures 2
   and 3, discussed in the previous section, the costs associated with
   DAG-based routing were calculated assuming that the turnaround
   takes place at the first common ancestor of the source and the
   destination.  In this section, we compare the costs associated with
   DAG-based P2P routing when the turnaround takes place at the first
   common ancestor of the source and the destination versus when the
   packet travels all the way to the DAG's root before turning around.
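   Under unit link costs, the two turnaround policies differ only in
   the depth of the node where the packet turns around, so the
   comparison reduces to a small formula.  This is our own restatement
   (with the root at depth 0), not code from the draft:

```python
def hops_via_ancestor(depth_s, depth_d, depth_anc):
    """Hops when turning around at a common ancestor at depth_anc:
    up from the source, then down to the destination."""
    return (depth_s - depth_anc) + (depth_d - depth_anc)

def hops_via_root(depth_s, depth_d):
    """Worst case: the packet climbs all the way to the root (depth 0)."""
    return hops_via_ancestor(depth_s, depth_d, 0)
```

   Two nodes at depth 5 whose first common ancestor is at depth 4 are
   only 2 hops apart along the DAG, but 10 hops apart via the root;
   when the first common ancestor is the root itself, the two policies
   coincide.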
   Figure 4 shows the cost comparison between the two cases when all
   links have unit costs.  Figure 4a shows the comparison for the
   source-destination pairs that are 2 to 5 hops away when routing
   along the DAG and turning around at the first common ancestor.
   Figures 4b and 4c show the comparison for the source-destination
   pairs that are 6 to 10 and more hops away when routing along the
   DAG and turning around at the first common ancestor.  For a given
   number of hops travelled (shown in red) along the DAG when turning
   around at the first common ancestor of a source and a destination,
   these figures show the corresponding number of hops travelled
   (shown in green) when the packets have to go all the way to the
   root before turning around.  These figures also show (in blue) the
   average hop count when the turnaround takes place at the root for
   the source-destination pairs with a particular hop count when the
   turnaround takes place at the first common ancestor.  As Figure 4a
   shows, for the source-destination pairs that have a small hop count
   when the turnaround takes place at the first common ancestor, the
   hop count when the turnaround takes place at the root can be much
   higher.  However, as the hop count with turnaround at the first
   common ancestor increases (Figures 4b and 4c), the difference from
   the hop count with turnaround at the root decreases.  As Figure 4c
   shows, when a source and a destination are far apart, in most cases
   the root itself is the first common ancestor of the source and the
   destination and hence the hop count is the same in both cases.

   Figure 5 shows the cost comparison when link costs are distributed
   between 1 and 16.  These results follow the same trend as the
   results with unit link costs.
   The routing cost when the turnaround takes place at the root can be
   significantly higher if the cost is small when the turnaround takes
   place at the first common ancestor (Figure 5a).  As the routing
   cost with turnaround at the first common ancestor increases (Figure
   5b), the difference from turnaround at the root decreases.  When
   the routing cost with turnaround at the first common ancestor is
   itself significant (Figure 5c), generally the root itself is the
   first common ancestor and hence the routing cost is the same in
   both cases.

   In Section 3, we demonstrated that the DAG-based P2P routing costs
   (with turnaround at the first common ancestor) can be significantly
   higher than the minimum route costs, especially when the source and
   the destination are close to each other (but not in radio range).
   The trends described in Figures 4 and 5 indicate that the DAG-based
   P2P routing costs deteriorate further (especially for nearby
   endpoints) when the turnaround takes place at the root rather than
   at the first common ancestor.  Thus, if an LLN application employs
   many P2P flows and most P2P flows are between nearby endpoints, it
   may be beneficial to require most or all DAG nodes to store
   downwards routing information about their descendants.

5.  Traffic Load on the Links: Shortest Path Routing Versus Routing
    Along the Tree/DAG

   In this section, we demonstrate how the increase in hop count and
   route cost with DAG-based routing (with turnaround at the first
   common ancestor) translates into an increase in the traffic loads
   on the links.  For this purpose, we chose two sets of P2P flows and
   calculated the link-level traffic loads on the network while
   carrying these flows.  The first set consisted of 1000 flows
   selected in the following manner: each node in the network (except
   the root) randomly selects a node in its 2 to 5 hop neighborhood
   (i.e., the nodes that can be reached in 2 to 5 hops with shortest
   path routing) and sends 1 packet every second to this node.  The
   second set consisted of 10000 flows, with each node selecting 10
   nodes in its 2 to 5 hop neighborhood and sending each one of them 1
   packet per second.  Then, we calculated the traffic load on each
   link when these flows are routed along the common tree/DAG (with
   turnaround at the first common ancestor) and when shortest path
   routing is used.  The traffic load on a link is calculated simply
   as the sum of the traffic of all the flows that pass through the
   link.

   Figures 6a and 6b show the link level traffic loads in the network
   with 1000 flows under unit link costs and under costs distributed
   between 1 and 16, respectively.  Figures 6c and 6d show the
   corresponding results for 10000 flows.  There are a total of 9044
   links in the topology; the figures do not show the links that were
   not used at all.  The values shown in these figures were first
   sorted in increasing order of the traffic loads under DAG-based
   routing and then under shortest path routing.  The traffic loads
   are displayed on a log scale to accommodate the vast range of
   traffic loads.  These figures clearly indicate that DAG-based
   routing results in large overloads on a fraction of links that end
   up being a part of a large number of flows.  Notice that the DAG-
   based routing results shown in these figures are based on the
   turnaround taking place at the first common ancestor of the source
   and the destination.  The traffic overloads can be expected to be
   much worse on the links closer to the DAG root if all the packets
   were to travel to the DAG root before turning around.

   Figure 7 shows the topological view of the traffic loads on the
   links for 10000 flows under shortest path routing and under DAG-
   based routing with turnaround at the first common ancestor of the
   source and the destination.
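   The per-link load computation described earlier in this section
   amounts to summing flow rates over the links of each route.  A
   minimal sketch (the function and flow representation are our own,
   not from the draft):

```python
from collections import Counter

def link_loads(flows):
    """flows: list of (route, rate) pairs, where a route is a node
    sequence and rate is in packets/sec.  Links are undirected, so
    each hop is normalized to a sorted node pair before summing."""
    load = Counter()
    for route, rate in flows:
        for u, v in zip(route, route[1:]):
            load[tuple(sorted((u, v)))] += rate
    return load
```

   Two 1 packet/sec flows sharing the hop between nodes 1 and 2, say
   routes 0-1-2 and 3-1-2, put a 2 packets/sec load on link (1, 2);
   it is this accumulation on shared tree links near turnaround points
   that produces the overloads visible in Figures 6 and 7.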
   The links are color-coded in the following manner.  All links with
   a traffic load of more than 100 packets/sec have been colored red.
   Links with a traffic load between 50 and 100 packets/sec have been
   colored orange.  Links with a traffic load between 20 and 50
   packets/sec have been colored green, while links with a traffic
   load of less than 20 packets/sec (but more than 0) have been
   colored blue.  Links that are not used at all are not shown.  A
   traffic load of 100 packets/sec or more can be considered excessive
   for links using popular IEEE 802.15.4 MAC/PHY protocols.  The
   maximum data rate possible on an IEEE 802.15.4 link is 250 Kbps,
   which translates to a maximum capacity of 208 packets/sec when the
   PHY-level packet size is 133 bytes (the maximum possible value).
   In practice, because of hardware-related issues and CSMA-based
   competition among nodes for packet transmission, the achievable
   packet rates are much smaller.  A traffic load of 100 packets/sec
   on a link is likely to result in excessive packet loss and large
   delays for packets that do manage to get transmitted successfully.
   Even a traffic load between 50 and 100 packets/sec can be
   considered large, and such links are likely to suffer heavy packet
   loss and latency.

   As Figures 7a and 7c show, shortest path routing was able to
   support 10000 flows in the sample network topology with only very
   few links exceeding a traffic load of 50 packets/sec.  On the other
   hand, DAG-based routing, even when the turnaround takes place at
   the first common ancestor, results in a significant number of
   (orange/red) links having a traffic load of more than 50
   packets/sec (Figures 7b and 7d).  Such heavily loaded links are
   likely to suffer heavy packet losses and latency.  Clearly, DAG-
   based routing is not able to support as many P2P flows in a network
   as shortest path routing.

6.  Conclusion

   In this draft, we compared the P2P routing cost of DAG-based
   routing with optimal shortest path routing for a sample 1001 node
   topology.  The results clearly demonstrate the inadequacy of DAG-
   based P2P routing.  The difference in cost between DAG-based
   routing and shortest path routing is particularly stark for the
   source-destination pairs that are close to each other.  This
   difference is likely to worsen as the number of nodes that
   constitute a DAG increases further.  Clearly, there is a need to
   develop additional P2P routing mechanisms, either within RPL or as
   separate protocols, that can provide routes closer to the optimal.
   A scenario where DAG-based routing is the only P2P routing option
   may not be acceptable to LLN applications that rely heavily on P2P
   flows.

7.  Security Considerations

   TBD

8.  Informative References

   [I-D.ietf-roll-rpl]
              Winter, T., Ed., Thubert, P., Ed., and the ROLL Design
              Team, "RPL: IPv6 Routing Protocol for Low power and
              Lossy Networks", draft-ietf-roll-rpl-06 (work in
              progress), February 2010.

Authors' Addresses

   Mukul Goyal
   University of Wisconsin Milwaukee
   3200 N Cramer St
   Milwaukee, WI 53201
   USA

   Phone: +1 414 229 5001
   Email: mukul@uwm.edu

   Jerald Martocci
   Johnson Controls
   Milwaukee, WI 53202
   USA

   Email: Jerald.P.Martocci@jci.com

   Yusuf Bashir
   Johnson Controls
   Milwaukee, WI 53202
   USA

   Email: Yusuf.Bashir@jci.com

   Ted Humpal
   Johnson Controls
   Milwaukee, WI 53202
   USA

   Email: Ted.Humpal@jci.com

   Emmanuel Baccelli
   INRIA
   Paris
   France

   Email: Emmanuel.Baccelli@inria.fr