Networking Working Group                                        P. Levis
Internet-Draft                                       Stanford University
Intended status: Informational                               A. Tavakoli
Expires: October 17, 2009                             S. Dawson-Haggerty
                                                             UC Berkeley
                                                            Apr 15, 2009

 Overview of Existing Routing Protocols for Low Power and Lossy Networks
                  draft-ietf-roll-protocols-survey-07

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on October 17, 2009.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your rights
   and restrictions with respect to this document.

Abstract

   Low-power wireless devices, such as sensors, actuators, and smart
   objects, present difficult constraints: very limited memory, little
   processing power, and long sleep periods.  As most of these devices
   are battery-powered, energy efficiency is critically important.
   Wireless link qualities can vary significantly over time, requiring
   protocols to make agile decisions while minimizing the energy cost
   of topology changes.  Routing over such low power and lossy networks
   introduces requirements that existing routing protocols may not
   fully address.  Using existing application requirements documents,
   this document derives a minimal, though not exhaustive, set of
   criteria for routing in low-power and lossy networks.  It provides a
   brief survey of the strengths and weaknesses of existing protocols
   with respect to these criteria.  From this survey it examines
   whether existing, mature IETF protocols can be used without
   modification in these networks, or whether further work is
   necessary.  It concludes that no existing IETF protocol meets the
   requirements of this domain.

Table of Contents

   1.   Terminology
   2.   Introduction
     2.1.  LLN Properties
     2.2.  Question to Answer
   3.   Methodology
     3.1.  Protocols Considered
     3.2.  Criteria
     3.3.  Evaluation
   4.   Criteria
     4.1.  Formal Definitions
     4.2.  Routing State
     4.3.  Loss Response
     4.4.  Control Cost
     4.5.  Link and Node Cost
   5.   Routing Protocol Taxonomy
     5.1.  Protocols Today
   6.   Link State Protocols
     6.1.  OSPF & IS-IS
     6.2.  OLSR & OLSRv2
     6.3.  TBRPF
   7.   Distance Vector protocols
     7.1.  RIP
     7.2.  Ad-hoc On Demand Vector Routing (AODV)
     7.3.  DYMO
     7.4.  DSR
   8.   Neighbor Discovery
     8.1.  IPv6 Neighbor Discovery
     8.2.  MANET-NHDP
   9.   Conclusion
   10.  Security Considerations
   11.  IANA Considerations
   12.  Acknowledgements
   13.  References
     13.1.  Normative References
     13.2.  Informative References
   Appendix A.  Routing protocol scalability analysis
   Appendix B.  Logarithmic scaling of control cost
   Authors' Addresses

1.  Terminology

   AODV:  Ad-hoc On Demand Vector Routing

   DSR:  Dynamic Source Routing

   DYMO:  Dynamic MANET On-demand

   IS-IS:  Intermediate System to Intermediate System

   OLSR:  Optimized Link State Routing

   OSPF:  Open Shortest Path First

   RIP:  Routing Information Protocol

   TBRPF:  Topology Dissemination Based on Reverse Path Forwarding

   LLN:  Low power and Lossy Network

   LSA:  Link State Advertisement

   LSDB:  Link State Database

   MANET:  Mobile Ad-hoc Network

   MAC:  Medium Access Control

   MPLS:  Multiprotocol Label Switching

   MPR:  Multipoint Relay

   MTU:  Maximum Transmission Unit

   ROLL:  Routing in Low power and Lossy Networks

   TDMA:  Time Division Multiple Access

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   RFC 2119 [RFC2119].

2.  Introduction

   Wireless is increasingly important to computer networking.  As the
   technological progress behind Moore's Law has reduced computer
   prices and form factors, networking has come to include not only
   servers and desktops, but also laptops, palmtops, and cellphones.
   As computing device costs and sizes have shrunk, small wireless
   sensors, actuators, and smart objects have emerged as an important
   next step.  The sheer number of such low-power networked devices
   means that they cannot depend on human intervention (e.g., adjusting
   position) for good connectivity: they must have routing protocols
   that enable them to self-organize into multihop networks.

   Energy is a fundamental challenge in these devices.  Convenience and
   ease of use require that they be wireless and therefore battery
   powered.  Low power operation is a key concern for these sensors and
   actuators, allowing them to function for months or years without
   interruption.  Cost points and energy limitations cause these
   devices to have very limited computational and storage resources: a
   few kB of RAM and a few MHz of CPU are typical.  As energy
   efficiency does not improve with Moore's Law, these limitations are
   not temporary.  This trend towards smaller, lower-power, and more
   numerous devices has led to new low-power wireless link layers to
   support them.

   In practice, wireless networks observe much higher loss rates than
   wired ones do, and low-power wireless is no exception.  Furthermore,
   many of these networks will include powered as well as energy-
   constrained nodes.  Nevertheless, for cost and scaling reasons, many
   of these powered devices will still have limited resources.

   These low power and lossy networks introduce constraints and
   requirements that other networks typically do not possess.  For
   instance, in addition to the constraints of limited resources and
   small power sources, which limit the amount of traffic a protocol
   may generate, these applications demand support for heterogeneous
   node capabilities and for specific traffic patterns
   ([I-D.ietf-roll-home-routing-reqs] and
   [I-D.ietf-roll-indus-routing-reqs]).

2.1.  LLN Properties

   In short, LLN routing protocols must operate under a set of
   constraints that traditional protocols typically do not consider.

   o  They need to operate with a hard, very small bound on state.

   o  In most cases, they optimize for preserving energy.

   o  Typical application data patterns are not simply unicast flows.

   o  They need to be effective with very small packet sizes, such as
      less than 127 octets.

   o  They have to be very careful when trading off in-node efficiency
      for generality; most LLN nodes do not have computational or
      memory resources to waste.

2.2.  Question to Answer

   As existing protocols were not designed with all of these
   constraints in mind, they have made trade-offs which may or may not
   be appropriate for LLNs.  The first step toward reaching consensus
   on a routing protocol for LLNs is to decide which of these two cases
   holds.  If an existing protocol can meet LLN requirements without
   any changes, then barring extenuating circumstances, it behooves us
   to use that existing protocol.  However, if no current protocol can
   meet LLN requirements, then further work will be needed to define
   and standardize a protocol that can.  Whether such a protocol would
   extend an existing protocol or be developed anew is outside the
   scope of this document: this document simply seeks to answer the
   question of whether LLNs require a new protocol specification
   document at all.

3.  Methodology

   To answer the question of whether LLNs require new protocol
   specification work, this document examines existing routing
   protocols and how well they can be applied to low power and lossy
   networks.  It provides a set of criteria with which to compare the
   costs and benefits of different protocol designs and examines
   existing protocols in terms of these criteria.

3.1.  Protocols Considered

   This document does not seek to answer the question of whether there
   is any protocol outside the IETF which could meet LLN application
   requirements.  Rather, it seeks to answer whether any existing
   protocols developed and specified by the IETF can meet such
   requirements.  If an existing protocol specification can be used
   unchanged, then writing additional protocol specifications is
   unnecessary.  There are many academic papers and experimental
   protocol implementations available.  While one or more of these may
   meet LLN requirements, if they are not specified in an RFC then a
   working group will need to specify them in a new RFC before they can
   become standards.  The question this document seeks to answer is not
   whether proposed, evaluated, theoretical, or hypothetical protocol
   designs can satisfy LLN requirements: the question is whether
   existing IETF protocols can.

   Therefore, this document considers "existing routing protocols" to
   be protocols that are specified in RFCs or, in the cases of DYMO
   [I-D.ietf-manet-dymo] and OLSRv2 [I-D.ietf-manet-olsrv2], very
   stable and mature drafts that are charter items of an active IETF
   working group.  The list of considered protocols is OSPF [RFC2328],
   IS-IS [RFC1142], RIP [RFC2453], OLSR [RFC3626], OLSRv2
   [I-D.ietf-manet-olsrv2], TBRPF [RFC3684], AODV [RFC3561], DYMO
   [I-D.ietf-manet-dymo], and DSR [RFC4728].  This document also
   considers notable variants of these protocols, such as Triggered
   RIP [RFC2091].

   Some of these protocols are still works in progress and so are
   changing over time.  To keep this document correct yet not dependent
   on this evolution, it considers specifications as of a specific
   date: November 30, 2008.
   This document does not consider DTN bundles [RFC5050] or the DTN
   Licklider protocol [RFC5326], suggested by the ROLL working group
   charter, because they are not routing protocols.  This document does
   not consider the DTN routing protocol PRoPHET
   [I-D.irtf-dtnrg-prophet] because its design is based on the non-
   randomness of node mobility, which is not common to LLN application
   domains.

3.2.  Criteria

   The five criteria this document uses are derived from a set of
   documents that describe the requirements of a few major LLN
   application scenarios.  The five criteria, presented in Section 4,
   are neither exhaustive nor complete.  Instead, they are one specific
   subset of the high-level requirements shared across all of the
   application requirements drafts.  Because every application
   requirements draft specifies these criteria, a protocol which does
   not meet one of them cannot be used without modifications or
   extensions.  However, because these criteria represent a subset of
   the intersection of the application requirements, any given
   application domain may impose additional requirements which a
   particular protocol may not meet.  For this reason, these criteria
   are "necessary but not sufficient."  A protocol that does not meet
   the criteria cannot be used as specified, but a protocol that meets
   the criteria may still fail to meet the requirements of a particular
   application domain.  Nevertheless, a protocol that meets all of the
   criteria would be very promising, and would deserve a closer look in
   light of LLN application domains.

3.3.  Evaluation

   This document evaluates the above protocols by reasoning through
   each specification and considering a hypothetical implementation
   that performs as well as possible on the criteria.  The evaluation
   is based on what a specification allows, rather than on any
   particular implementation of that specification.  For example, while
   many DYMO implementations use hop count as a routing metric, the
   DYMO specification allows a hop to add more than one to the routing
   metric, so DYMO as a specification can support some links or nodes
   being more costly than others.  The analysis does not consider
   hypothetical extensions to protocols that would require additional
   fields or message exchanges.

4.  Criteria

   This section presents five important criteria for routing in low
   power and lossy networks.  Later sections evaluate protocols
   against them.  The evaluation attempts to take a complicated and
   interrelated set of design decisions and trade-offs and condense
   them to a simple "pass", "fail", or "?".  As with any
   simplification, there is a risk of removing some necessary nuance.
   However, we believe that being forced to take a position on whether
   or not these protocols are acceptable, according to binary criteria,
   is constructive.

   We derive these criteria from existing documents that describe ROLL
   network application requirements [I-D.ietf-roll-home-routing-reqs]
   [I-D.ietf-roll-urban-routing-reqs]
   [I-D.ietf-roll-indus-routing-reqs].  These criteria do not encompass
   all application requirements.  Instead, they are a common set of
   routing protocol requirements that the application domains in these
   documents share.  Considering this very general and common set of
   requirements sets a minimal bar for a protocol to be applicable to
   an LLN deployment.

   If a protocol cannot meet these minimal criteria, then it cannot be
   used unchanged in several major ROLL application domains and so is
   unlikely to be a good candidate for use within the broader scope of
   all LLN application domains.  Satisfying these minimal criteria is
   necessary but not sufficient.
   They do not represent the complete intersection of application
   requirements, and individual applications introduce additional,
   more stringent requirements.  But this simplified view provides a
   first cut at the applicability of existing protocols, and those
   that do satisfy them might be reasonable candidates for further
   study.

   The five criteria are "routing state", "loss response", "control
   cost", "link cost", and "node cost".  For each of these, the value
   "pass" indicates that a given protocol has satisfactory performance
   according to the criterion.  The value "fail" indicates that the
   protocol does not have acceptable performance according to the
   criterion, and that the document defining the protocol does not, as
   written, contain sufficient flexibility to allow the protocol to
   meet the criterion while conforming to the specification.  Finally,
   "?" indicates that an implementation could exhibit satisfactory
   performance while following the protocol specification, but that
   the implementation decisions necessary to do so are not specified
   and may require some exploration.  In other words, a "fail" means a
   protocol would have to be modified in ways that break compliance
   with its specification in order to meet the criterion, while a "?"
   means a protocol would require a supplementary document further
   constraining and specifying how it should behave.

   A protocol failing to meet one or more of the criteria does not
   exclude it from being used in LLNs.  Rather, it means that, in order
   to be used in LLNs, the protocol would need to be extended in ways
   that do not conform with the current specification document.

4.1.  Formal Definitions

   To provide precise definitions of these criteria, we use formal
   big-O notation, where N refers to the number of nodes in the
   network, D refers to the number of unique destinations, and L refers
   to the size of a node's local, single-hop neighborhood (the network
   density).  We explain the derivation of each criterion from
   application requirements in its corresponding section.

4.2.  Routing State

   This criterion indicates whether routing state scales reasonably
   within the memory resources of low-power nodes.  According to this
   criterion, routing state that scales linearly with the size of the
   network or of a node's neighborhood fails.  Scaling with the size of
   the network prevents networks from growing to the sizes necessary
   for many LLN applications, given the memory constraints that devices
   in such applications exhibit.  Similarly, scaling with the network
   density precludes dense deployments.

   However, as many low-power and lossy networks behave principally as
   data collection networks and communicate primarily through routers
   to data collection points in the larger Internet, scaling with the
   number of such collection points is reasonable.  Protocols whose
   state scales with the number of destinations pass.

   More precisely, routing state that scales as O(N) or O(L) fails.
   State that scales as O(D) (with no N or L terms) passes.

4.3.  Loss Response

   In low power and lossy networks, links routinely come and go because
   they operate close to the signal-to-noise threshold at the physical
   layer.  It is important that link churn not trigger unnecessary
   responses by the routing protocol.  This point is stressed in all
   the application requirements documents, which point to the need to
   localize the response to link failures without triggering global
   network re-optimization, whether to reduce traffic or to maintain
   low route convergence times ([I-D.ietf-roll-home-routing-reqs],
   [I-D.ietf-roll-urban-routing-reqs], and
   [I-D.ietf-roll-indus-routing-reqs]).  The industrial routing
   requirements draft states that protocols must be able to "recompute
   paths based on underlying link characteristics which may change
   dynamically", as well as reoptimize when the device set changes in
   order to maintain service requirements.  The protocol should also
   "always be in the process of optimizing the system in response to
   changing link statistics."  Protocols with these properties should
   take care not to require global updates.

   A protocol which requires many link changes to propagate across the
   entire network fails.  Protocols which constrain the scope of
   propagation to local neighborhoods, or to the cases where changes
   affect routes to active destinations, pass.  Protocols which allow
   proactive path maintenance pass if the choice of which paths to
   maintain is user-specified.

   More precisely, loss responses that require O(N) transmissions fail,
   while responses that can rely on O(1) local broadcasts or O(D) route
   updates pass.

4.4.  Control Cost

   Battery-operated devices are a critical component of all three
   application domains, and as such special emphasis is placed on
   minimizing power consumption to achieve long battery lifetimes
   [I-D.ietf-roll-home-routing-reqs], with multi-year deployments being
   a common case [I-D.ietf-roll-indus-routing-reqs].  In terms of
   routing structure, any proposed LLN routing protocol ought to
   support the autonomous organization and configuration of the
   network at the lowest possible energy cost
   [I-D.ietf-roll-urban-routing-reqs].
   All routing protocols must transmit additional data to detect
   neighbors, build routes, transmit routing tables, or otherwise
   conduct routing.  As low-power wireless networks can have very low
   data rates, protocols which require a minimum control packet rate
   can have an unbounded control overhead per data packet.  This is
   particularly true for event-driven networks, which only report data
   when certain conditions are met.  Regions of a network which never
   meet the condition can be forced to send significant control
   traffic even when there is no data to send.  For these use cases,
   hard-coded timing constants are unacceptable, because they imply
   prior knowledge of the expected data rate.

   Of course, protocols require the ability to send at least a very
   small amount of control traffic in order to discover a topology.
   But this bootstrapping discovery and maintenance traffic should be
   small: communicating once an hour is far more reasonable than
   communicating once a second.  So while control traffic should be
   bounded by data traffic, it requires some leeway to bootstrap and
   maintain a long-lived yet idle network.

   In the case of control traffic, the communication rate (the sum of
   transmissions and receptions at a node) is a better measure than
   the transmission rate, since energy is consumed for both
   transmissions and receptions.  Controlling the transmission rate
   alone is insufficient, as it would mean that the energy cost (sum
   of transmissions and receptions) of control traffic could grow as
   O(L).

   A protocol fails the control cost criterion if its per-node control
   traffic rate (transmissions plus receptions) is not bounded by the
   data rate plus a small constant.  For example, a protocol using a
   beacon rate only passes if the rate can be turned arbitrarily low,
   in order to match the data rate.  Furthermore, packet losses make
   it acceptable for control traffic to scale within an O(log(L))
   factor of the data rate.  That is, if R is the data rate and e is
   the small constant, then a protocol's control traffic must be on
   the order of O(R*log(L) + e) to pass this criterion.  The details
   of why the O(log(L)) factor is necessary are in Appendix B.

4.5.  Link and Node Cost

   These two criteria specify how a protocol chooses routes for data
   packets to take through the network.  Classical routing algorithms
   typically acknowledge the differing costs of paths and may use a
   shortest-path algorithm to find routes.  This is a requirement for
   low power networks, as links must be evaluated as part of an
   objective function across various metric types, such as minimizing
   latency and maximizing reliability
   [I-D.ietf-roll-indus-routing-reqs].

   However, in low power networks it is also desirable to account for
   the cost of forwarding through particular routers.  Applications
   require node- or parameter-constrained routing, which takes into
   account node properties and attributes, such as power, memory, and
   battery life, that dictate a router's willingness or ability to
   route other nodes' packets.  The home routing requirements note
   that devices will vary in their duty cycle, and that routing
   protocols should prefer nodes with permanent power
   [I-D.ietf-roll-home-routing-reqs].  The urban requirements note
   that routing protocols may wish to take advantage of differing data
   processing and management capabilities among network devices
   [I-D.ietf-roll-urban-routing-reqs].  Finally, the industrial
   requirements cite differing lifetime requirements as an important
   factor to account for [I-D.ietf-roll-indus-routing-reqs].  Node
   cost refers to the ability of a protocol to incorporate router
   properties into routing metrics and to use node attributes for
   constraint-based routing.
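   To make the link- and node-cost criteria concrete, the following
   sketch (with hypothetical metric names and weights that are not
   drawn from any cited specification) shows a path objective that
   combines a per-link metric such as ETX with a per-node penalty for
   battery-powered routers:

```python
# Sketch: a combined link/node routing objective (illustrative only).
# Link cost is an ETX-like per-link metric; node cost is a penalty for
# battery-powered routers, reflecting the "prefer mains-powered nodes"
# preference noted in the home routing requirements.

def path_cost(path, link_etx, battery_powered, node_penalty=2.0):
    """Sum link costs plus a penalty per battery-powered router.

    path: list of node ids; link_etx: dict (a, b) -> ETX;
    battery_powered: set of node ids.  Only intermediate routers
    incur the node penalty (the endpoints must be traversed anyway).
    """
    cost = 0.0
    for a, b in zip(path, path[1:]):
        cost += link_etx[(a, b)]
    for node in path[1:-1]:
        if node in battery_powered:
            cost += node_penalty
    return cost

etx = {("s", "r1"): 1.2, ("r1", "d"): 1.1,
       ("s", "r2"): 1.0, ("r2", "d"): 1.0}
battery = {"r2"}
# Mains-powered r1: 1.2 + 1.1 = 2.3; battery-powered r2: 2.0 + 2.0 = 4.0
assert path_cost(["s", "r1", "d"], etx, battery) < \
       path_cost(["s", "r2", "d"], etx, battery)
```

   Under these assumed weights, the path through the mains-powered
   router is preferred even though its links are slightly worse,
   which is the kind of trade-off a node-cost-aware protocol must be
   able to express.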
   A "pass" indicates that the protocol contains a mechanism that
   allows these considerations to be taken into account when choosing
   routes.

5.  Routing Protocol Taxonomy

   Routing protocols broadly fall into two classes: link-state and
   distance-vector.

   A router running a link-state protocol first establishes adjacency
   with its neighbors and then reliably floods the local topology
   information in the form of Link State Advertisement (LSA) packets.
   The collection of LSAs constitutes the Link State Database (LSDB)
   that represents the network topology, and routers synchronize their
   LSDBs.  Thus each node in the network has a complete view of the
   network topology.  Each router uses its LSDB to compute a routing
   table where each entry (a reachable IP destination address) points
   to the next hop along the shortest path according to some metric.
   Link-state protocols (such as OSPF and IS-IS) support the concept
   of areas (called "levels" in IS-IS), whereby all the routers in the
   same area share the same view (they have the same LSDB) and areas
   are interconnected by border routers that advertise IP prefix
   reachability between areas according to specific rules.

   A distance-vector protocol exchanges routing information rather
   than topological information.  A router running a distance-vector
   protocol exchanges information with the "neighbors" with which it
   has link-layer connectivity.  Tunneling and similar mechanisms can
   virtualize link-layer connectivity to allow neighbors that are
   multiple layer-2 hops away.  Rather than a map of the network
   topology from which each router can calculate routes, a distance-
   vector protocol node has information on what routes its neighbors
   have.  Each node's set of available routes is the union of its
   neighbors' routes plus a route to itself.  In a distance-vector
   protocol, nodes may advertise only routes which are in use,
   enabling on-demand discovery.
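   The distance-vector exchange described above can be sketched as
   follows; this is a minimal illustration of the general mechanism,
   not of any particular protocol, and all names are invented:

```python
INF = float("inf")

def process_advertisement(routes, neighbor, neighbor_routes, link_cost):
    """Merge a neighbor's advertised routes into our routing table.

    routes: dict dest -> (next_hop, cost); neighbor_routes: dict
    dest -> cost, as advertised by `neighbor`; link_cost: cost of the
    link to that neighbor.  Classic Bellman-Ford relaxation: adopt the
    neighbor as next hop whenever it offers a cheaper path.
    """
    updated = False
    for dest, adv_cost in neighbor_routes.items():
        candidate = link_cost + adv_cost
        current = routes.get(dest, (None, INF))[1]
        if candidate < current:
            routes[dest] = (neighbor, candidate)
            updated = True
    return updated  # True means our own advertisement should change

routes = {"A": (None, 0)}  # we are node A; route to self has cost 0
process_advertisement(routes, "B", {"A": 1, "C": 2}, link_cost=1)
# A now reaches C via B at cost 3; the cheaper route to itself is kept.
```

   Note that each node needs only its neighbors' advertisements, not a
   network map; the limitations mentioned next (loops, count to
   infinity) arise precisely because no node sees the full topology.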
   In comparison to link-state protocols, distance-vector protocols
   have the advantage of requiring only neighbor routing information,
   but they also have corresponding limitations which protocols must
   address, such as routing loops, count-to-infinity behavior, and
   slow convergence times, mitigated by mechanisms such as split
   horizon.  Furthermore, routing constraints are difficult to enforce
   with distance-vector protocols.

   Neighbor discovery is a critical component of any routing protocol.
   It enables a protocol to learn which other nodes are nearby and
   which of them it can use as next hops for routes.  As neighbor
   discovery is a key component of many protocols, several general
   protocols and protocol mechanisms have been designed to support it.
   A protocol's neighbor set is defined by how many "hops" away the
   set reaches.  For example, the 1-hop neighbor set of a node is all
   nodes it can directly communicate with at the link layer, while its
   2-hop neighbor set is its own 1-hop neighbor set plus the 1-hop
   neighbor sets of all of its 1-hop neighbors.

   Because nodes often have very limited resources for storing routing
   state, protocols cannot assume that nodes can store complete
   neighbor information.  For example, a node with 4 kB of RAM, a
   typical amount for top-end microcontrollers, cannot store full
   neighbor state when it has 1000 other nodes nearby.  This means
   that ROLL protocols must have mechanisms to decide which of many
   possible neighbors to monitor as routable next hops.  For elements
   such as 2-hop neighborhoods, these decisions can have a significant
   impact on the topology that other nodes observe, and therefore may
   require intelligent logic to prevent effects such as network
   partitions.

5.1.  Protocols Today

   Wired networks draw from both approaches.  OSPF and IS-IS, for
   example, are link-state protocols, while RIP is a distance-vector
   protocol.

   MANETs similarly draw from both approaches.
OLSR is a link-state 567 protocol, while AODV and DYMO are distance vector protocols. The 568 general consensus in core networks is to use link state routing 569 protocols as IGPs for a number of reasons: in many cases having a 570 complete network topology view is required to adequately compute the 571 shortest path according to some metric. For some applications, such 572 as MPLS Traffic Engineering, knowledge 573 of the Traffic Engineering Database is even required for constraint-based routing. 575 Furthermore, link state protocols typically have superior convergence 576 speed (the ability to find an alternate path in case of network element 577 failure), are easier to debug and troubleshoot, and introduce less 578 control packet overhead than distance vector protocols. In contrast, 579 distance vector protocols are simpler, require less computation, and 580 have smaller storage requirements. Most of these tradeoffs are 581 similar in wireless networks, with one exception. Because wireless 582 links can suffer from significant temporal variation, link state 583 protocols can have higher traffic loads, as topology changes must 584 propagate globally, while in a distance vector protocol a node can 585 make local routing decisions with no effect on the global routing 586 topology. 588 One protocol, DSR, does not easily fit into one of these two classes. 589 Although it is a distance vector protocol, DSR has several properties 590 that make it differ from most other protocols in this class. We 591 examine these differences in our discussion of DSR. 593 The next two sections summarize several well-established routing 594 protocols. The table below shows, based on the criteria described 595 above, whether these protocols meet the ROLL criteria. Appendix A 596 contains the reasoning behind each value in the table.

598        Protocol     State   Loss   Control   Link Cost   Node Cost
599      OSPF/IS-IS     fail    fail    fail       pass        fail
600      OLSRv2         fail     ?       ?         pass        pass
601      TBRPF          fail    pass    fail       pass         ?
602      RIP            pass    fail    pass        ?          fail
603      AODV           pass    fail    pass       fail        fail
604      DYMO           pass     ?      pass        ?           ?
605      DSR            fail    pass    pass       fail        fail

607 Figure 1

609 6. Link State Protocols 611 6.1. OSPF & IS-IS 613 OSPF (specified in [RFC2328] for IPv4 and in [RFC2740] for IPv6) is 614 a link state protocol designed for routing within an Internet 615 Autonomous System (AS). OSPF provides the ability to divide a 616 network into areas, which can establish a routing hierarchy. The 617 topology within an area is hidden from other areas, and IP prefix 618 reachability across areas (inter-area routing) is provided using 619 summary LSAs. The hierarchy implies that there is a top-level 620 routing area (the backbone area) which connects the other areas. Areas 621 may be connected to the backbone area through a virtual link. OSPF 622 maintains routing adjacencies by sending hello messages. OSPF 623 calculates the shortest path to a node using link metrics (which may 624 reflect link bandwidth, propagation delay, ...). OSPF Traffic 625 Engineering (OSPF-TE, [RFC3630]) extends OSPF to include information 626 on reservable, unreserved, and available bandwidth. 628 IS-IS (specified in [RFC1142]) is similar in many respects to OSPF, 629 but as a descendant of the OSI protocol suite it differs in some places, 630 such as the way areas are defined and used. However, routing 631 adjacencies are likewise maintained by local propagation of the LSDB, and 632 a shortest path computation is used over a metric space which may 633 measure delay, errors, or other link properties. 635 6.2. OLSR & OLSRv2 637 Optimized Link State Routing (OLSR) (see [RFC3626] and 638 [I-D.ietf-manet-olsrv2]) is a link state routing protocol for MANETs. 639 OLSR routers flood link state advertisement packets throughout the 640 entire network, such that each node has a map of the network 641 topology.
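In each of the link state protocols above, a router runs a shortest-path computation over its synchronized copy of the topology. The sketch below shows that computation in miniature (Dijkstra's algorithm over an invented LSDB; node names and costs are made up for illustration and are not drawn from any cited specification):

```python
import heapq

def shortest_paths(lsdb, source):
    """lsdb: {node: {neighbor: cost}}, the synchronized topology view.
    Returns {dest: cost} of shortest paths from source (Dijkstra)."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, cost in lsdb.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

lsdb = {"A": {"B": 1, "C": 4}, "B": {"C": 1}, "C": {}}
dist = shortest_paths(lsdb, "A")   # A reaches C more cheaply via B
```

Because every router runs this over the same LSDB, the resulting routes are loop-free; the cost is storing and flooding the full topology, which is the scaling problem analyzed in Appendix A.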
Because link variations can lead to heavy flooding traffic 642 when using a link state approach, OLSR establishes a topology for 643 minimizing this communication, imposes a minimum time interval between 644 two successive control transmissions by a router, and makes triggered 645 updates optional. Each node maintains a set of nodes called its 646 Multipoint Relays (MPRs), which is a subset of the one-hop neighbors 647 whose connectivity covers the two-hop neighborhood. Each node that 648 is an MPR maintains a set called its MPR selectors, which are the nodes 649 that have chosen it to be an MPR. 651 OLSR uses these two sets to apply three optimizations. First, only 652 MPRs generate link state information. Second, nodes use MPRs to 653 limit the set of nodes that forward link state packets. Third, an 654 MPR, rather than advertise all of its links, can advertise only links 655 to its MPR selectors. Together, these three optimizations can 656 greatly reduce the control traffic in dense networks, as the number 657 of MPRs should not increase significantly as a network becomes 658 denser. 660 OLSR selects routes based on hop counts, and assumes an underlying 661 protocol that determines whether a link exists between two nodes. 662 OLSR's optimized flooding allows it to quickly adapt to and propagate 663 topology changes. 665 OLSR is closely related to clustering algorithms in the wireless 666 sensor networking literature, in which cluster heads are elected such 667 that routing occurs over links between cluster heads and all other 668 nodes are leaves that communicate with a cluster head. 670 6.3. TBRPF 672 Topology Dissemination Based on Reverse-Path Forwarding (TBRPF, see 673 [RFC3684]) is another proactive link state protocol for MANETs. 675 TBRPF computes a source tree, which provides routes to all reachable 676 nodes. It reduces control packet overhead by having nodes only 677 transmit a subset of their source tree as well as by using 678 differential updates.
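OLSR's MPR selection described in Section 6.2 (choosing a subset of 1-hop neighbors whose links cover the whole 2-hop neighborhood) is, at its core, a set cover problem, and is commonly approximated with a greedy heuristic. The sketch below is one such heuristic, not the exact algorithm of [RFC3626]; the topology data is invented for illustration.

```python
# Greedy MPR selection sketch: repeatedly pick the 1-hop neighbor
# that covers the most still-uncovered strict 2-hop nodes.
# Topology data here is invented for the example.

def select_mprs(one_hop, two_hop_via):
    """one_hop: set of 1-hop neighbors.
    two_hop_via: {neighbor: set of strict 2-hop nodes reachable via it}.
    Returns a subset of one_hop covering all listed 2-hop nodes."""
    uncovered = set().union(*two_hop_via.values()) if two_hop_via else set()
    mprs = set()
    while uncovered:
        best = max(one_hop - mprs,
                   key=lambda n: len(two_hop_via.get(n, set()) & uncovered))
        gained = two_hop_via.get(best, set()) & uncovered
        if not gained:
            break     # remaining 2-hop nodes unreachable via the 1-hop set
        mprs.add(best)
        uncovered -= gained
    return mprs

one_hop = {"B", "C", "D"}
two_hop_via = {"B": {"X", "Y"}, "C": {"Y"}, "D": {"Z"}}
mprs = select_mprs(one_hop, two_hop_via)
# {"B", "D"} covers {X, Y, Z}; C is redundant and is not selected
```

The point of the heuristic is exactly the property the text describes: in dense networks the selected MPR set stays small even as the number of neighbors grows.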
680 The major difference between TBRPF and OLSR is the routing data that 681 nodes advertise and who chooses to aggregate information. In OLSR, 682 nodes select neighbors to be MPRs and advertise their link state for 683 them; in TBRPF, nodes elect themselves to advertise relevant link 684 state based on whether they act as next hops. 686 7. Distance Vector Protocols 688 7.1. RIP 690 The Routing Information Protocol (RIP) (defined in [RFC2453]) 691 predates OSPF. As it is a distance vector protocol, routing loops 692 can occur, and considerable work has been done to accelerate 693 convergence since the initial RIP protocols were introduced. RIP 694 measures route cost in terms of hops, and detects routing loops by 695 observing a route cost approach infinity, where "infinity" is defined 696 as a maximum number of hops. RIP is typically not appropriate for 697 situations where routes need to be chosen based on real-time 698 parameters such as measured delay, reliability, or load, or when the 699 network topology needs to be known for route computation. 701 "Triggered RIP" (defined in [RFC2091]) was originally designed to 702 support "on-demand" circuits. The aim of triggered RIP is to avoid 703 systematically sending the routing database at regular intervals. 704 Instead, triggered RIP sends the database when there is a routing 705 update or a next hop adjacency change: once neighbors have exchanged 706 their routing databases, only incremental updates need to be sent. 707 Because incremental updates cannot depend on periodic traffic to 708 overcome losses, triggered RIP uses acknowledgment-based mechanisms 709 for reliable delivery. 711 7.2. Ad hoc On-Demand Distance Vector Routing (AODV) 713 AODV (specified in [RFC3561]) is a distance vector protocol for 714 MANETs. When one AODV node requires a route to another, it floods a 715 request in the network to discover a route.
A depth-scoped flooding 716 process prevents the discovery from expanding to the most distant regions 717 of the network that are in the opposite direction of the destination. 718 AODV chooses routes that have the minimum hop count. 720 If an AODV route request reaches a node that has a route to the 721 destination (this includes the destination itself), that node sends a 722 reply along the reverse route. All nodes along the reverse route can 723 cache the route. When routes break due to topology changes, AODV 724 floods error messages and issues a new request. Because AODV is 725 on-demand, it only maintains routes for active nodes. When a link 726 breaks, AODV issues a Route Error (RERR) and a new route request 727 message (RREQ) with a higher sequence number, so nodes do not respond 728 from their route caches. These packets can flood the entire network. 730 7.3. DYMO 732 Dynamic MANET On-demand routing (DYMO) ([I-D.ietf-manet-dymo]) is an 733 evolution of AODV. The basic functionality is the same, but it has 734 different packet formats and handling rules, and it supports path 735 accumulation. Path accumulation allows a single DYMO route request 736 to generate routes to all nodes along the route to that destination. 737 Like AODV, DYMO uses a distance value as its routing metric; the distance 738 must be at least the hop count, but it allows DYMO to represent link 739 costs. Like AODV, on link breaks DYMO issues a new route request 740 message (RREQ) with a higher sequence number, so nodes do not respond 741 from their route caches. Correspondingly, a route request can flood 742 the entire network. 744 7.4. DSR 746 Dynamic Source Routing ([RFC4728]) is a distance vector protocol for 747 MANETs, but a DSR packet source explicitly specifies the route for 748 each packet.
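Because the whole route travels in the packet header, an intermediate DSR node only reads and advances that header rather than consulting a per-destination routing table. A minimal sketch of this forwarding step follows; the packet layout and node names are invented for illustration and are not the DSR header format of [RFC4728].

```python
# Source-route forwarding sketch: the full path is carried in the
# packet, so intermediate nodes keep no per-destination state.
# The dict layout and node names are invented for this example.

def forward(packet, self_id):
    """packet: {"route": [node, ...], "hop": index into route}.
    Returns the next hop, or None if this node is the destination."""
    route, hop = packet["route"], packet["hop"]
    assert route[hop] == self_id, "packet arrived at the wrong node"
    if hop == len(route) - 1:
        return None                     # delivered to the destination
    packet["hop"] += 1
    return route[packet["hop"]]

pkt = {"route": ["A", "B", "C"], "hop": 0}
nh = forward(pkt, "A")    # source A forwards toward "B"
```

This is why, as the text notes, only the source needs route state, and why no sequence numbers are needed to avoid loops: the path is fixed when the packet is created.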
Because the route is determined at a single place -- 749 the source -- DSR does not require sequence numbers or other 750 mechanisms to prevent routing loops, as there is no problem of 751 inconsistent routing tables. Unlike AODV and DYMO, by pushing state 752 into packet headers, DSR does not require per-destination routing 753 state. Instead, a node originating packets only needs to store a 754 spanning tree of the part of the network it is communicating with. 756 8. Neighbor Discovery 758 A limit on maintained routing state (light footprint) prevents ROLL 759 protocols from assuming they know all 1-hop, 2-hop, or N-hop 760 neighbors. For this reason, while protocols such as MANET-NHDP 761 ([I-D.ietf-manet-nhdp]) and IPv6's neighbor discovery ([RFC4861]) 762 provide basic mechanisms for discovering link-layer neighbors, not 763 all of their features are relevant. This section describes these two 764 protocols, their capabilities, and how ROLL protocols could leverage 765 them. 767 8.1. IPv6 Neighbor Discovery 769 IPv6 neighbor discovery provides mechanisms for nodes to discover 770 single-hop neighbors as well as routers that can forward packets past 771 the local neighborhood. There is an implicit assumption that the 772 designation of a node as a router is static (e.g., 773 based on a wired topology). The fact that all routers must respond 774 to a Router Solicitation requires that the number of routers within a 775 1-hop neighborhood be small, or there will be a reply implosion. 776 Furthermore, IPv6 neighbor discovery's support of address 777 autoconfiguration assumes address provisioning, in that addresses 778 reflect the underlying communication topology. IPv6 neighbor 779 discovery does not consider asymmetric links. Nevertheless, it may 780 be possible to extend and adapt IPv6's mechanisms to wireless in 781 order to avoid response storms and implosions. 783 8.2.
MANET-NHDP 785 The MANET Neighborhood Discovery Protocol (MANET-NHDP) provides 786 mechanisms for discovering a MANET router's symmetric 2-hop 787 neighborhood. It maintains information on discovered links, their 788 interfaces, status, and neighbor sets. MANET-NHDP advertises a 789 node's local link state; by listening to all of its 1-hop neighbors' 790 advertisements, a node can compute its 2-hop neighborhood. 791 MANET-NHDP link state advertisements can include a link quality metric. 792 MANET-NHDP's node information base includes all interface addresses 793 of each 1-hop neighbor: for low-power nodes, this state requirement 794 can be difficult to support. 796 9. Conclusion 798 Figure 1 shows that no existing IETF protocol specification meets the 799 criteria described in Section 4. Therefore, having a routing 800 protocol for LLNs requires new protocol specification documents. 801 Whether such documents describe modifications to existing protocols 802 or new protocols is outside the scope of this document and warrants 803 further discussion. However, the results in Figure 1 may provide 804 some insight or guidance in such a discussion, indicating which 805 protocol mechanisms may be better suited to LLNs than others. 807 Such a discussion should not, however, be limited to the protocols 808 listed in Figure 1. There are many existing protocols which 809 are unsuitable as general routing protocols but describe mechanisms 810 that could be very useful in the context of LLNs. Any such future 811 discussion ought to consider how routing in LLNs may benefit from 812 examining mechanisms from a broader suite of protocols than those 813 listed in Figure 1. 815 10. Security Considerations 817 LLNs have significant security considerations, which vary greatly 818 depending on the application domain. For example, deployers of industrial 819 monitoring networks may impose more stringent confidentiality 820 requirements than home automation networks do.
Such requirements are 821 an important consideration in protocol design, but their variety 822 makes distilling them into a minimal set of "necessary but not 823 sufficient" criteria of limited use. The criteria in this 824 document are not the only requirements and considerations in LLN 825 protocols, and as such the omission of a security criterion should 826 not be interpreted as a lack of a need for security in LLNs. 828 11. IANA Considerations 830 This document includes no request to IANA. 832 12. Acknowledgements 834 The authors would like to thank all the members of the ROLL working 835 group for their valuable comments, and the chairs for their helpful 836 guidance. 838 We are also indebted to the Sensor Network Architecture group at 839 Berkeley for contributing their helpful analysis: Prabal Dutta, 840 Rodrigo Fonseca, Xiaofan Jiang, Jaein Jeong, Jorge Ortiz, and Jay 841 Tanega. 843 13. References 845 13.1. Normative References 847 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 848 Requirement Levels", BCP 14, RFC 2119, March 1997. 850 13.2. Informative References 852 [I-D.ietf-manet-dymo] 853 Chakeres, I. and C. Perkins, "Dynamic MANET On-demand 854 (DYMO) Routing", draft-ietf-manet-dymo-15 (work in 855 progress), November 2008. 857 [I-D.ietf-manet-nhdp] 858 Clausen, T., Dearlove, C., and J. Dean, "MANET 859 Neighborhood Discovery Protocol (NHDP)", 860 draft-ietf-manet-nhdp-07 (work in progress), July 2008. 862 [I-D.ietf-manet-olsrv2] 863 Clausen, T., Dearlove, C., and P. Jacquet, "The Optimized 864 Link State Routing Protocol version 2", 865 draft-ietf-manet-olsrv2-07 (work in progress), July 2008. 867 [I-D.ietf-roll-home-routing-reqs] 868 Porcu, G., "Home Automation Routing Requirements in Low 869 Power and Lossy Networks", 870 draft-ietf-roll-home-routing-reqs-06 (work in progress), 871 November 2008. 873 [I-D.ietf-roll-indus-routing-reqs] 874 Networks, D., Thubert, P., Dwars, S., and T.
Phinney, 875 "Industrial Routing Requirements in Low Power and Lossy 876 Networks", draft-ietf-roll-indus-routing-reqs-03 (work in 877 progress), December 2008. 879 [I-D.ietf-roll-urban-routing-reqs] 880 Dohler, M., Watteyne, T., Winter, T., Barthel, D., 881 Jacquenet, C., Madhusudan, G., and G. Chegaray, "Urban 882 WSNs Routing Requirements in Low Power and Lossy 883 Networks", draft-ietf-roll-urban-routing-reqs-03 (work in 884 progress), January 2009. 886 [I-D.irtf-dtnrg-prophet] 887 Lindgren, A. and A. Doria, "Probabilistic Routing Protocol 888 for Intermittently Connected Networks", 889 draft-irtf-dtnrg-prophet-01 (work in progress), 890 November 2008. 892 [RFC1142] Oran, D., "OSI IS-IS Intra-domain Routing Protocol", 893 RFC 1142, February 1990. 895 [RFC2091] Meyer, G. and S. Sherry, "Triggered Extensions to RIP to 896 Support Demand Circuits", RFC 2091, January 1997. 898 [RFC2328] Moy, J., "OSPF Version 2", STD 54, RFC 2328, April 1998. 900 [RFC2453] Malkin, G., "RIP Version 2", STD 56, RFC 2453, 901 November 1998. 903 [RFC2740] Coltun, R., Ferguson, D., and J. Moy, "OSPF for IPv6", 904 RFC 2740, December 1999. 906 [RFC3561] Perkins, C., Belding-Royer, E., and S. Das, "Ad hoc On- 907 Demand Distance Vector (AODV) Routing", RFC 3561, 908 July 2003. 910 [RFC3626] Clausen, T. and P. Jacquet, "Optimized Link State Routing 911 Protocol (OLSR)", RFC 3626, October 2003. 913 [RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering 914 (TE) Extensions to OSPF Version 2", RFC 3630, 915 September 2003. 917 [RFC3684] Ogier, R., Templin, F., and M. Lewis, "Topology 918 Dissemination Based on Reverse-Path Forwarding (TBRPF)", 919 RFC 3684, February 2004. 921 [RFC4728] Johnson, D., Hu, Y., and D. Maltz, "The Dynamic Source 922 Routing Protocol (DSR) for Mobile Ad Hoc Networks for 923 IPv4", RFC 4728, February 2007. 925 [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, 926 "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, 927 September 2007. 
929 [RFC5050] Scott, K. and S. Burleigh, "Bundle Protocol 930 Specification", RFC 5050, November 2007. 932 [RFC5326] Ramadas, M., Burleigh, S., and S. Farrell, "Licklider 933 Transmission Protocol - Specification", RFC 5326, 934 September 2008. 936 [chakeres.transparent] 937 Chakeres, I. and E. Belding-Royer, "Transparent Influence 938 of Path Selection in Heterogeneous Ad hoc Networks", 939 Proceedings of the 15th IEEE International Symposium on 940 Personal, Indoor and Mobile Radio Communications (PIMRC), 941 September 2004. 943 Appendix A. Routing protocol scalability analysis 945 The aim of this Appendix is to provide the details of the 946 routing scalability analysis. 948 "OSPF & IS-IS" 950 OSPF floods link state through a network. Each router must receive 951 this complete link set. OSPF fails the routing state criterion 952 because it requires each router to discover each link in the network, 953 for a total routing table size which is O(N * L). This also causes 954 it to fail the control cost criterion, since this information must be 955 propagated. Furthermore, changes in the link set require re-flooding 956 the network link state even if the changed links were not being used. 957 Since link state changes in wireless networks are often uncorrelated 958 with data traffic and are instead caused by external (environmental) 959 factors, this causes OSPF to fail both the control cost and loss 960 response criteria. OSPF routers can impose policies on the use of 961 links and can consider link properties (Type of Service), as the cost 962 associated with an edge is configurable by the system administrator 963 [RFC2328], so OSPF receives a pass for link cost. However, there is no way 964 to associate metrics with routers (as costs are only applied to 965 outgoing interfaces, i.e., edges) when computing paths, so OSPF fails 966 the node cost criterion.
While [RFC3630] discusses paths that take 967 into account node attributes, it specifically states that no known 968 algorithm or mechanism currently exists for incorporating this into 969 the OSPF RFC. 971 IS-IS receives the same results as OSPF, because it maintains a 972 consistent LSDB using similar mechanisms, and can account for link 973 costs but not router costs in its shortest path computation. 975 "OLSRv2" 977 OLSRv2 is a proactive link state protocol, flooding link state 978 information through a set of multipoint relays (MPRs). Routing state 979 includes 1-hop neighbor information for each node in the network, 980 1-hop and 2-hop information for neighbors (for MPR selection), and a 981 routing table (consisting of destination and next hop), resulting in 982 state proportional to network size and density (O(N*L + L^2)) and 983 failing the routing state criterion. 985 Unacceptable control traffic overhead may arise from flooding and 986 maintenance. HELLO messages are periodically broadcast local beacon 987 messages, but TC messages spread topology information throughout the 988 network (using MPRs). As such, control traffic is proportional to 989 O(N^2). MPRs reduce this load to O(N^2 / L). As the number of MPRs 990 is inversely proportional to the density of the network and L is 991 bounded by N, this means control traffic is at best proportional to 992 O(N). 994 Fisheye routing is a technique to reduce the frequency of routing 995 updates as they propagate away from their source. This 996 has the potential to reduce the control overhead to acceptable 997 levels, and it is possible to implement this technique without 998 violating the specification, because the specification does not 999 require that all updates be sent with the same frequency. However, 1000 there is no specification of how this should be accomplished. Thus, 1001 OLSR receives a "?" for the control traffic criterion.
Fisheye 1002 routing does not alter the table size, so it does not modify OLSR's 1003 result ("fail") on the routing state criterion. 1005 Furthermore, changes in the link set may require re-flooding the 1006 network link state even if those links were not being used by 1007 routing. OLSR makes these triggered floods optional, but as sending 1008 no triggered updates will raise problems in topology consistency, 1009 OLSRv2 receives a '?' in the loss response criterion. 1011 OLSR allows for specification of link quality, and also provides a 1012 'Willingness' metric to represent node cost, giving it a pass for 1013 both of those criteria. 1015 "TBRPF" 1017 As a link state protocol where each node maintains a database of the 1018 entire network topology, TBRPF's routing table size scales with 1019 network size and density, leading to table sizes which are O(N * L) 1020 when a node receives disjoint link sets from its neighbors. This 1021 causes the protocol to fail the routing state criterion. The 1022 protocol's use of differential updates should allow both fast 1023 response time and incremental changes once the distributed database 1024 of links has been established. Differential updates are only used to 1025 reduce response time to changing network conditions, not to reduce 1026 the amount of topology information sent, since each node will 1027 periodically send its piece of the topology. As a result, TBRPF 1028 fails the control overhead criterion. However, its differential 1029 updates triggered by link failure do not immediately cause a global 1030 re-flooding of state (flooding only to affected routers) [RFC3684], 1031 leading to a pass for loss response.
It also provides a mechanism whereby routers can 1037 maintain multiple link metrics to a single neighbor, some of which 1038 can be advertised by the neighbor router [RFC3684]. Although the RFC 1039 does not specify a policy for using these values, developing one 1040 could allow TBRPF to satisfy this requirement, leading to a ? for the 1041 node cost requirement. 1043 "RIP" 1045 RIP is a distance vector protocol: all routers maintain a route to 1046 all other routers. Routing table size is therefore O(N). However, 1047 if destinations are known a priori, table size can be reduced to O(D), 1048 resulting in a pass for routing state. While standard RIP requires that 1049 each node broadcast a beacon per period and that updates be 1050 propagated by affected nodes, triggered RIP only sends updates when 1051 network conditions change in response to the data path, so RIP passes 1052 the control cost criterion. A loss triggers updates, which propagate only 1053 if the lost link is part of a best route, but even if the route is not actively being 1054 used, resulting in a fail for loss response. The rate of triggered 1055 updates is throttled, and these are only differential updates, yet 1056 this still does not account for other control traffic (or tie it to 1057 the data rate) or prevent the triggered updates from being flooded along 1058 non-active paths [RFC2453]. 1060 RIP receives a ? for link cost because, while current implementations 1061 focus on hop count and that is the metric used in [RFC2453], the RFC 1062 also mentions that more complex metrics such as differences in 1063 bandwidth and reliability could be used. However, the RFC also 1064 states that real-time metrics such as link quality would create 1065 instability, and the concept of node cost only appears as metrics 1066 assigned to external networks. While RIP has the concept of a 1067 network cost, it is insufficient to describe node properties, and so 1068 RIP fails the node cost criterion.
1070 "AODV" 1072 AODV table size is a function of the number of communicating pairs in 1073 the network, scaling with O(D). This is acceptable and so AODV 1074 passes the routing state criterion. As an on-demand protocol, AODV 1075 does not generate any traffic until data is sent, and so control 1076 traffic is correlated with the data and so it receives a pass for 1077 control traffic. When a broken link is detected, AODV will use a 1078 precursor list maintained for each destination to inform downstream 1079 routers (with a RERR) of the topology change. However, the RERR 1080 message is forwarded by all nodes that have a route that uses the 1081 broken link, even if the route is not currently active, leading to a 1082 fail for loss response [RFC3561]. 1084 AODV fails the link cost criterion because the only metric used is 1085 hop count, and this is hardcoded in the route table entry, according 1086 to the RFC [RFC3561]. It fails the node cost requirement because 1087 there is no way for a router to indicate its [lack of] willingness to 1088 route while still adhering to the RFC. 1090 "DYMO" 1092 The design of DYMO shares much with AODV, with some changes to remove 1093 precursor lists and compact various messages. It still passes the 1094 routing state criterion because it only maintains routes requested by 1095 RREQ messages, resulting in O(D) table size. Control traffic (RREQ, 1096 RREP, and RREQ) are still driven by data, and hence DYMO passes the 1097 control cost criterion. The DYMO specification places very few 1098 requirements on how nodes respond to route error RERR messages that 1099 denote a broken route. Therefore, while it is possible for a DYMO 1100 implementation to meet the loss response criterion, the specification 1101 is not clear on how to meet the criterion while still maintaining 1102 routes as link breaks . This leads to a ? in loss repsonse 1103 [I-D.ietf-manet-dymo]. 
1105 DYMO indicates that the "distance" of a link can vary from 1-65535 1106 [I-D.ietf-manet-dymo], leading to a ? in link cost. While additional 1107 routing information can be added to DYMO messages, there is no mention 1108 of node properties, and it is uncertain whether DYMO can 1109 support them. While there are no direct mechanisms in DYMO that can 1110 meet this criterion, some research results suggest that indirect 1111 mechanisms, such as packet timing, could [chakeres.transparent]. 1112 Therefore, supporting node cost would require additional mechanisms 1113 or specifications, leading to a ? on the node cost criterion. 1115 "DSR" 1117 DSR performs on-demand route discovery and source routing of 1118 packets. It maintains a source route for all destinations, and also 1119 a blacklist of all unidirectional neighbor links [RFC4728], leading 1120 to a total table size of O(D + L) and failing the routing state 1121 criterion. Control traffic is completely data driven, and so DSR 1122 receives a pass for this criterion. Finally, a transmission failure 1123 only prompts an unreachable destination message to be sent to the source of 1124 the message, passing the loss response criterion. 1126 DSR fails the link cost criterion because its source routes are 1127 advertised only in terms of hops, such that all advertised links are 1128 considered equivalent. DSR also fails the node cost criterion 1129 because a node has no way of indicating its willingness to serve as a 1130 router and forward messages. 1132 Appendix B. Logarithmic scaling of control cost 1134 To satisfy the control cost criterion, a protocol's control traffic 1135 communication rate must be bounded by the data rate, plus a small 1136 constant. That is, if there is a data rate R, the control rate must be 1137 O(R) + e, where e is a very small constant (epsilon). Furthermore, 1138 the control rate may grow logarithmically with the size of the local 1139 neighborhood L.
Note that this is a bound: it represents the most 1140 traffic a protocol may send, and good protocols may send much less. 1141 So the control rate is bounded by O(R log(L)) + e. 1143 The logarithmic factor comes from the fundamental limits of any 1144 protocol that maintains a communication rate. For example, consider 1145 e, the small constant rate of communication traffic allowed. Since 1146 this rate is shared by the neighborhood, to maintain O(e) only one of the L 1147 nodes may transmit per time interval defined by e: that one node has 1148 a transmission, and all other nodes have a reception, which prevents 1149 them from transmitting. However, wireless networks are lossy. 1150 Suppose that the network has a 10% packet loss rate. Then if L=10, 1151 the expectation is that one of the nodes will drop the packet. Not 1152 hearing a transmission, it will think it can transmit. This will 1153 lead to 2 transmissions. If L=100, then one node will not hear the 1154 first two transmissions, and there will be 3. The number of 1155 transmissions, and the communication rate, will grow with O(log(L)). 1157 This logarithmic bound can be prevented through explicit coordination 1158 (e.g., leader election), but such approaches assume state and 1159 control traffic to elect leaders. As a logarithmic factor in terms 1160 of density is not a large stumbling block or major limitation, the 1161 much greater protocol flexibility it enables is worth its small cost. 1163 Authors' Addresses 1165 Philip Levis 1166 Stanford University 1167 358 Gates Hall, Stanford University 1168 Stanford, CA 94305-9030 1169 USA 1171 Email: pal@cs.stanford.edu 1173 Arsalan Tavakoli 1174 UC Berkeley 1175 Soda Hall, UC Berkeley 1176 Berkeley, CA 94707 1177 USA 1179 Email: arsalan@eecs.berkeley.edu 1181 Stephen Dawson-Haggerty 1182 UC Berkeley 1183 Soda Hall, UC Berkeley 1184 Berkeley, CA 94707 1185 USA 1187 Email: stevedh@cs.berkeley.edu