Networking Working Group                                  M. Dohler, Ed.
Internet-Draft                                                      CTTC
Intended status: Informational                          T. Watteyne, Ed.
Expires: December 3, 2008                             France Telecom R&D
                                                          T. Winter, Ed.
                                                             Eka Systems
                                                           June 30, 2008

     Urban WSNs Routing Requirements in Low Power and Lossy Networks
                  draft-ietf-roll-urban-routing-reqs-01

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on December 3, 2008.

Abstract

The application-specific routing requirements for Urban Low Power and Lossy Networks (U-LLNs) are presented in this document.  In the near future, sensing and actuating nodes will be placed outdoors in urban environments so as to improve people's living conditions as well as to monitor compliance with increasingly strict environmental laws.  These field nodes are expected to measure and report a wide gamut of data, such as that required in smart metering, waste disposal, meteorological, pollution and allergy reporting applications.  The majority of these nodes are expected to communicate wirelessly, which - given the limited radio range and the large number of nodes - requires the use of suitable routing protocols.  The design of such protocols will be mainly impacted by the limited resources of the nodes (memory, processing power, battery, etc.) and the particularities of the outdoor urban application scenarios.  As such, for a wireless ROLL solution to be useful, the protocol(s) ought to be energy-efficient, scalable, and autonomous.  This document aims to specify a set of requirements reflecting these and further U-LLN-tailored characteristics.
Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

1.  Introduction
2.  Terminology
3.  Overview of Urban Low Power Lossy Networks
    3.1.  Canonical Network Elements
        3.1.1.  Access Points
        3.1.2.  Repeaters
        3.1.3.  Actuators
        3.1.4.  Sensors
    3.2.  Topology
    3.3.  Resource Constraints
    3.4.  Link Reliability
4.  Urban LLN Application Scenarios
    4.1.  Deployment of Nodes
    4.2.  Association and Disassociation/Disappearance of Nodes
    4.3.  Regular Measurement Reporting
    4.4.  Queried Measurement Reporting
    4.5.  Alert Reporting
5.  Traffic Pattern
6.  Requirements of Urban LLN Applications
    6.1.  Scalability
    6.2.  Parameter Constrained Routing
    6.3.  Support of Autonomous and Alien Configuration
    6.4.  Support of Highly Directed Information Flows
    6.5.  Support of Heterogeneous Field Devices
    6.6.  Support of Multicast, Anycast, and Implementation of Groupcast
    6.7.  Network Dynamicity
    6.8.  Latency
7.  Security Considerations
8.  Open Issues
9.  IANA Considerations
10. Acknowledgements
11. References
    11.1.  Normative References
    11.2.  Informative References
Authors' Addresses
Intellectual Property and Copyright Statements

1.  Introduction

This document details application-specific routing requirements for Urban Low Power and Lossy Networks (U-LLNs).  U-LLN use cases and associated routing protocol requirements will be described.

Section 2 defines terminology useful in describing U-LLNs.

Section 3 provides an overview of U-LLN applications.

Section 4 describes a few typical use cases for U-LLN applications, exemplifying deployment problems and related routing issues.

Section 5 describes traffic flows that will be typical for U-LLN applications.

Section 6 discusses the routing requirements for networks comprising such constrained devices in a U-LLN environment.  These requirements may overlap with requirements derived from other application-specific requirements documents or with those listed in [I-D.culler-rl2n-routing-reqs].

Section 7 provides an overview of security considerations of U-LLN implementations.

2.  Terminology

Access Point:  The access point is an infrastructure device that connects the low power and lossy network system to a backbone network.
Actuator:  a field device that moves or controls equipment.

AMI:  Advanced Metering Infrastructure, part of Smart Grid.  Encompasses smart-metering applications.

DA:  Distribution Automation, part of Smart Grid.  Encompasses technologies for maintenance and management of electrical distribution systems.

Field Device:  physical device placed in the urban operating environment.  Field devices include sensors, actuators and repeaters.

LLN:  Low power and Lossy Network.

ROLL:  Routing over Low power and Lossy networks.

Smart Grid:  a broad class of applications to network and automate utility infrastructure.

Schedule:  an agreed execution, wake-up, transmission, reception, etc. time-table between two or more field devices.

U-LLN:  Urban LLN.

3.  Overview of Urban Low Power Lossy Networks

3.1.  Canonical Network Elements

A U-LLN is understood to be a network composed of four key elements, i.e.

1.  access points,

2.  repeaters,

3.  actuators, and

4.  sensors,

which communicate wirelessly.

3.1.1.  Access Points

The access point can be used as:

1.  a router to a wider infrastructure (e.g. the Internet),

2.  a data sink (e.g. data collection and processing from sensors), and

3.  a data source (e.g. instructions towards actuators).

There can be several access points connected to the same U-LLN; however, the number of access points is well below the number of sensing nodes.  The access points are mainly static, i.e. fixed to a random or pre-planned location, but can be nomadic, e.g. in the form of a walking supervisor.  Access points may, but generally do not, suffer from any form of (long-term) resource constraint, except that they need to be small and sufficiently cheap.

3.1.2.  Repeaters

Repeaters generally act as relays with the aim of closing coverage and routing gaps; examples of their use are to:

1.  prolong the U-LLN's lifetime,

2.  balance nodes' energy depletion, and

3.  build advanced sensing infrastructures.

There can be several repeaters supporting the same U-LLN; however, the number of repeaters is well below the number of sensing nodes.  The repeaters are mainly static, i.e. fixed to a random or pre-planned location.  Repeaters may, but generally do not, suffer from any form of (long-term) resource constraint, except that they need to be small and sufficiently cheap.  Repeaters differ from access points in that they do not act as a data sink/source.  They differ from actuator and sensing nodes in that they neither control nor sense.

3.1.3.  Actuators

Actuator nodes control urban devices upon being instructed by signaling arriving from, or being forwarded by, the access point(s); examples are street or traffic lights.  The number of actuator points is well below the number of sensing nodes.  Some sensing nodes may include an actuator component, e.g. an electric meter node with integrated support for remote service disconnect.  Actuators are capable of forwarding data.  Actuators may generally be mobile but are likely to be static in the majority of near-future roll-outs.  Similar to the access points, actuator nodes do not suffer from any long-term resource constraints.

3.1.4.  Sensors

Sensing nodes measure a wide gamut of physical data, including but not limited to:

1.  municipal consumption data, such as smart metering of gas, water, electricity, waste, etc.;

2.  meteorological data, such as temperature, pressure, humidity, sun index, strength and direction of wind, etc.;

3.  pollution data, such as polluting gases (SO2, NOx, CO, ozone), heavy metals (e.g. mercury), pH, radioactivity, etc.;

4.  ambient data, such as allergic elements (pollen, dust), electromagnetic pollution, noise levels, etc.
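As a purely illustrative sketch (not part of the requirements, and with hypothetical field names), a periodic report carrying one of the measurements listed above might be modeled as a small record:

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    """Illustrative report from a U-LLN sensing node; field names are
    an assumption for this sketch, not mandated by this document."""
    node_id: str    # unique address assigned during network ramp-up
    quantity: str   # e.g. "temperature", "NOx", "water_consumption"
    value: float    # measured value
    unit: str       # e.g. "degC", "ppm", "m3"
    timestamp: int  # seconds since epoch at time of measurement

# A meteorological reading as it might be queued for regular reporting:
report = SensorReport("meter-0042", "temperature", 21.5, "degC", 1214784000)
```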
A prominent example is a Smart Grid application which consists of a city-wide network of smart meters and distribution monitoring sensors.  Smart meters in an urban Smart Grid application will include electric, gas, and/or water meters, typically administered by one or multiple utility companies.  These meters will be capable of advanced sensing functionalities such as measuring quality of service, providing granular interval data, or automating the detection of alarm conditions.  In addition, they may be capable of advanced interactive functionalities such as remote service disconnect or remote demand reset.  More advanced scenarios include demand response systems for managing peak load, and distribution automation systems to monitor the infrastructure which delivers energy throughout the urban environment.  Sensor nodes capable of providing this type of functionality may sometimes be referred to as Advanced Metering Infrastructure (AMI).

3.2.  Topology

Whilst millions of sensing nodes may very well be deployed in an urban area, they are likely to be associated with more than one network, where these networks may or may not communicate with one another.  The number of sensing nodes deployed in the urban environment in support of some applications is expected to be in the order of 10^2 to 10^7; this is still very large and unprecedented in current roll-outs.  The network MUST be capable of supporting the organization of a large number of sensing nodes into regions containing on the order of 10^2 to 10^4 sensing nodes each.

Deployment of nodes is likely to happen in batches, e.g. boxes of hundreds to thousands of nodes arrive and are deployed.  The location of the nodes is random within given topological constraints, e.g. placement along a road, river, or at individual residences.

3.3.  Resource Constraints

The nodes are highly resource constrained, i.e.
cheap hardware, low memory and no infinite energy source.  Different node powering mechanisms are available, such as:

1.  non-rechargeable battery;

2.  rechargeable battery with regular recharging (e.g. sunlight);

3.  rechargeable battery with irregular recharging (e.g. opportunistic energy scavenging);

4.  capacitive/inductive energy provision (e.g. active RFID);

5.  always on (e.g. powered electricity meter).

In the case of a battery-powered sensing node, the battery lifetime is usually in the order of 10-15 years, rendering network lifetime maximization with battery-powered nodes beyond this lifespan useless.

The physical and electromagnetic distances between the four key elements, i.e. sensors, actuators, repeaters and access points, can generally be very large, i.e. from several hundreds of meters to one kilometer.  Not every field node is likely to reach the access point in a single hop, thereby requiring suitable routing protocols which manage the information flow in an energy-efficient manner.  Sensor nodes are capable of forwarding data.

3.4.  Link Reliability

The links between the network elements are volatile due to the following set of non-exclusive effects:

1.  packet errors due to wireless channel effects;

2.  packet errors due to medium access control;

3.  packet errors due to interference from other systems;

4.  link unavailability due to network dynamicity; etc.

The wireless channel causes the received power to drop below a given threshold in a random fashion, thereby causing detection errors in the receiving node.  The underlying effects are path loss, shadowing and fading.

Since the wireless medium is broadcast in nature, nodes within each other's communication radius require suitable medium access control protocols which are capable of resolving any arising contention.
Some available protocols may cause packets of neighbouring nodes to collide and hence cause a link outage.

Furthermore, the outdoor deployment of U-LLNs also has implications for the interference temperature, and hence link reliability and range, if ISM bands are to be used.  For instance, if the 2.4 GHz ISM band is used to facilitate communication between U-LLN nodes, then heavily loaded WLAN hot-spots become a detrimental performance factor jeopardizing the functioning of the U-LLN.

Finally, nodes appearing and disappearing cause dynamics in the network which can yield link outages and changes of topologies.

4.  Urban LLN Application Scenarios

Urban applications represent a special segment of LLNs with a unique set of requirements.  To facilitate the requirements discussion in Section 6, this section lists a few typical but not exhaustive deployment problems and use cases of U-LLNs.

4.1.  Deployment of Nodes

Contrary to other LLN applications, deployment of nodes is likely to happen in batches out of a box.  Typically, hundreds to thousands of nodes are shipped by the manufacturer with pre-programmed functionalities, which are then rolled out by a service provider or subcontracted entities.  Prior to or after roll-out, the network needs to be ramped up.  This initialization phase may include, among others, allocation of addresses, (possibly hierarchical) roles in the network, synchronization, determination of schedules, etc.

If initialization is performed prior to roll-out, all nodes are likely to be in one another's 1-hop radio neighborhood.  Pre-programmed MAC and routing protocols may hence fail to function properly, thereby wasting a large amount of energy.  Whilst the major burden will be on resolving MAC conflicts, any proposed U-LLN routing protocol needs to cater for such a case.
For instance, 0-configuration and network address allocation need to be properly supported, etc.

After roll-out, nodes will have a finite set of one-hop neighbors, likely of low cardinality (in the order of 5-10).  However, some nodes may be deployed in areas where there are hundreds of neighboring devices.  In the resulting topology there may be regions where many (redundant) paths are possible through the network.  Other regions may be dependent on critical links to achieve connectivity with the rest of the network.  Any proposed LLN routing protocol ought to support the autonomous organization and configuration of the network at the lowest possible energy cost [Lu2007], where autonomy is understood to be the ability of the network to operate without external influence.  For example, nodes in urban sensor networks SHOULD be able to:

o  Dynamically adapt to ever-changing communication conditions (possible degradation of QoS, variable nature of the traffic (real time vs. non real time, sensed data vs. alerts), node mobility, a combination thereof, etc.),

o  Dynamically provision the service-specific (if not traffic-specific) resources that will comply with the QoS and security requirements of the service,

o  Dynamically compute, select and possibly optimize the (multiple) path(s) that will be used by the participating devices to forward the traffic towards the actuators and/or the access point according to the service-specific and traffic-specific QoS, traffic engineering and security policies that will have to be enforced at the scale of a routing domain (that is, a set of networking devices administered by a globally unique entity), or a region of such a domain (e.g. a metropolitan area composed of clusters of sensors).
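The dynamic path computation in the last bullet can be illustrated with a toy sketch.  This is purely illustrative; the link-cost weighting below (radio quality divided by remaining battery fraction) is an assumption for the example, not a metric defined by this document:

```python
import heapq

def best_path(links, energy, src, dst):
    """Dijkstra over a cost that penalizes links through depleted nodes.

    links:  {node: [(neighbor, etx), ...]}, etx being the expected
            transmission count of the radio link (lower is better).
    energy: {node: remaining battery fraction in (0, 1]}.
    The cost etx / energy[neighbor] is a hypothetical example of a
    parameter-constrained routing metric.
    """
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, etx in links.get(u, []):
            cost = d + etx / energy[v]
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    # Walk predecessors back from the destination to rebuild the path.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Two relays offer identical radio links from sensor "s" to access point
# "ap"; the relay with the healthier battery is selected.
links = {"s": [("r1", 1.0), ("r2", 1.0)],
         "r1": [("ap", 1.0)], "r2": [("ap", 1.0)]}
energy = {"s": 1.0, "r1": 0.9, "r2": 0.2, "ap": 1.0}
```

Calling `best_path(links, energy, "s", "ap")` on this topology routes via "r1", since forwarding through the nearly depleted "r2" is made expensive by the energy term.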
The result of such organization SHOULD be that each node or set of nodes is uniquely addressable so as to facilitate the set-up of schedules, etc.

The U-LLN routing protocol(s) MUST accommodate both unicast and multicast forwarding schemes.  The U-LLN routing protocol(s) SHOULD support anycast forwarding schemes.  Unless exceptionally needed, broadcast forwarding schemes are not advised in urban sensor networking environments.

4.2.  Association and Disassociation/Disappearance of Nodes

After the initialization phase, and possibly some operational time, new nodes may be injected into the network as well as existing nodes removed from the network.  The former might be because a removed node is replaced, denser readings/actuations are needed, or routing protocols report connectivity problems.  The latter might be because a node's battery is depleted, the node is removed for maintenance, the node is stolen or accidentally destroyed, etc.  A differentiation SHOULD be made between node disappearance, where the node disappears without prior notification, and user- or node-initiated disassociation ("phased out"), where the node has enough time to inform the network about its removal.

The protocol(s) hence SHOULD support the pinpointing of problematic routing areas as well as an organization of the network which facilitates reconfiguration, in the case of association and disassociation/disappearance of nodes, at the lowest possible energy and delay cost.  The latter may include the change of hierarchies, routing paths, packet forwarding schedules, etc.  Furthermore, to inform the access point(s) of a node's arrival and association with the network, as well as to inform freshly associated nodes about packet forwarding schedules, roles, etc., appropriate (link state) updating mechanisms SHOULD be supported.

4.3.  Regular Measurement Reporting

The majority of sensing nodes will be configured to report their readings on a regular basis.  The frequency of data sensing and reporting may be different but is generally expected to be fairly low, i.e. in the range of once per hour, per day, etc.  The ratio between data sensing and reporting frequencies will determine the memory and data aggregation capabilities of the nodes.  Latency of an end-to-end delivery and acknowledgements of a successful data delivery may not be vital, as sensing outages can be observed at the access point(s) - when, for instance, there is no reading arriving from a given sensor or cluster of sensors within a day.  In this case, a query can be launched to check upon the state and availability of a sensing node or sensing cluster.

The protocol(s) hence MUST support a large number of highly directional unicast flows from the sensing nodes or sensing clusters towards the access point, or highly directed multicast or anycast flows from the nodes towards multiple access points.

Route computation and selection may depend on the transmitted information, the frequency of reporting, the amount of energy remaining in the nodes, the recharging pattern of energy-scavenging nodes, etc.  For instance, temperature readings could be reported every hour via one set of battery-powered nodes, whereas air quality indicators are reported only during daytime via nodes powered by solar energy.  More generally, entire routing areas may be avoided at night, for example, but heavily used during the day when nodes are scavenging energy from sunlight.

4.4.  Queried Measurement Reporting

Occasionally, network-external data queries can be launched by one or several access points.  For instance, it is desirable to know the level of pollution at a specific point or along a given road in the urban environment.
The queries' rates of occurrence are not regular but rather random, where heavy-tailed distributions seem appropriate to model their behavior.  Query responses do not necessarily need to be reported back to the same access point from which the query was launched.  Round-trip times, i.e. from the launch of a query from an access point to the delivery of the measured data to an access point, are of importance.  However, they are not very stringent: latencies SHOULD simply be sufficiently smaller than typical reporting intervals, for instance in the order of seconds or minutes.  To facilitate the query process, U-LLN network devices SHOULD support unicast and multicast routing capabilities.

The same approach is also applicable to schedule updates, provisioning of patches and upgrades, etc.  In this case, however, the provision of acknowledgements and the support of unicast, multicast, and anycast are of importance.

4.5.  Alert Reporting

Rarely, the sensing nodes will measure an event which classifies as an alarm, where such a classification is typically done locally within each node by means of a pre-programmed or previously diffused threshold.  Note that, on approaching the alert threshold level, nodes may wish to change their sensing and reporting cycles.  An alarm is likely to be registered by a plurality of sensing nodes, where the delivery of a single alert message with its location of origin suffices in most cases.  One example of alert reporting is if the level of toxic gases rises above a threshold, whereupon the sensing nodes in the vicinity of this event report the danger.  Another example of alert reporting is when a recycling glass container - equipped with a sensor measuring its level of occupancy - reports that the container is full and hence needs to be emptied.
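The local threshold classification described above can be sketched per node as follows.  This is an illustrative model only: the message fields are assumptions, and it shows per-node suppression of repeated alerts; the network-wide reduction of a plurality of node alerts to a single message would additionally require in-network duplicate elimination:

```python
class AlertingSensor:
    """Raises an alert when a reading crosses a locally stored threshold,
    and suppresses repeats while the condition persists (illustrative)."""

    def __init__(self, threshold):
        self.threshold = threshold   # pre-programmed or diffused by the network
        self.alert_active = False    # True while the alarm condition persists

    def ingest(self, reading):
        """Return an alert message for the first crossing only, else None."""
        if reading >= self.threshold:
            if not self.alert_active:
                self.alert_active = True
                return {"type": "alert", "value": reading}
            return None              # repeated readings -> single alert
        self.alert_active = False    # condition cleared; re-arm the sensor
        return None

# A toxic-gas sensor with a (hypothetical) 50 ppm threshold: only the
# first of three consecutive above-threshold readings yields a message.
sensor = AlertingSensor(threshold=50)
msgs = [sensor.ingest(v) for v in (10, 55, 60, 58, 20)]
```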
Routing within urban sensor networks SHOULD require the U-LLN nodes to dynamically compute, select and install different paths towards the same destination, depending on the nature of the traffic.  From this perspective, such nodes SHOULD inspect the contents of the traffic payload when making routing and forwarding decisions: for example, the analysis of the traffic payload SHOULD feed into aggregation capabilities for the sake of forwarding efficiency.

Routes clearly need to be unicast (towards one access point) or multicast (towards multiple access points).  Delays and latencies are important; however, again, deliveries within seconds SHOULD suffice in most of the cases.

5.  Traffic Pattern

Unlike in traditional ad hoc networks, the information flow in U-LLNs is highly directional.  There are three main flows to be distinguished:

1.  sensed information from the sensing nodes towards one or a subset of the access point(s);

2.  query requests from the access point(s) towards the sensing nodes;

3.  control information from the access point(s) towards the actuators.

Some of the flows may need the reverse route for delivering acknowledgements.  Finally, in the future, some direct information flows between field devices without access points may also occur.

Sensed data is likely to be highly correlated in space, time and observed events; an example of the latter is when temperature increases and humidity decreases as the day commences.  Data may be sensed and delivered at different rates, with both rates being typically fairly low, i.e. in the range of minutes, hours, days, etc.  Data may be delivered regularly according to a schedule or a regular query; it may also be delivered irregularly after an externally triggered query; it may also be triggered after a sudden network-internal event or alert.
Data delivery may trigger acknowledgements or maintenance traffic in the reverse direction.  The network hence needs to be able to adjust to the varying activity duty cycles, as well as to periodic and sporadic traffic.  Also, sensed data ought to be secured and locatable.

Some data delivery may have tight latency requirements, for example in a case such as a live meter reading for customer service in a smart-metering application, or in a case where a sensor reading response must arrive within a certain time in order to be useful.  The network SHOULD take into consideration that different application traffic may require different priorities when traversing the network, and that some traffic may be more sensitive to latency.

A U-LLN SHOULD support occasional large-scale traffic flows from sensing nodes to access points, such as system-wide alerts.  In the example of an AMI U-LLN, this could be in response to events such as a city-wide power outage.  In this scenario, all powered devices in a large segment of the network may have lost power and be running off a temporary "last gasp" source such as a capacitor or small battery.  A node MUST be able to send its own alerts toward an access point while continuing to forward traffic on behalf of other devices which are also experiencing an alert condition.  The network MUST be able to manage this sudden large traffic flow.  It may be useful for the routing layer to collaborate with the application layer to perform data aggregation, in order to reduce the total volume of a large traffic flow and make more efficient use of the limited energy available.

A U-LLN may also need to support efficient large-scale messaging to groups of actuators.
For example, an AMI U-LLN supporting a city- 574 wide demand response system will need to efficiently broadcast demand 575 response control information to a large subset of actuators in the 576 system. 578 Some scenarios will require internetworking between the U-LLN and 579 another network, such as a home network. For example, an AMI 580 application that implements a demand-response system may need to 581 forward traffic from a utility, across the U-LLN, into a home 582 automation network. A typical use case would be to inform a customer 583 of incentives to reduce demand during peaks, or to automatically 584 adjust the thermostat of customers who have enrolled in such a demand 585 management program. Subsequent traffic may be triggered to flow back 586 through the U-LLN to the utility. The network SHOULD support 587 internetworking, while giving attention to security implications of 588 interfacing, for example, a home network with a utility U-LLN. 590 6. Requirements of Urban LLN Applications 592 Urban low power and lossy network applications have a number of 593 specific requirements related to the set of operating conditions, as 594 exemplified in the previous section. 596 6.1. Scalability 598 The large and diverse measurement space of U-LLN nodes - coupled with 599 the typically large urban areas - will yield extremely large network 600 sizes. Current urban roll-outs are composed of sometimes more than a 601 hundred nodes; future roll-outs, however, may easily reach numbers in 602 the tens of thousands to millions. One of the utmost important LLN 603 routing protocol design criteria is hence scalability. 605 The routing protocol(s) MUST be scalable so as to accommodate a very 606 large and increasing number of nodes without deteriorating to-be- 607 specified performance parameters below to-be-specified thresholds. 608 The routing protocols(s) SHOULD support the organization of a large 609 number of nodes into regions of to-be-specified size. 611 6.2. 
Parameter Constrained Routing 613 Batteries in some nodes may deplete more quickly than in others; the 614 existence of one node for the maintenance of a routing path may not 615 be as important as that of another node; energy scavenging methods 616 may recharge the battery at regular or irregular intervals; some 617 nodes may have a constant power source; some nodes may have a larger 618 memory and are hence able to store more neighborhood information; 619 some nodes may have a stronger CPU and are hence able to perform more 620 sophisticated data aggregation methods; etc. 622 To this end, the routing protocol(s) MUST support parameter 623 constrained routing, where examples of such parameters (CPU, memory 624 size, battery level, etc.) have been given in the previous paragraph. 626 6.3. Support of Autonomous and Alien Configuration 628 With the large number of nodes, manually configuring and 629 troubleshooting each node is not efficient. The scale and the large 630 number of possible topologies that may be encountered in the U-LLN 631 encourage the development of automated management capabilities that 632 may (partly) rely upon self-organizing techniques. The network is 633 expected to self-organize and self-configure according to previously 634 defined rules and protocols, as well as to support externally 635 triggered configurations (for instance through a commissioning tool 636 which may facilitate the organization of the network at a minimum 637 energy cost). 639 To this end, the routing protocol(s) MUST provide a set of features 640 including 0-configuration at network ramp-up, (network-internal) 641 self-organization and configuration due to topological changes, 642 and the ability to support (network-external) patches and configuration 643 updates. For the latter, the protocol(s) MUST support multi- and 644 any-cast addressing. The protocol(s) SHOULD also support the 645 formation and identification of groups of field devices in the 646 network. 648 6.4.
Support of Highly Directed Information Flows 650 The reporting of the data readings by a large number of spatially 651 dispersed nodes towards a few access points will lead to highly 652 directed information flows. For instance, a suitable addressing 653 scheme can be devised that facilitates this data flow. Also, as one 654 gets closer to the access point, the traffic concentration increases, 655 which may lead to high load imbalances in node usage. 657 To this end, the routing protocol(s) SHOULD exploit the 658 fact that traffic flows are highly directed to facilitate scalability and 659 parameter constrained routing. 661 6.5. Support of Heterogeneous Field Devices 663 The sheer variety of field devices is unlikely to be provided 664 by a single manufacturer. A heterogeneous roll-out with nodes using 665 different physical and medium access control layers is hence likely. 667 To ensure fully interoperable implementations, the routing 668 protocol(s) proposed in U-LLN MUST support different devices and 669 underlying technologies without compromising the operability and 670 energy efficiency of the network. 672 6.6. Support of Multicast, Anycast, and Implementation of Groupcast 674 Some urban sensing systems require low-level addressing of a group of 675 nodes in the same subnet, or for a node representative of a group of 676 nodes, without any prior creation of multicast groups, simply 677 carrying a list of recipients in the subnet 678 [I-D.brandt-roll-home-routing-reqs]. 680 Routing protocols activated in urban sensor networks MUST support 681 unicast (traffic is sent to a single field device), multicast 682 (traffic is sent to a set of devices that are subscribed to the same 683 multicast group), and anycast (where multiple field devices are 684 configured to accept traffic sent on a single IP anycast address) 685 transmission schemes [RFC4291] [RFC1546].
Routing protocols 686 activated in urban sensor networks SHOULD accommodate "groupcast" 687 forwarding schemes, where traffic is sent to a set of devices that 688 implicitly belong to the same group/cast. 690 The support of unicast, groupcast, multicast, and anycast also has 691 implications for the addressing scheme, but these are beyond the scope of this 692 document, which focuses on the routing requirements aspects. 694 Note: with IP multicast, signaling mechanisms are used by a receiver 695 to join a group and the sender does not know the receivers of the 696 group. What is required is the ability to address a group of 697 receivers known by the sender even if the receivers do not need to 698 know that they have been grouped by the sender (since requesting each 699 individual node to join a multicast group would be very energy-consuming). 702 6.7. Network Dynamicity 704 Although mobility is assumed to be low in urban LLNs, network 705 dynamicity due to node association, disassociation and disappearance, 706 as well as long-term link perturbations, is not negligible. This in 707 turn impacts re-organization and re-configuration convergence as well 708 as routing protocol convergence. 710 To this end, local network dynamics SHOULD NOT require the entire 711 network to be re-organized or re-configured; however, the network 712 SHOULD be locally optimized to cater for the encountered changes. 713 Convergence and route establishment times SHOULD be significantly 714 lower than the smallest reporting interval. 716 6.8. Latency 718 With the exception of alert reporting solutions and to a certain 719 extent queried reporting, U-LLNs are delay tolerant as long as the 720 information arrives within a fraction of the smallest reporting 721 interval, e.g. a few seconds if reporting is done every 4 hours. 723 To this end, the routing protocol(s) SHOULD support minimum latency 724 for alert reporting and time-critical data queries.
For regular data 725 reporting, it SHOULD support latencies not exceeding a fraction of 726 the smallest reporting interval. Due to these different latency 727 requirements, the routing protocol(s) SHOULD be able to 728 handle traffic with differing latency needs. The routing protocol(s) 729 SHOULD also support the ability to route according to different 730 metrics (one of which could, e.g., be latency). 732 7. Security Considerations 734 Like any network, U-LLNs are exposed to security threats that MUST be 735 addressed. The wireless and distributed nature of these networks 736 increases the spectrum of potential security threats. This is 737 further amplified by the resource constraints of the nodes, thereby 738 preventing resource intensive security approaches from being 739 deployed. A viable security approach SHOULD be sufficiently 740 lightweight that it may be implemented across all nodes in a U-LLN. 741 These issues require special attention during the design process, so 742 as to facilitate a commercially attractive deployment. 744 Secure communication in a wireless network encompasses three main 745 elements: confidentiality (encryption of data), integrity 746 (correctness of data), and authentication (legitimacy of data). 748 U-LLN networks SHOULD support mechanisms to preserve the 749 confidentiality of the traffic that they forward. The U-LLN network 750 SHOULD NOT prevent an application from employing additional 751 confidentiality mechanisms. 753 Authentication can, e.g., be violated if external sources insert 754 incorrect data packets; integrity can, e.g., be violated if nodes start 755 to break down and hence commence measuring and relaying data 756 incorrectly. In addition, some sensor readings as well as the 757 actuator control signals need to be confidential. 759 The U-LLN network MUST deny all routing services to any node that has 760 not been authenticated to the U-LLN and authorized for the use of 761 routing services.
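The document requires message integrity and origin authentication but deliberately does not mandate a mechanism. Purely as an illustrative sketch (not part of the requirements, and with key distribution out of scope), a truncated keyed MAC over routing messages shows the kind of lightweight integrity check a constrained node could afford; all names below are hypothetical:

```python
import hmac
import hashlib

# Illustrative only: a truncated HMAC-SHA256 tag appended to a routing
# update, giving lightweight message integrity and origin
# authentication. An 8-byte tag keeps per-packet overhead small, a
# common trade-off on constrained links.

TAG_LEN = 8  # bytes of MAC appended to each message

def protect(key, routing_update):
    """Append a truncated HMAC-SHA256 tag to a routing update."""
    tag = hmac.new(key, routing_update, hashlib.sha256).digest()[:TAG_LEN]
    return routing_update + tag

def verify(key, message):
    """Return the routing update if the tag verifies, else None."""
    update, tag = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(key, update, hashlib.sha256).digest()[:TAG_LEN]
    return update if hmac.compare_digest(tag, expected) else None

key = b'pre-shared-network-key'          # hypothetical pre-shared key
msg = protect(key, b'route-advert:node42')
assert verify(key, msg) == b'route-advert:node42'
# A message modified in transit is rejected:
tampered = msg[:-1] + bytes([msg[-1] ^ 0x01])
assert verify(key, tampered) is None
```

A node without the network key cannot forge a valid tag, which is one way to "deny all routing services" to unauthenticated peers at the message level; a real deployment would also need replay protection and key management.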
763 The U-LLN MUST be protected against attempts to inject false or 764 modified packets. For example, an attacker SHOULD be prevented from 765 manipulating or disabling the routing function by compromising 766 routing update messages. Moreover, it SHOULD NOT be possible to 767 coerce the network into routing packets which have been modified in 768 transit. To this end, the routing protocol(s) MUST support message 769 integrity. 771 Further security issues may arise from the abnormal 772 behavior of nodes that exhibit egoistic conduct, such as not 773 obeying network rules or forwarding no or false packets. Other 774 important issues may arise in the context of Denial of Service (DoS) 775 attacks, malicious address space allocations, advertisement of 776 variable addresses, a wrong neighborhood, external attacks aimed at 777 injecting dummy traffic to drain the network power, etc. 779 The properties of self-configuration and self-organization which are 780 desirable in a U-LLN introduce additional security considerations. 781 Mechanisms MUST be in place to deny any rogue node which attempts to 782 take advantage of self-configuration and self-organization 783 procedures. Such attacks may attempt, for example, to cause denial 784 of service, to drain the energy of power-constrained devices, or to 785 hijack the routing mechanism. A node MUST authenticate itself to a 786 trusted node that is already associated with the U-LLN before any 787 self-configuration or self-organization is allowed to proceed. A 788 node that has already authenticated and associated with the U-LLN 789 MUST deny, to the maximum extent possible, the allocation of 790 resources to any unauthenticated peer. The routing protocol(s) MUST 791 deny service to any node which has not clearly established trust with 792 the U-LLN. 794 Consideration SHOULD be given to cases where the U-LLN may interface 795 with other networks such as a home network.
The U-LLN SHOULD NOT 796 interface with any external network which has not established trust. 797 The U-LLN SHOULD be capable of limiting the resources granted in 798 support of an external network so as not to be vulnerable to denial 799 of service. 801 With low computation power and scarce energy resources, U-LLN nodes 802 may not be able to resist attacks from high-power malicious nodes 803 (e.g. laptops and strong radios). However, the amount of damage 804 caused to the whole network SHOULD be commensurate with the number 805 of nodes physically compromised. For example, an intruder taking 806 control over a single node SHOULD NOT have total access to, or be 807 able to completely deny service to, the whole network. 809 In general, the routing protocol(s) SHOULD support the implementation 810 of security best practices across the U-LLN. Such an implementation 811 ought to include defense against, for example, eavesdropping, replay, 812 message insertion, modification, and man-in-the-middle attacks. 814 The choice of the security solutions will have an impact on the routing 815 protocol(s). To this end, routing protocol(s) proposed in the 816 context of U-LLNs MUST support integrity measures and SHOULD support 817 confidentiality (security) measures. 819 8. Open Issues 821 Other items to be addressed in further revisions of this document 822 include: 824 o node mobility 826 9. IANA Considerations 828 This document makes no request of IANA. 830 10. Acknowledgements 832 The in-depth feedback of JP Vasseur, Cisco, and Jonathan Hui, Arch 833 Rock, is greatly appreciated. 835 11. References 837 11.1. Normative References 839 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 840 Requirement Levels", BCP 14, RFC 2119, March 1997. 842 11.2.
Informative References 844 [I-D.brandt-roll-home-routing-reqs] 845 Brandt, A., "Home Automation Routing Requirement in Low 846 Power and Lossy Networks", 847 draft-brandt-roll-home-routing-reqs-01 (work in progress), 848 May 2008. 850 [I-D.culler-rl2n-routing-reqs] 851 Vasseur, J. and D. Cullerot, "Routing Requirements for Low 852 Power And Lossy Networks", 853 draft-culler-rl2n-routing-reqs-01 (work in progress), 854 July 2007. 856 [Lu2007] J.L. Lu, F. Valois, D. Barthel, M. Dohler, "FISCO: A Fully 857 Integrated Scheme of Self-Configuration and Self- 858 Organization for WSN", IEEE WCNC 2007, Hong Kong, China, 859 11-15 March 2007, pp. 3370-3375. 861 [RFC1546] Partridge, C., Mendez, T., and W. Milliken, "Host 862 Anycasting Service", RFC 1546, November 1993. 864 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing 865 Architecture", RFC 4291, February 2006. 867 Authors' Addresses 869 Mischa Dohler (editor) 870 CTTC 871 Parc Mediterrani de la Tecnologia, Av. Canal Olimpic S/N 872 08860 Castelldefels, Barcelona 873 Spain 875 Email: mischa.dohler@cttc.es 877 Thomas Watteyne (editor) 878 France Telecom R&D 879 28 Chemin du Vieux Chene 880 38243 Meylan Cedex 881 France 883 Email: thomas.watteyne@orange-ftgroup.com 885 Tim Winter (editor) 886 Eka Systems 887 20201 Century Blvd. 
Suite 250 888 Germantown, MD 20874 889 USA 891 Email: tim.winter@ekasystems.com 893 Christian Jacquenet 894 France Telecom R&D 895 4 rue du Clos Courtel BP 91226 896 35512 Cesson Sevigne 897 France 899 Email: christian.jacquenet@orange-ftgroup.com 900 Giyyarpuram Madhusudan 901 France Telecom R&D 902 28 Chemin du Vieux Chene 903 38243 Meylan Cedex 904 France 906 Email: giyyarpuram.madhusudan@orange-ftgroup.com 908 Gabriel Chegaray 909 France Telecom R&D 910 28 Chemin du Vieux Chene 911 38243 Meylan Cedex 912 France 914 Email: gabriel.chegaray@orange-ftgroup.com 916 Dominique Barthel 917 France Telecom R&D 918 28 Chemin du Vieux Chene 919 38243 Meylan Cedex 920 France 922 Email: Dominique.Barthel@orange-ftgroup.com 924 Full Copyright Statement 926 Copyright (C) The IETF Trust (2008). 928 This document is subject to the rights, licenses and restrictions 929 contained in BCP 78, and except as set forth therein, the authors 930 retain all their rights. 932 This document and the information contained herein are provided on an 933 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 934 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 935 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 936 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 937 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 938 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 940 Intellectual Property 942 The IETF takes no position regarding the validity or scope of any 943 Intellectual Property Rights or other rights that might be claimed to 944 pertain to the implementation or use of the technology described in 945 this document or the extent to which any license under such rights 946 might or might not be available; nor does it represent that it has 947 made any independent effort to identify any such rights. 
Information 948 on the procedures with respect to rights in RFC documents can be 949 found in BCP 78 and BCP 79. 951 Copies of IPR disclosures made to the IETF Secretariat and any 952 assurances of licenses to be made available, or the result of an 953 attempt made to obtain a general license or permission for the use of 954 such proprietary rights by implementers or users of this 955 specification can be obtained from the IETF on-line IPR repository at 956 http://www.ietf.org/ipr. 958 The IETF invites any interested party to bring to its attention any 959 copyrights, patents or patent applications, or other proprietary 960 rights that may cover technology that may be required to implement 961 this standard. Please address the information to the IETF at 962 ietf-ipr@ietf.org.