Networking Working Group                                   M. Dohler, Ed.
Internet-Draft                                                       CTTC
Intended status: Informational                           T. Watteyne, Ed.
Expires: October 1, 2009                            CITI-Lab, INRIA A4RES
                                                            T. Winter, Ed.
                                                               Eka Systems
                                                           D. Barthel, Ed.
                                                        France Telecom R&D
                                                            March 30, 2009

     Urban WSNs Routing Requirements in Low Power and Lossy Networks
                  draft-ietf-roll-urban-routing-reqs-05

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.
The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on October 1, 2009.

Copyright Notice

Copyright (c) 2009 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents in effect on the date of publication of this document (http://trustee.ietf.org/license-info).  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.

Abstract

The application-specific routing requirements for Urban Low Power and Lossy Networks (U-LLNs) are presented in this document.  In the near future, sensing and actuating nodes will be placed outdoors in urban environments so as to improve people's living conditions as well as to monitor compliance with increasingly strict environmental laws.  These field nodes are expected to measure and report a wide gamut of data, such as required in smart metering, waste disposal, meteorological, pollution, and allergy reporting applications.  The majority of these nodes are expected to communicate wirelessly over a variety of links such as IEEE 802.15.4, low-power IEEE 802.11, and IEEE 802.15.1 (Bluetooth), which, given the limited radio range and the large number of nodes, requires the use of suitable routing protocols.  The design of such protocols will be mainly impacted by the limited resources of the nodes (memory, processing power, battery, etc.) and the particularities of the outdoor urban application scenarios.  As such, for a wireless Routing Over Low power and Lossy networks (ROLL) solution to be useful, the protocol(s) ought to be energy-efficient, scalable, and autonomous.  This document aims to specify a set of IPv6 routing requirements reflecting these and further U-LLN-tailored characteristics.

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

1.  Introduction
2.  Terminology
3.  Overview of Urban Low Power Lossy Networks
    3.1.  Canonical Network Elements
          3.1.1.  Sensors
          3.1.2.  Actuators
          3.1.3.  Routers
    3.2.  Topology
    3.3.  Resource Constraints
    3.4.  Link Reliability
4.  Urban LLN Application Scenarios
    4.1.  Deployment of Nodes
    4.2.  Association and Disassociation/Disappearance of Nodes
    4.3.  Regular Measurement Reporting
    4.4.  Queried Measurement Reporting
    4.5.  Alert Reporting
5.  Traffic Pattern
6.  Requirements of Urban LLN Applications
    6.1.  Scalability
    6.2.  Parameter Constrained Routing
    6.3.  Support of Autonomous and Alien Configuration
    6.4.  Support of Highly Directed Information Flows
    6.5.  Support of Multicast and Anycast
    6.6.  Network Dynamicity
    6.7.  Latency
7.  Security Considerations
8.  IANA Considerations
9.  Acknowledgements
10.  References
    10.1.  Normative References
    10.2.  Informative References
Authors' Addresses

1.  Introduction

This document details application-specific IPv6 routing requirements for Urban Low Power and Lossy Networks (U-LLNs).  Note that this document details the set of IPv6 routing requirements for U-LLNs in strict compliance with the layered IP architecture.  U-LLN use cases and associated routing protocol requirements will be described.

Section 2 defines terminology useful in describing U-LLNs.

Section 3 provides an overview of U-LLN applications.

Section 4 describes a few typical use cases for U-LLN applications, exemplifying deployment problems and related routing issues.

Section 5 describes traffic flows that will be typical for U-LLN applications.

Section 6 discusses the routing requirements for networks comprising such constrained devices in a U-LLN environment.  These requirements may overlap with requirements derived from other application-specific requirements documents [I-D.ietf-roll-home-routing-reqs] [I-D.ietf-roll-indus-routing-reqs] [I-D.ietf-roll-building-routing-reqs].

Section 7 provides an overview of routing security considerations of U-LLN implementations.

2.  Terminology

The terminology used in this document is consistent with and incorporates that described in "Terminology in Low power And Lossy Networks" [I-D.ietf-roll-terminology].  This terminology is extended in this document as follows:

Anycast:  Addressing and routing scheme for forwarding packets to at least one of the "nearest" interfaces from a group, as described in RFC 4291 [RFC4291] and RFC 1546 [RFC1546].

Autonomous:  Refers to the ability of a routing protocol to function independently without requiring any external influence or guidance.  Includes self-configuration and self-organization capabilities.

DoS:  Denial of Service, a class of attack that attempts to cause resource exhaustion to the detriment of a node or network.

ISM band:  Industrial, Scientific and Medical band.  This is a region of radio spectrum where low power unlicensed devices may generally be used, with specific guidance from an applicable local radio spectrum authority.

U-LLN:  Urban Low Power and Lossy Network.

WLAN:  Wireless Local Area Network.

3.  Overview of Urban Low Power Lossy Networks

3.1.  Canonical Network Elements

A U-LLN is understood to be a network composed of three key elements, i.e.,

1.  sensors,

2.  actuators, and

3.  routers,

which communicate wirelessly.
The aim of the following sections (Section 3.1.1, Section 3.1.2, and Section 3.1.3) is to illustrate the functional nature of a sensor, an actuator, and a router in this context.  That said, it must be understood that these functionalities are not exclusive.  A particular device may act as a simple router, or may alternatively be a router equipped with a sensing functionality, in which case it will be seen as a "regular" router as far as routing is concerned.

3.1.1.  Sensors

Sensing nodes measure a wide gamut of physical data, including but not limited to:

1.  municipal consumption data, such as smart-metering of gas, water, electricity, waste, etc.;

2.  meteorological data, such as temperature, pressure, humidity, UV index, strength and direction of wind, etc.;

3.  pollution data, such as gases (SO2, NOx, CO, ozone), heavy metals (e.g., mercury), pH, radioactivity, etc.;

4.  ambient data, such as allergic elements (pollen, dust), electromagnetic pollution, noise levels, etc.

Sensor nodes run applications that typically gather the measurement data and send it to data collection and processing application(s) on other node(s) (often outside the U-LLN).

Sensor nodes are capable of forwarding data.  Sensor nodes are generally not mobile in the majority of near-future roll-outs.  In many anticipated roll-outs, sensor nodes may suffer from long-term resource constraints.

A prominent example is a Smart Grid application which consists of a city-wide network of smart meters and distribution monitoring sensors.  Smart meters in an urban Smart Grid application will include electric, gas, and/or water meters typically administered by one or multiple utility companies.  These meters will be capable of advanced sensing functionalities such as measuring the quality of electrical service provided to a customer, providing granular interval data, or automating the detection of alarm conditions.  In addition, they may be capable of advanced interactive functionalities, which may invoke an actuator component, such as remote service disconnect or remote demand reset.  More advanced scenarios include demand response systems for managing peak load, and distribution automation systems to monitor the infrastructure which delivers energy throughout the urban environment.  Sensor nodes capable of providing this type of functionality may sometimes be referred to as Advanced Metering Infrastructure (AMI).

3.1.2.  Actuators

Actuator nodes are capable of controlling urban devices; examples are street or traffic lights.  They run applications that receive instructions from control applications on other nodes (possibly outside the U-LLN).  The number of actuator nodes is well below the number of sensing nodes.  Some sensing nodes may include an actuator component, e.g., an electric meter node with integrated support for remote service disconnect.  Actuators are capable of forwarding data.  Actuators are not likely to be mobile in the majority of near-future roll-outs.  Actuator nodes may also suffer from long-term resource constraints, e.g., in the case where they are battery powered.

3.1.3.  Routers

Routers generally act to close coverage and routing gaps within the interior of the U-LLN; examples of their use are to:

1.  prolong the U-LLN's lifetime,

2.  balance nodes' energy depletion, and

3.  build advanced sensing infrastructures.
There can be several routers supporting the same U-LLN; however, the number of routers is well below the number of sensing nodes.  The routers are generally not mobile, i.e., fixed at a random or pre-planned location.  Routers may, but generally do not, suffer from any form of (long-term) resource constraint, except that they need to be small and sufficiently cheap.  Routers differ from actuator and sensing nodes in that they neither control nor sense.  That being said, a sensing node or actuator may also be a router within the U-LLN.

Some routers provide access to wider infrastructures, such as the Internet, and are named Low power and lossy network Border Routers (LBRs) in that context.

LBR nodes in particular may also run applications that communicate with sensor and actuator nodes (e.g., collecting and processing data from sensor applications, or sending instructions to actuator applications).

3.2.  Topology

Whilst millions of sensing nodes may very well be deployed in an urban area, they are likely to be associated with more than one network.  These networks may or may not communicate with one another.  The number of sensing nodes deployed in the urban environment in support of some applications is expected to be on the order of 10^2 to 10^7; this is still very large and unprecedented in current roll-outs.

Deployment of nodes is likely to happen in batches, e.g., boxes of hundreds to thousands of nodes arrive and are deployed.  The location of the nodes is random within given topological constraints, e.g., placement along a road, river, or at individual residences.

3.3.  Resource Constraints

The nodes are highly resource constrained, i.e., cheap hardware, low memory, and no infinite energy source.  Different node powering mechanisms are available, such as:

1.  non-rechargeable battery;

2.  rechargeable battery with regular recharging (e.g., sunlight);

3.  rechargeable battery with irregular recharging (e.g., opportunistic energy scavenging);

4.  capacitive/inductive energy provision (e.g., passive Radio Frequency IDentification (RFID));

5.  always on (e.g., powered electricity meter).

In the case of a battery powered sensing node, the battery shelf life is usually on the order of 10 to 15 years, rendering network lifetime maximization with battery powered nodes beyond this lifespan useless.

The physical and electromagnetic distances between the three key elements, i.e., sensors, actuators, and routers, can generally be very large, i.e., from several hundred meters to one kilometer.  Not every field node is likely to reach the LBR in a single hop, thereby requiring suitable routing protocols which manage the information flow in an energy-efficient manner.

3.4.  Link Reliability

The links between the network elements are volatile due to the following set of non-exclusive effects:

1.  packet errors due to wireless channel effects;

2.  packet errors due to MAC (Medium Access Control) (e.g., collision);

3.  packet errors due to interference from other systems;

4.  link unavailability due to network dynamicity; etc.

The wireless channel causes the received power to drop below a given threshold in a random fashion, thereby causing detection errors in the receiving node.  The underlying effects are path loss, shadowing, and fading.
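The following sketch is for illustration only and is not part of the requirements: it samples a log-distance path-loss model with log-normal shadowing and estimates how often the received power of a link falls below an assumed receiver sensitivity.  All numerical values (transmit power, path-loss exponent, shadowing standard deviation, sensitivity) are assumptions, and fast fading is deliberately not modeled.

   # Illustration only: link outage due to path loss and shadowing.
   # All parameter values below are assumptions, not specified values.
   import math
   import random

   def received_power_dbm(distance_m, tx_power_dbm=0.0, pl_d0_db=40.0,
                          d0_m=1.0, path_loss_exp=3.0, sigma_db=8.0):
       """One random sample of received power over an urban link."""
       path_loss_db = pl_d0_db + 10.0 * path_loss_exp * math.log10(distance_m / d0_m)
       shadowing_db = random.gauss(0.0, sigma_db)  # log-normal shadowing
       return tx_power_dbm - path_loss_db - shadowing_db

   def outage_probability(distance_m, sensitivity_dbm=-95.0, samples=10000):
       """Fraction of samples in which the link is in outage."""
       outages = sum(1 for _ in range(samples)
                     if received_power_dbm(distance_m) < sensitivity_dbm)
       return outages / samples

   if __name__ == "__main__":
       for d in (50, 100, 200, 400):
           print("d = %3d m -> estimated outage probability %.2f"
                 % (d, outage_probability(d)))

With these assumed parameters, links of a few hundred meters are frequently in outage, which illustrates why not every field node can be expected to reach an LBR in a single hop (Section 3.3).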
Since the wireless medium is broadcast in nature, nodes within each other's communication range require suitable medium access control protocols which are capable of resolving any arising contention.  Some available protocols may not be able to prevent packets of neighboring nodes from colliding, possibly leading to a high Packet Error Rate (PER) and causing a link outage.

Furthermore, the outdoor deployment of U-LLNs also has implications for the interference temperature, and hence link reliability and range, if Industrial, Scientific and Medical (ISM) bands are to be used.  For instance, if the 2.4 GHz ISM band is used to facilitate communication between U-LLN nodes, then heavily loaded Wireless Local Area Network (WLAN) hot-spots may become a detrimental performance factor, leading to a high PER and jeopardizing the functioning of the U-LLN.

Finally, the appearance and disappearance of nodes causes dynamics in the network which can yield link outages and changes of topology.

4.  Urban LLN Application Scenarios

Urban applications represent a special segment of LLNs with a unique set of requirements.  To facilitate the requirements discussion in Section 6, this section lists a few typical but not exhaustive deployment problems and use cases of U-LLNs.

4.1.  Deployment of Nodes

Contrary to other LLN applications, deployment of nodes is likely to happen in batches out of a box.  Typically, hundreds to thousands of nodes are shipped by the manufacturer with pre-programmed functionalities, which are then rolled out by a service provider or subcontracted entities.  Prior to or after roll-out, the network needs to be ramped up.  This initialization phase may include, among others, allocation of addresses, (possibly hierarchical) roles in the network, synchronization, determination of schedules, etc.

If initialization is performed prior to roll-out, all nodes are likely to be in one another's 1-hop radio neighborhood.  Pre-programmed Media Access Control (MAC) and routing protocols may hence fail to function properly, thereby wasting a large amount of energy.  Whilst the major burden will be on resolving MAC conflicts, any proposed U-LLN routing protocol needs to cater for such a case.  For instance, 0-configuration and network address allocation need to be properly supported (a minimal illustration of one address auto-configuration building block is sketched at the end of this section).

After roll-out, nodes will have a finite set of one-hop neighbors, likely of low cardinality (on the order of 5 to 10).  However, some nodes may be deployed in areas where there are hundreds of neighboring devices.  In the resulting topology there may be regions where many (redundant) paths are possible through the network.  Other regions may be dependent on critical links to achieve connectivity with the rest of the network.  Any proposed LLN routing protocol ought to support the autonomous self-organization and self-configuration of the network at the lowest possible energy cost [Lu2007], where autonomy is understood to be the ability of the network to operate without external influence.  The result of such organization should be that each node or set of nodes is uniquely addressable so as to facilitate the setup of schedules, etc.

Unless exceptionally needed, broadcast forwarding schemes are not advised in urban sensor networking environments.
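As one illustration of the address allocation aspect mentioned above, the following sketch derives an IPv6 link-local address from a node's 64-bit EUI-64 identifier using the modified EUI-64 construction of RFC 4291 [RFC4291].  This is only one possible building block for 0-configuration; the example identifier is invented, and this document does not mandate any particular mechanism.

   # Illustration only: IPv6 link-local address from an EUI-64 identifier
   # (RFC 4291, Appendix A).  The example EUI-64 value is made up.
   import ipaddress

   def link_local_from_eui64(eui64_bytes):
       """Derive an fe80::/64 address with a modified EUI-64 interface ID."""
       iid = bytearray(eui64_bytes)
       iid[0] ^= 0x02                    # invert the universal/local bit
       packed = b'\xfe\x80' + b'\x00' * 6 + bytes(iid)
       return ipaddress.IPv6Address(packed)

   if __name__ == "__main__":
       example_eui64 = bytes.fromhex("00124b0001020304")
       print(link_local_from_eui64(example_eui64))
       # prints fe80::212:4b00:102:304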
4.2.  Association and Disassociation/Disappearance of Nodes

After the initialization phase and possibly some operational time, new nodes may be injected into the network as well as existing nodes removed from the network.  The former might happen because a removed node is replaced as part of maintenance, because more sensors are needed for denser readings/actuations, or because routing protocols report connectivity problems.  The latter might happen because a node's battery is depleted, the node is removed for maintenance, the node is stolen or accidentally destroyed, etc.

The protocol(s) hence should be able to convey information about malfunctioning nodes which may affect or jeopardize the overall routing efficiency, so that the self-organization and self-configuration capabilities of the sensor network might be solicited to facilitate the appropriate reconfiguration.  This information may, e.g., include exact or relative geographical position, etc.  The reconfiguration may include the change of hierarchies, routing paths, packet forwarding schedules, etc.  Furthermore, to inform the LBR(s) of a node's arrival and association with the network, as well as freshly associated nodes about packet forwarding schedules, roles, etc., appropriate updating mechanisms should be supported.

4.3.  Regular Measurement Reporting

The majority of sensing nodes will be configured to report their readings on a regular basis.  The frequencies of data sensing and reporting may differ but are generally expected to be fairly low, i.e., in the range of once per hour, once per day, etc.  The ratio between data sensing and reporting frequencies will determine the memory and data aggregation capabilities of the nodes.  Latency of an end-to-end delivery and acknowledgements of a successful data delivery may not be vital, as sensing outages can be observed at data collection applications when, for instance, there is no reading arriving from a given sensor or cluster of sensors within a day.  In this case, a query can be launched to check upon the state and availability of a sensing node or sensing cluster.

It is not uncommon to gather data on a few servers located outside of the U-LLN.  In such cases, a large number of highly directional unicast flows from the sensing nodes or sensing clusters are likely to transit through an LBR.  Thus, the protocol(s) should be optimized to support a large number of unicast flows from the sensing nodes or sensing clusters towards an LBR, or highly directed multicast or anycast flows from the nodes towards multiple LBRs.

Route computation and selection may depend on the transmitted information, the frequency of reporting, the amount of energy remaining in the nodes, the recharging pattern of energy-scavenging nodes, etc.  For instance, temperature readings could be reported every hour via one set of battery powered nodes, whereas air quality indicators are reported only during daytime via nodes powered by solar energy.  More generally, entire routing areas may be avoided (e.g., at night) but heavily used during the day when nodes are scavenging from sunlight.
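The following sketch is for illustration only: a hypothetical link-cost function through which route selection could avoid energy-scavenging nodes outside their recharge window (e.g., solar-powered nodes at night), in the spirit of the paragraph above.  The node attributes and weights are assumptions, not part of any specified protocol.

   # Illustration only: energy- and schedule-aware next-hop selection.
   # Attribute names and weights are assumptions for this sketch.
   from dataclasses import dataclass

   @dataclass
   class Node:
       node_id: str
       power_source: str        # "battery", "solar", or "mains"
       residual_energy: float   # 0.0 (empty) .. 1.0 (full)

   def link_cost(next_hop, daytime):
       """Lower is better; infinity means 'avoid this routing area now'."""
       if next_hop.power_source == "mains":
           return 1.0
       if next_hop.power_source == "solar" and not daytime:
           return float("inf")          # solar nodes are avoided at night
       # battery, or solar during the day: protect depleted nodes
       return 1.0 + (1.0 - next_hop.residual_energy) * 10.0

   def best_next_hop(candidates, daytime):
       return min(candidates, key=lambda n: link_cost(n, daytime))

   if __name__ == "__main__":
       neighbors = [Node("n1", "solar", 0.9),
                    Node("n2", "battery", 0.4),
                    Node("n3", "mains", 1.0)]
       print(best_next_hop(neighbors, daytime=False).node_id)   # -> n3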
4.4.  Queried Measurement Reporting

Occasionally, network-external data queries can be launched by one or several applications.  For instance, it is desirable to know the level of pollution at a specific point or along a given road in the urban environment.  The queries' rates of occurrence are not regular but rather random, where heavy-tailed distributions seem appropriate to model their behavior.  Query results do not necessarily need to be reported back to the same node from which the query was launched.  Round-trip times, i.e., from the launch of a query from a node until the delivery of the measured data to a node, are of importance.  However, they are not very stringent: latencies should simply be sufficiently smaller than typical reporting intervals, for instance on the order of seconds or minutes.  The routing protocol(s) should consider the selection of paths with appropriate (e.g., latency) metrics to support queried measurement reporting.  To facilitate the query process, U-LLN network devices should support unicast and multicast routing capabilities.

The same approach is also applicable for schedule updates, provisioning of patches and upgrades, etc.  In this case, however, the provision of acknowledgements and the support of unicast, multicast, and anycast are of importance.

4.5.  Alert Reporting

Rarely, the sensing nodes will measure an event which classifies as an alarm, where such a classification is typically done locally within each node by means of a pre-programmed or previously diffused threshold.  Note that on approaching the alert threshold level, nodes may wish to change their sensing and reporting cycles.  An alarm is likely to be registered by a plurality of sensing nodes, where the delivery of a single alert message with its location of origin suffices in most, but not all, cases.  One example of alert reporting is when the level of toxic gases rises above a threshold, whereupon the sensing nodes in the vicinity of this event report the danger.  Another example of alert reporting is when a recycling glass container, equipped with a sensor measuring its level of occupancy, reports that the container is full and hence needs to be emptied.

Routes clearly need to be unicast (towards one LBR) or multicast (towards multiple LBRs).  Delays and latencies are important; however, for a U-LLN deployed in support of a typical application, deliveries within seconds should suffice in most of the cases.

5.  Traffic Pattern

Unlike traditional ad hoc networks, the information flow in U-LLNs is highly directional.  There are three main flows to be distinguished:

1.  sensed information from the sensing nodes to applications outside the U-LLN, going through one or a subset of the LBR(s);

2.  query requests from applications outside the U-LLN, going through the LBR(s) towards the sensing nodes;

3.  control information from applications outside the U-LLN, going through the LBR(s) towards the actuators.

Some of the flows may need the reverse route for delivering acknowledgements.  Finally, in the future, some direct information flows between field devices without LBRs may also occur.

Sensed data is likely to be highly correlated in space, time, and observed events; an example of the latter is when temperature increases and humidity decreases as the day commences.  Data may be sensed and delivered at different rates, with both rates being typically fairly low, i.e., in the range of minutes, hours, days, etc.
Data may be delivered regularly according to a schedule or a regular query; it may also be delivered irregularly after an externally triggered query; it may also be triggered after a sudden network-internal event or alert.  Schedules may be driven by, for example, a smart-metering application where data is expected to be delivered every hour, or an environmental monitoring application where a battery powered node is expected to report its status at a specific time once a day.  Data delivery may trigger acknowledgements or maintenance traffic in the reverse direction.  The network hence needs to be able to adjust to the varying activity duty cycles, as well as to periodic and sporadic traffic.  Also, sensed data ought to be secured and locatable.

Some data delivery may have tight latency requirements, for example in a case such as a live meter reading for customer service in a smart-metering application, or in a case where a sensor reading response must arrive within a certain time in order to be useful.  The network should take into consideration that different application traffic may require different priorities in the selection of a route when traversing the network, and that some traffic may be more sensitive to latency.

A U-LLN should support occasional large-scale traffic flows from sensing nodes through LBRs (to nodes outside the U-LLN), such as system-wide alerts.  In the example of an AMI U-LLN, this could be in response to events such as a city-wide power outage.  In this scenario, all powered devices in a large segment of the network may have lost power and be running off a temporary "last gasp" source such as a capacitor or small battery.  A node must be able to send its own alerts toward an LBR while continuing to forward traffic on behalf of other devices that are also experiencing an alert condition.  The network needs to be able to manage this sudden large traffic flow.

A U-LLN may also need to support efficient large-scale messaging to groups of actuators.  For example, an AMI U-LLN supporting a city-wide demand response system will need to efficiently broadcast demand response control information to a large subset of actuators in the system.

Some scenarios will require internetworking between the U-LLN and another network, such as a home network.  For example, an AMI application that implements a demand-response system may need to forward traffic from a utility, across the U-LLN, into a home automation network.  A typical use case would be to inform a customer of incentives to reduce demand during peaks, or to automatically adjust the thermostat of customers who have enrolled in such a demand management program.  Subsequent traffic may be triggered to flow back through the U-LLN to the utility.

6.  Requirements of Urban LLN Applications

Urban low power and lossy network applications have a number of specific requirements related to the set of operating conditions, as exemplified in the previous sections.

6.1.  Scalability

The large and diverse measurement space of U-LLN nodes, coupled with the typically large urban areas, will yield extremely large network sizes.  Current urban roll-outs are sometimes composed of more than one hundred nodes; future roll-outs, however, may easily reach numbers in the tens of thousands to millions.
Scalability is hence one of the most important LLN routing protocol design criteria.

The routing protocol(s) MUST be capable of supporting the organization of a large number of sensing nodes into regions containing on the order of 10^2 to 10^4 sensing nodes each.

The routing protocol(s) MUST be scalable so as to accommodate a very large and increasing number of nodes without deteriorating selected performance parameters below configurable thresholds.  The routing protocol(s) SHOULD support the organization of a large number of nodes into regions of configurable size.

6.2.  Parameter Constrained Routing

Batteries in some nodes may deplete more quickly than in others; the existence of one node for the maintenance of a routing path may not be as important as that of another node; energy scavenging methods may recharge the battery at regular or irregular intervals; some nodes may have a constant power source; some nodes may have a larger memory and are hence able to store more neighborhood information; some nodes may have a stronger CPU and are hence able to perform more sophisticated data aggregation methods; etc.

To this end, the routing protocol(s) MUST support parameter constrained routing, where examples of such parameters (CPU, memory size, battery level, etc.) have been given in the previous paragraph.  In other words, the routing protocol MUST be able to advertise node capabilities that will be used exclusively by the routing protocol engine for routing decisions.  For the sake of example, such a capability could be related to the node itself (e.g., remaining power) or to some application that could influence routing (e.g., the capability to aggregate data).

Routing within urban sensor networks SHOULD require the U-LLN nodes to dynamically compute, select, and install different paths towards the same destination, depending on the nature of the traffic.  Such functionality, in support of, for example, data aggregation, may imply the use of mechanisms to mark/tag the traffic for appropriate routing decisions using the IPv6 packet format (e.g., use of the DSCP or Flow Label) based on an upper-layer marking decision.  From this perspective, such nodes MAY use node capabilities (e.g., to act as an aggregator) in conjunction with the anycast endpoints and packet marking to route the traffic.
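The sketch below is for illustration only: one conceivable way a routing engine could fold advertised node capabilities (remaining power, free memory, support for aggregation) into a single path cost for parameter constrained routing.  The attribute names and weights are assumptions and do not describe any specified metric.

   # Illustration only: a composite, capability-aware path cost.
   # Weights and attributes are assumptions for this sketch.
   from dataclasses import dataclass

   @dataclass
   class Capabilities:
       remaining_power: float   # 0.0 .. 1.0
       free_memory: float       # 0.0 .. 1.0
       can_aggregate: bool

   def hop_cost(cap, wants_aggregation):
       """Smaller is better; scarce resources make a hop more expensive."""
       cost = 1.0
       cost += (1.0 - cap.remaining_power) * 4.0  # protect depleted batteries
       cost += (1.0 - cap.free_memory) * 1.0      # prefer nodes with headroom
       if wants_aggregation and not cap.can_aggregate:
           cost += 2.0                  # this flow benefits from aggregators
       return cost

   def path_cost(path, wants_aggregation):
       return sum(hop_cost(c, wants_aggregation) for c in path)

   if __name__ == "__main__":
       path_a = [Capabilities(0.9, 0.8, True), Capabilities(0.7, 0.5, True)]
       path_b = [Capabilities(0.2, 0.9, False), Capabilities(0.9, 0.9, False)]
       print(round(path_cost(path_a, True), 2),    # 4.3
             round(path_cost(path_b, True), 2))    # 9.8

In such a scheme, an upper-layer marking carried in the IPv6 DSCP or Flow Label, as described above, could determine whether the aggregation-related term applies to a given flow.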
6.3.  Support of Autonomous and Alien Configuration

With the large number of nodes, manually configuring and troubleshooting each node is not efficient.  The scale and the large number of possible topologies that may be encountered in the U-LLN encourage the development of automated management capabilities that may (partly) rely upon self-organizing techniques.  The network is expected to self-organize and self-configure according to some previously defined rules and protocols, as well as to support externally triggered configurations (for instance, through a commissioning tool which may facilitate the organization of the network at a minimum energy cost).

To this end, the routing protocol(s) MUST provide a set of features including 0-configuration at network ramp-up, (network-internal) self-organization and configuration due to topological changes, and the ability to support (network-external) patches and configuration updates.  For the latter, the protocol(s) MUST support multicast and anycast addressing.  The protocol(s) SHOULD also support the formation and identification of groups of field devices in the network.

The routing protocol(s) SHOULD be able to dynamically adapt, e.g., through the application of appropriate routing metrics, to ever-changing conditions of communication (possible degradation of QoS, variable nature of the traffic (real time vs. non real time, sensed data vs. alerts), node mobility, a combination thereof, etc.).

The routing protocol(s) SHOULD be able to dynamically compute, select, and possibly optimize the (multiple) path(s) that will be used by the participating devices to forward the traffic towards the actuators and/or an LBR according to the service-specific and traffic-specific QoS, traffic engineering, and routing security policies that will have to be enforced at the scale of a routing domain (that is, a set of networking devices administered by a globally unique entity), or a region of such a domain (e.g., a metropolitan area composed of clusters of sensors).

6.4.  Support of Highly Directed Information Flows

As pointed out in Section 4.3, it is not uncommon to gather data on a few servers located outside of the U-LLN.  In this case, the reporting of the data readings by a large number of spatially dispersed nodes towards a few LBRs will lead to highly directed information flows.  For instance, a suitable addressing scheme can be devised which facilitates the data flow.  Also, as one gets closer to the LBR, the traffic concentration increases, which may lead to high load imbalances in node usage.

To this end, the routing protocol(s) SHOULD support and take advantage of the large number of highly directed traffic flows to facilitate scalability and parameter constrained routing.

The routing protocol MUST be able to accommodate traffic bursts by dynamically computing and selecting multiple paths towards the same destination.

6.5.  Support of Multicast and Anycast

Routing protocols activated in urban sensor networks MUST support unicast (traffic is sent to a single field device), multicast (traffic is sent to a set of devices that are subscribed to the same multicast group), and anycast (where multiple field devices are configured to accept traffic sent on a single IP anycast address) transmission schemes; a minimal illustration of the anycast case is sketched at the end of this section.

The support of unicast, multicast, and anycast also has implications for the addressing scheme, but these are beyond the scope of this document, which focuses on the routing requirements aspects.

Some urban sensing systems may require low-level addressing of a group of nodes in the same subnet, or of a node representative of a group of nodes, without any prior creation of multicast groups.  Such addressing schemes, where a sender can form an addressable group of receivers, are not currently supported by IPv6 and are not further discussed in this specification [I-D.ietf-roll-home-routing-reqs].

The network SHOULD support internetworking when identical protocols are used, while giving attention to the routing security implications of interfacing, for example, a home network with a utility U-LLN.  The network may support the ability to interact with another network using a different protocol, for example by supporting route redistribution.
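The following sketch, for illustration only, shows the anycast forwarding decision in its simplest form, i.e., delivering to the "nearest" member of a group of LBRs as defined in Section 2; the route entries and costs are invented.

   # Illustration only: pick the "nearest" member of an anycast group.
   routing_table = {      # destination LBR -> known path cost (invented)
       "lbr-1": 7.0,
       "lbr-2": 3.5,
       "lbr-3": 5.0,
   }
   anycast_group = {"lbr-1", "lbr-2", "lbr-3"}   # members of one anycast address

   def anycast_destination(table, group):
       """Return the reachable group member with the lowest path cost."""
       reachable = {dst: cost for dst, cost in table.items() if dst in group}
       return min(reachable, key=reachable.get) if reachable else None

   if __name__ == "__main__":
       print(anycast_destination(routing_table, anycast_group))   # -> lbr-2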
6.6.  Network Dynamicity

Although mobility is assumed to be low in urban LLNs, network dynamicity due to node association, disassociation, and disappearance, as well as long-term link perturbations, is not negligible.  This in turn impacts reorganization and reconfiguration convergence as well as routing protocol convergence.

To this end, local network dynamics SHOULD NOT require the entire network to be re-organized or re-configured; however, the network SHOULD be locally optimized to cater for the encountered changes.  The routing protocol(s) SHOULD support appropriate mechanisms in order to be informed of the association, disassociation, and disappearance of nodes.  The routing protocol(s) SHOULD support appropriate updating mechanisms in order to be informed of changes in connectivity.  The routing protocol(s) SHOULD use this information to initiate protocol-specific mechanisms for reorganization and reconfiguration as necessary to maintain overall routing efficiency.  Convergence and route establishment times SHOULD be significantly lower than the smallest reporting interval.

Differentiation SHOULD be made between node disappearance, where the node disappears without prior notification, and user- or node-initiated disassociation ("phased out"), where the node has enough time to inform the network about its pending removal.

6.7.  Latency

With the exception of alert reporting solutions, and to a certain extent queried reporting, U-LLNs are delay tolerant as long as the information arrives within a fraction of the smallest reporting interval, e.g., a few seconds if reporting is done every 4 hours.

The routing protocol(s) SHOULD also support the ability to route according to different metrics (one of which could, e.g., be latency).

7.  Security Considerations

Like every network, U-LLNs are exposed to routing security threats that need to be addressed.  The wireless and distributed nature of these networks increases the spectrum of potential routing security threats.  This is further amplified by the resource constraints of the nodes, which prevent resource-intensive routing security approaches from being deployed.  A viable routing security approach SHOULD be sufficiently lightweight that it may be implemented across all nodes in a U-LLN.  These issues require special attention during the design process, so as to facilitate a commercially attractive deployment.

The U-LLN MUST deny any node that has not been authenticated to the U-LLN and authorized to participate in the routing decision process.

An attacker SHOULD be prevented from manipulating or disabling the routing function, for example by compromising routing control messages.  To this end, the routing protocol(s) MUST support message integrity.
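As an illustration of the message integrity requirement above, the sketch below protects a routing control message with an HMAC computed with a key shared between authenticated nodes.  This is only one conceivable lightweight mechanism; the message layout and key handling are assumptions, and this document does not mandate any particular algorithm.

   # Illustration only: HMAC-based integrity for routing control messages.
   # Key provisioning and message formats are assumptions of this sketch.
   import hmac
   import hashlib

   def protect(control_message, key):
       """Append an HMAC-SHA-256 tag to an outgoing control message."""
       tag = hmac.new(key, control_message, hashlib.sha256).digest()
       return control_message + tag

   def verify(received, key):
       """Return the message if the tag is valid, otherwise None (drop)."""
       message, tag = received[:-32], received[-32:]
       expected = hmac.new(key, message, hashlib.sha256).digest()
       return message if hmac.compare_digest(tag, expected) else None

   if __name__ == "__main__":
       key = b"example-shared-key"   # how keys are provisioned is out of scope
       msg = b"route-update: prefix=2001:db8::/64 cost=7"
       protected = protect(msg, key)
       assert verify(protected, key) == msg
       tampered = bytes([protected[0] ^ 0x01]) + protected[1:]  # flip one bit
       assert verify(tampered, key) is None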
Further example routing security issues which may arise are the abnormal behavior of nodes which exhibit egoistic conduct, such as not obeying network rules or forwarding no or false packets.  Other important issues may arise in the context of Denial of Service (DoS) attacks, malicious address space allocations, advertisement of variable addresses, a wrong neighborhood, etc.  The routing protocol(s) SHOULD support defense against DoS attacks and other attempts to maliciously or inadvertently cause the routing protocol mechanisms to over-consume the limited resources of LLN nodes, e.g., by constructing forwarding loops or causing excessive routing protocol overhead traffic.

The properties of self-configuration and self-organization which are desirable in a U-LLN introduce additional routing security considerations.  Mechanisms MUST be in place to deny any node which attempts to take malicious advantage of self-configuration and self-organization procedures.  Such attacks may attempt, for example, to cause DoS, to drain the energy of power-constrained devices, or to hijack the routing mechanism.  A node MUST authenticate itself to a trusted node that is already associated with the U-LLN before the former can take part in self-configuration or self-organization.  A node that has already authenticated and associated with the U-LLN MUST deny, to the maximum extent possible, the allocation of resources to any unauthenticated peer.  The routing protocol(s) MUST deny service to any node which has not clearly established trust with the U-LLN.

Consideration SHOULD be given to cases where the U-LLN may interface with other networks such as a home network.  The U-LLN SHOULD NOT interface with any external network which has not established trust.  The U-LLN SHOULD be capable of limiting the resources granted in support of an external network so as not to be vulnerable to DoS.

With low computation power and scarce energy resources, U-LLN nodes may not be able to resist any attack from high-power malicious nodes (e.g., laptops and strong radios).  However, the amount of damage generated to the whole network SHOULD be commensurate with the number of nodes physically compromised.  For example, an intruder taking control over a single node SHOULD NOT be able to completely deny service to the whole network.

In general, the routing protocol(s) SHOULD support the implementation of routing security best practices across the U-LLN.  Such an implementation ought to include defense against, for example, eavesdropping, replay, message insertion, modification, and man-in-the-middle attacks.

The choice of the routing security solutions will have an impact on the routing protocol(s).  To this end, routing protocol(s) proposed in the context of U-LLNs MUST support authentication and integrity measures and SHOULD support confidentiality (routing security) measures.

8.  IANA Considerations

This document makes no request of IANA.

9.  Acknowledgements

The in-depth feedback of JP Vasseur, Jonathan Hui, Iain Calder, and Pasi Eronen is greatly appreciated.

10.  References

10.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2.  Informative References

[I-D.ietf-roll-building-routing-reqs]
           Martocci, J., Riou, N., Mil, P., and W. Vermeylen, "Building Automation Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-building-routing-reqs-05 (work in progress), February 2009.

[I-D.ietf-roll-home-routing-reqs]
Porcu, "Home Automation 855 Routing Requirements in Low Power and Lossy Networks", 856 draft-ietf-roll-home-routing-reqs-06 (work in progress), 857 November 2008. 859 [I-D.ietf-roll-indus-routing-reqs] 860 Networks, D., Thubert, P., Dwars, S., and T. Phinney, 861 "Industrial Routing Requirements in Low Power and Lossy 862 Networks", draft-ietf-roll-indus-routing-reqs-04 (work in 863 progress), January 2009. 865 [I-D.ietf-roll-terminology] 866 Vasseur, J., "Terminology in Low power And Lossy 867 Networks", draft-ietf-roll-terminology-00 (work in 868 progress), October 2008. 870 [Lu2007] J.L. Lu, F. Valois, D. Barthel, M. Dohler, "FISCO: A Fully 871 Integrated Scheme of Self-Configuration and Self- 872 Organization for WSN", IEEE WCNC 2007, Hong Kong, China, 873 11-15 March 2007, pp. 3370-3375. 875 [RFC1546] Partridge, C., Mendez, T., and W. Milliken, "Host 876 Anycasting Service", RFC 1546, November 1993. 878 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing 879 Architecture", RFC 4291, February 2006. 881 Authors' Addresses 883 Mischa Dohler (editor) 884 CTTC 885 Parc Mediterrani de la Tecnologia, Av. Canal Olimpic S/N 886 08860 Castelldefels, Barcelona 887 Spain 889 Email: mischa.dohler@cttc.es 891 Thomas Watteyne (editor) 892 CITI-Lab, INSA-Lyon, INRIA A4RES 893 21 avenue Jean Capelle 894 69621 Lyon 895 France 897 Email: thomas.watteyne@ieee.org 899 Tim Winter (editor) 900 Eka Systems 901 20201 Century Blvd. Suite 250 902 Germantown, MD 20874 903 USA 905 Email: tim.winter@ekasystems.com 906 Dominique Barthel (editor) 907 France Telecom R&D 908 28 Chemin du Vieux Chene 909 38243 Meylan Cedex 910 France 912 Email: Dominique.Barthel@orange-ftgroup.com 914 Christian Jacquenet 915 France Telecom R&D 916 4 rue du Clos Courtel BP 91226 917 35512 Cesson Sevigne 918 France 920 Email: christian.jacquenet@orange-ftgroup.com 922 Giyyarpuram Madhusudan 923 France Telecom R&D 924 28 Chemin du Vieux Chene 925 38243 Meylan Cedex 926 France 928 Email: giyyarpuram.madhusudan@orange-ftgroup.com 930 Gabriel Chegaray 931 France Telecom R&D 932 28 Chemin du Vieux Chene 933 38243 Meylan Cedex 934 France 936 Email: gabriel.chegaray@orange-ftgroup.com