Networking Working Group                                  M. Dohler, Ed.
Internet-Draft                                                      CTTC
Intended status: Informational                          T. Watteyne, Ed.
Expires: July 12, 2009                            CITI-Lab, INRIA A4RES
                                                          T. Winter, Ed.
                                                             Eka Systems
                                                         D. Barthel, Ed.
                                                    France Telecom R&D
                                                        January 08, 2009

    Urban WSNs Routing Requirements in Low Power and Lossy Networks
                  draft-ietf-roll-urban-routing-reqs-03

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on July 12, 2009.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.
   All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.

Abstract

   This document presents the application-specific routing requirements
   for Urban Low Power and Lossy Networks (U-LLNs).  In the near future,
   sensing and actuating nodes will be placed outdoors in urban
   environments so as to improve people's living conditions as well as
   to monitor compliance with increasingly strict environmental laws.
   These field nodes are expected to measure and report a wide gamut of
   data, such as required in smart metering, waste disposal,
   meteorological, pollution, and allergy reporting applications.  The
   majority of these nodes are expected to communicate wirelessly,
   which - given the limited radio range and the large number of nodes -
   requires the use of suitable routing protocols.  The design of such
   protocols will be mainly impacted by the limited resources of the
   nodes (memory, processing power, battery, etc.) and the
   particularities of the outdoor urban application scenarios.  As such,
   for a wireless Routing Over Low power and Lossy networks (ROLL)
   solution to be useful, the protocol(s) ought to be energy-efficient,
   scalable, and autonomous.  This document aims to specify a set of
   requirements reflecting these and further characteristics tailored to
   U-LLNs.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Overview of Urban Low Power Lossy Networks
       3.1.  Canonical Network Elements
             3.1.1.  Sensors
             3.1.2.  Actuators
             3.1.3.  Routers
       3.2.  Topology
       3.3.  Resource Constraints
       3.4.  Link Reliability
   4.  Urban LLN Application Scenarios
       4.1.  Deployment of Nodes
       4.2.  Association and Disassociation/Disappearance of Nodes
       4.3.  Regular Measurement Reporting
       4.4.  Queried Measurement Reporting
       4.5.  Alert Reporting
   5.  Traffic Pattern
   6.  Requirements of Urban LLN Applications
       6.1.  Scalability
       6.2.  Parameter Constrained Routing
       6.3.  Support of Autonomous and Alien Configuration
       6.4.  Support of Highly Directed Information Flows
       6.5.  Support of Multicast, Anycast, and Implementation of
             Groupcast
       6.6.  Network Dynamicity
       6.7.  Latency
   7.  Security Considerations
   8.  IANA Considerations
   9.  Acknowledgements
   10. References
       10.1.  Normative References
       10.2.  Informative References
   Authors' Addresses

1.  Introduction

   This document details application-specific routing requirements for
   Urban Low Power and Lossy Networks (U-LLNs).  U-LLN use cases and
   associated routing protocol requirements will be described.

   Section 2 defines terminology useful in describing U-LLNs.

   Section 3 provides an overview of U-LLN applications.

   Section 4 describes a few typical use cases for U-LLN applications,
   exemplifying deployment problems and related routing issues.

   Section 5 describes traffic flows that will be typical for U-LLN
   applications.

   Section 6 discusses the routing requirements for networks comprising
   such constrained devices in a U-LLN environment.  These requirements
   may overlap with requirements derived from other application-specific
   requirements documents [I-D.ietf-roll-home-routing-reqs]
   [I-D.ietf-roll-indus-routing-reqs]
   [I-D.martocci-roll-building-routing-reqs].

   Section 7 provides an overview of routing security considerations of
   U-LLN implementations.

2.  Terminology

   The terminology used in this document is consistent with and
   incorporates that described in 'Terminology in Low power And Lossy
   Networks' [I-D.ietf-roll-terminology].  This terminology is extended
   in this document as follows:

   Anycast:  Addressing and routing scheme for forwarding packets to at
      least one of the "nearest" interfaces from a group, as described
      in RFC 4291 [RFC4291] and RFC 1546 [RFC1546].

   Autonomous:  Refers to the ability of a routing protocol to function
      independently without requiring any external influence or
      guidance.  Includes self-configuration and self-organization
      capabilities.
   DoS:  Denial of Service, a class of attack that attempts to cause
      resource exhaustion to the detriment of a node or network.

   ISM band:  Industrial, Scientific and Medical band.  This is a region
      of radio spectrum where low power unlicensed devices may generally
      be used, with specific guidance from an applicable local radio
      spectrum authority.

   U-LLN:  Urban Low Power and Lossy Network.

   WLAN:  Wireless Local Area Network.

3.  Overview of Urban Low Power Lossy Networks

3.1.  Canonical Network Elements

   A U-LLN is understood to be a network composed of three key elements,
   i.e.,

   1.  sensors,

   2.  actuators, and

   3.  routers,

   which communicate wirelessly.

3.1.1.  Sensors

   Sensing nodes measure a wide gamut of physical data, including but
   not limited to:

   1.  municipal consumption data, such as smart metering of gas, water,
       electricity, waste, etc.;

   2.  meteorological data, such as temperature, pressure, humidity, UV
       index, strength and direction of wind, etc.;

   3.  pollution data, such as gases (SO2, NOx, CO, ozone), heavy metals
       (e.g., mercury), pH, radioactivity, etc.;

   4.  ambient data, such as allergens (pollen, dust), electromagnetic
       pollution, noise levels, etc.

   Sensor nodes are capable of forwarding data.  Sensor nodes are
   generally not mobile in the majority of near-future roll-outs.  In
   many anticipated roll-outs, sensor nodes may suffer from long-term
   resource constraints.

   A prominent example is a Smart Grid application which consists of a
   city-wide network of smart meters and distribution monitoring
   sensors.  Smart meters in an urban Smart Grid application will
   include electric, gas, and/or water meters typically administered by
   one or multiple utility companies.
   These meters will be capable of advanced sensing functionalities such
   as measuring the quality of electrical service provided to a
   customer, providing granular interval data, or automating the
   detection of alarm conditions.  In addition, they may be capable of
   advanced interactive functionalities, which may invoke an Actuator
   component, such as remote service disconnect or remote demand reset.
   More advanced scenarios include demand response systems for managing
   peak load, and distribution automation systems to monitor the
   infrastructure which delivers energy throughout the urban
   environment.  Sensor nodes capable of providing this type of
   functionality may sometimes be referred to as Advanced Metering
   Infrastructure (AMI).

3.1.2.  Actuators

   Actuator nodes control urban devices upon being instructed by
   signaling traffic; examples are street or traffic lights.  The number
   of actuator points is well below the number of sensing nodes.  Some
   sensing nodes may include an actuator component, e.g., an electric
   meter node with integrated support for remote service disconnect.
   Actuators are capable of forwarding data.  Actuators are not likely
   to be mobile in the majority of near-future roll-outs.  Actuator
   nodes may also suffer from long-term resource constraints, e.g., in
   the case where they are battery powered.

3.1.3.  Routers

   Routers generally act to close coverage and routing gaps within the
   interior of the U-LLN; examples of their use are to:

   1.  prolong the U-LLN's lifetime,

   2.  balance nodes' energy depletion,

   3.  build advanced sensing infrastructures.

   There can be several routers supporting the same U-LLN; however, the
   number of routers is well below the number of sensing nodes.  The
   routers are generally not mobile, i.e., fixed to a random or pre-
   planned location.
   Routers may, but generally do not, suffer from any form of (long-
   term) resource constraint, except that they need to be small and
   sufficiently cheap.  Routers differ from actuator and sensing nodes
   in that they neither control nor sense.

   Some routers provide access to wider infrastructures, such as the
   Internet, and are named Low power and lossy network Border Routers
   (LBRs) in that context.  LBRs also serve as data sinks (e.g., they
   collect and process data from sensors) and sources (e.g., they
   forward instructions to actuators).

3.2.  Topology

   Whilst millions of sensing nodes may very well be deployed in an
   urban area, they are likely to be associated with more than one
   network.  These networks may or may not communicate with one another.
   The number of sensing nodes deployed in the urban environment in
   support of some applications is expected to be in the order of 10^2
   to 10^7; this is still very large and unprecedented in current roll-
   outs.

   Deployment of nodes is likely to happen in batches, e.g., boxes of
   hundreds to thousands of nodes arrive and are deployed.  The location
   of the nodes is random within given topological constraints, e.g.,
   placement along a road, river, or at individual residences.

3.3.  Resource Constraints

   The nodes are highly resource constrained, i.e., cheap hardware, low
   memory, and no infinite energy source.  Different node powering
   mechanisms are available, such as:

   1.  non-rechargeable battery;

   2.  rechargeable battery with regular recharging (e.g., sunlight);

   3.  rechargeable battery with irregular recharging (e.g.,
       opportunistic energy scavenging);

   4.  capacitive/inductive energy provision (e.g., passive Radio
       Frequency IDentification (RFID));

   5.  always on (e.g., powered electricity meter).
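   The powering classes above imply very different willingness of a node
   to relay traffic on behalf of others.  As a non-normative sketch
   (the class names and cost multipliers below are assumptions chosen
   purely for illustration, not values from any ROLL specification),
   a routing metric could scale a node's relay cost by its power
   profile:

```python
# Illustrative sketch only: maps the five powering classes listed
# above to a hypothetical relay-cost multiplier.  Names and weights
# are assumptions for illustration, not normative values.
RELAY_COST_FACTOR = {
    "non_rechargeable": 4.0,    # every forwarded packet shortens lifetime
    "regular_recharge": 1.5,    # e.g., solar: predictable replenishment
    "irregular_recharge": 2.5,  # opportunistic scavenging, less predictable
    "passive": float("inf"),    # e.g., passive RFID: cannot act as relay
    "always_on": 1.0,           # mains-powered: cheapest relay
}

def link_cost(base_etx: float, power_class: str) -> float:
    """Scale a base link metric (e.g., ETX) by the relaying node's
    power profile so that routes prefer mains-powered nodes."""
    return base_etx * RELAY_COST_FACTOR[power_class]
```

   Under these assumed weights, a two-hop route through always-on
   meters (cost 2.0) would be preferred over a single equally reliable
   hop through a non-rechargeable battery node (cost 4.0).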
   In the case of a battery powered sensing node, the battery shelf life
   is usually in the order of 10 to 15 years, rendering network lifetime
   maximization with battery powered nodes beyond this lifespan useless.

   The physical and electromagnetic distances between the three key
   elements, i.e., sensors, actuators, and routers, can generally be
   very large, i.e., from several hundreds of meters to one kilometer.
   Not every field node is likely to reach the LBR in a single hop,
   thereby requiring suitable routing protocols which manage the
   information flow in an energy-efficient manner.

3.4.  Link Reliability

   The links between the network elements are volatile due to the
   following set of non-exclusive effects:

   1.  packet errors due to wireless channel effects;

   2.  packet errors due to MAC (Medium Access Control) effects (e.g.,
       collision);

   3.  packet errors due to interference from other systems;

   4.  link unavailability due to network dynamicity; etc.

   The wireless channel causes the received power to drop below a given
   threshold in a random fashion, thereby causing detection errors in
   the receiving node.  The underlying effects are path loss, shadowing,
   and fading.

   Since the wireless medium is broadcast in nature, nodes within mutual
   communication range require suitable medium access control protocols
   which are capable of resolving any arising contention.  Some
   available protocols may not be able to prevent packets of neighboring
   nodes from colliding, possibly leading to a high Packet Error Rate
   (PER) and causing a link outage.

   Furthermore, the outdoor deployment of U-LLNs also has implications
   for the interference temperature, and hence link reliability and
   range, if Industrial, Scientific and Medical (ISM) bands are to be
   used.
   For instance, if the 2.4 GHz ISM band is used to facilitate
   communication between U-LLN nodes, then heavily loaded Wireless Local
   Area Network (WLAN) hot-spots may become a detrimental performance
   factor, leading to high PER and jeopardizing the functioning of the
   U-LLN.

   Finally, nodes appearing and disappearing cause dynamics in the
   network which can yield link outages and changes of topologies.

4.  Urban LLN Application Scenarios

   Urban applications represent a special segment of LLNs with their own
   unique set of requirements.  To facilitate the requirements
   discussion in Section 6, this section lists a few typical, but not
   exhaustive, deployment problems and use cases of U-LLNs.

4.1.  Deployment of Nodes

   Contrary to other LLN applications, deployment of nodes is likely to
   happen in batches out of a box.  Typically, hundreds to thousands of
   nodes are shipped by the manufacturer with pre-programmed
   functionalities, which are then rolled out by a service provider or
   subcontracted entities.  Prior to or after roll-out, the network
   needs to be ramped up.  This initialization phase may include, among
   others, allocation of addresses, (possibly hierarchical) roles in the
   network, synchronization, determination of schedules, etc.

   If initialization is performed prior to roll-out, all nodes are
   likely to be in one another's 1-hop radio neighborhood.  Pre-
   programmed Media Access Control (MAC) and routing protocols may hence
   fail to function properly, thereby wasting a large amount of energy.
   Whilst the major burden will be on resolving MAC conflicts, any
   proposed U-LLN routing protocol needs to cater for such a case.  For
   instance, 0-configuration and network address allocation need to be
   properly supported, etc.

   After roll-out, nodes will have a finite set of one-hop neighbors,
   likely of low cardinality (in the order of 5 to 10).
   However, some nodes may be deployed in areas where there are hundreds
   of neighboring devices.  In the resulting topology there may be
   regions where many (redundant) paths are possible through the
   network.  Other regions may depend on critical links to achieve
   connectivity with the rest of the network.  Any proposed LLN routing
   protocol ought to support the autonomous self-organization and self-
   configuration of the network at the lowest possible energy cost
   [Lu2007], where autonomy is understood to be the ability of the
   network to operate without external influence.  The result of such
   organization should be that each node or set of nodes is uniquely
   addressable so as to facilitate the set-up of schedules, etc.

   Unless exceptionally needed, broadcast forwarding schemes are not
   advised in urban sensor networking environments.

4.2.  Association and Disassociation/Disappearance of Nodes

   After the initialization phase and possibly some operational time,
   new nodes may be injected into the network as well as existing nodes
   removed from the network.  The former might be because a removed node
   is replaced as part of maintenance, because new nodes are added for
   denser readings/actuations, or because routing protocols report
   connectivity problems.  The latter might be because a node's battery
   is depleted, the node is removed for maintenance, the node is stolen
   or accidentally destroyed, etc.

   The protocol(s) hence should be able to convey information about
   malfunctioning nodes which may affect or jeopardize the overall
   routing efficiency, so that the self-organization and self-
   configuration capabilities of the sensor network might be solicited
   to facilitate the appropriate reconfiguration.  This information may,
   e.g., include exact or relative geographical position, etc.
   The reconfiguration may include the change of hierarchies, routing
   paths, packet forwarding schedules, etc.  Furthermore, to inform the
   LBR(s) of a node's arrival and association with the network, as well
   as to inform freshly associated nodes about packet forwarding
   schedules, roles, etc., appropriate updating mechanisms should be
   supported.

4.3.  Regular Measurement Reporting

   The majority of sensing nodes will be configured to report their
   readings on a regular basis.  The frequencies of data sensing and
   reporting may differ but are generally expected to be fairly low,
   i.e., in the range of once per hour, per day, etc.  The ratio between
   data sensing and reporting frequencies will determine the memory and
   data aggregation capabilities of the nodes.  Latency of end-to-end
   delivery and acknowledgements of successful data delivery may not be
   vital, as sensing outages can be observed at the LBR(s) - when, for
   instance, there is no reading arriving from a given sensor or cluster
   of sensors within a day.  In this case, a query can be launched to
   check upon the state and availability of a sensing node or sensing
   cluster.

   The protocol(s) hence should be optimized to support a large number
   of highly directional unicast flows from the sensing nodes or sensing
   clusters towards an LBR, or highly directed multicast or anycast
   flows from the nodes towards multiple LBRs.

   Route computation and selection may depend on the transmitted
   information, the frequency of reporting, the amount of energy
   remaining in the nodes, the recharging pattern of energy-scavenged
   nodes, etc.  For instance, temperature readings could be reported
   every hour via one set of battery powered nodes, whereas air quality
   indicators are reported only during daytime via nodes powered by
   solar energy.  More generally, entire routing areas may be avoided
   (e.g.,
   at night) but heavily used during the day when nodes are scavenging
   from sunlight.

4.4.  Queried Measurement Reporting

   Occasionally, network-external data queries can be launched by one or
   several LBRs.  For instance, it is desirable to know the level of
   pollution at a specific point or along a given road in the urban
   environment.  The queries' rates of occurrence are not regular but
   rather random, where heavy-tail distributions seem appropriate to
   model their behavior.  Query responses do not necessarily need to be
   delivered to the same LBR from which the query was launched.  Round-
   trip times, i.e., from the launch of a query from an LBR to the
   delivery of the measured data to an LBR, are of importance.  However,
   they are not very stringent: latencies should simply be sufficiently
   smaller than typical reporting intervals, for instance, in the order
   of seconds or minutes.  The routing protocol(s) should consider the
   selection of paths with appropriate (e.g., latency) metrics to
   support queried measurement reporting.  To facilitate the query
   process, U-LLN network devices should support unicast and multicast
   routing capabilities.

   The same approach is also applicable for schedule updates,
   provisioning of patches and upgrades, etc.  In this case, however,
   the provision of acknowledgements and the support of unicast,
   multicast, and anycast are of importance.

4.5.  Alert Reporting

   Rarely, the sensing nodes will measure an event which classifies as
   an alarm, where such a classification is typically done locally
   within each node by means of a pre-programmed or previously diffused
   threshold.  Note that on approaching the alert threshold level, nodes
   may wish to change their sensing and reporting cycles.
   An alarm is likely to be registered by a plurality of sensing nodes,
   where the delivery of a single alert message with its location of
   origin suffices in most, but not all, cases.  One example of alert
   reporting is if the level of toxic gases rises above a threshold,
   whereupon the sensing nodes in the vicinity of this event report the
   danger.  Another example of alert reporting is when a recycling glass
   container - equipped with a sensor measuring its level of occupancy -
   reports that the container is full and hence needs to be emptied.

   Routes clearly need to be unicast (towards one LBR) or multicast
   (towards multiple LBRs).  Delays and latencies are important;
   however, again, deliveries within seconds should suffice in most
   cases.

5.  Traffic Pattern

   Unlike traditional ad hoc networks, the information flow in U-LLNs is
   highly directional.  There are three main flows to be distinguished:

   1.  sensed information from the sensing nodes towards one or a subset
       of the LBR(s);

   2.  query requests from the LBR(s) towards the sensing nodes;

   3.  control information from the LBR(s) towards the actuators.

   Some of the flows may need the reverse route for delivering
   acknowledgements.  Finally, in the future, some direct information
   flows between field devices without LBRs may also occur.

   Sensed data is likely to be highly correlated in space, time, and
   observed events; an example of the latter is when temperature
   increases and humidity decreases as the day commences.  Data may be
   sensed and delivered at different rates, with both rates being
   typically fairly low, i.e., in the range of minutes, hours, days,
   etc.  Data may be delivered regularly according to a schedule or a
   regular query; it may also be delivered irregularly after an
   externally triggered query; it may also be triggered after a sudden
   network-internal event or alert.
   Schedules may be driven by, for example, a smart-metering application
   where data is expected to be delivered every hour, or an
   environmental monitoring application where a battery powered node is
   expected to report its status at a specific time once a day.  Data
   delivery may trigger acknowledgements or maintenance traffic in the
   reverse direction.  The network hence needs to be able to adjust to
   the varying activity duty cycles, as well as to periodic and sporadic
   traffic.  Also, sensed data ought to be secured and locatable.

   Some data delivery may have tight latency requirements, for example
   in a case such as a live meter reading for customer service in a
   smart-metering application, or in a case where a sensor reading
   response must arrive within a certain time in order to be useful.
   The network should take into consideration that different application
   traffic may require different priorities in the selection of a route
   when traversing the network, and that some traffic may be more
   sensitive to latency.

   A U-LLN should support occasional large-scale traffic flows from
   sensing nodes to LBRs, such as system-wide alerts.  In the example of
   an AMI U-LLN, this could be in response to events such as a city-wide
   power outage.  In this scenario, all powered devices in a large
   segment of the network may have lost power and are running off a
   temporary 'last gasp' source such as a capacitor or small battery.  A
   node must be able to send its own alerts toward an LBR while
   continuing to forward traffic on behalf of other devices that are
   also experiencing an alert condition.  The network needs to be able
   to manage this sudden large traffic flow.
   It may be useful for the routing layer to collaborate with the
   application layer to perform data aggregation, in order to reduce the
   total volume of a large traffic flow and make more efficient use of
   the limited energy available.

   A U-LLN may also need to support efficient large-scale messaging to
   groups of actuators.  For example, an AMI U-LLN supporting a city-
   wide demand response system will need to efficiently broadcast demand
   response control information to a large subset of actuators in the
   system.

   Some scenarios will require internetworking between the U-LLN and
   another network, such as a home network.  For example, an AMI
   application that implements a demand-response system may need to
   forward traffic from a utility, across the U-LLN, into a home
   automation network.  A typical use case would be to inform a customer
   of incentives to reduce demand during peaks, or to automatically
   adjust the thermostat of customers who have enrolled in such a demand
   management program.  Subsequent traffic may be triggered to flow back
   through the U-LLN to the utility.

6.  Requirements of Urban LLN Applications

   Urban low power and lossy network applications have a number of
   specific requirements related to the set of operating conditions, as
   exemplified in the previous sections.

6.1.  Scalability

   The large and diverse measurement space of U-LLN nodes - coupled with
   the typically large urban areas - will yield extremely large network
   sizes.  Current urban roll-outs are sometimes composed of more than
   one hundred nodes; future roll-outs, however, may easily reach
   numbers in the tens of thousands to millions.  Scalability is hence
   one of the most important LLN routing protocol design criteria.
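   One way to keep such a population manageable is to organize nodes
   into bounded-size regions.  The following minimal sketch illustrates
   the idea only; the geographic bucketing, cell size, and region cap
   are hypothetical choices of this example, not mechanisms mandated by
   this document:

```python
from collections import defaultdict

def organize_regions(nodes, cell_size=500.0, max_region_size=10_000):
    """Group nodes, given as (node_id, x, y) tuples with positions in
    meters, into regions by geographic cell, splitting any cell that
    exceeds the configurable maximum region size.  The defaults are
    illustrative, not normative values."""
    cells = defaultdict(list)
    for node_id, x, y in nodes:
        cells[(int(x // cell_size), int(y // cell_size))].append(node_id)
    regions = []
    for members in cells.values():
        # Split overfull cells so that every region respects the cap.
        for i in range(0, len(members), max_region_size):
            regions.append(members[i:i + max_region_size])
    return regions
```

   For a roll-out of 10^6 nodes across a city, such a scheme yields
   regions whose size is bounded by the configured cap, which a routing
   protocol could then treat as addressable clusters.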
572 The routing protocol(s) MUST be capable of supporting the 573 organization of a large number of sensing nodes into regions 574 containing on the order of 10^2 to 10^4 sensing nodes each. 576 The routing protocol(s) MUST be scalable so as to accommodate a very 577 large and increasing number of nodes without deteriorating selected 578 performance parameters below configurable thresholds. The routing 579 protocols(s) SHOULD support the organization of a large number of 580 nodes into regions of configurable size. 582 6.2. Parameter Constrained Routing 584 Batteries in some nodes may deplete quicker than in others; the 585 existence of one node for the maintenance of a routing path may not 586 be as important as of another node; the battery scavenging methods 587 may recharge the battery at regular or irregular intervals; some 588 nodes may have a constant power source; some nodes may have a larger 589 memory and are hence be able to store more neighborhood information; 590 some nodes may have a stronger CPU and are hence able to perform more 591 sophisticated data aggregation methods; etc. 593 To this end, the routing protocol(s) MUST support parameter 594 constrained routing, where examples of such parameters (CPU, memory 595 size, battery level, etc.) have been given in the previous paragraph. 597 Routing within urban sensor networks SHOULD require the U-LLN nodes 598 to dynamically compute, select and install different paths towards a 599 same destination, depending on the nature of the traffic. Such 600 functionality in support of, for example, data aggregation, may imply 601 collaboration between the routing function, the forwarding function, 602 and the application. From this perspective, such nodes MAY, for 603 example, identify anycast endpoints, identify traffic flows, or 604 inspect the contents of traffic payload to inform routing and 605 forwarding decisions. 607 6.3. 
Support of Autonomous and Alien Configuration

With the large number of nodes, manually configuring and troubleshooting each node is not efficient. The scale and the large number of possible topologies that may be encountered in the U-LLN encourage the development of automated management capabilities that may (partly) rely upon self-organizing techniques. The network is expected to self-organize and self-configure according to some predefined rules and protocols, as well as to support externally triggered configurations (for instance through a commissioning tool which may facilitate the organization of the network at a minimum energy cost).

To this end, the routing protocol(s) MUST provide a set of features including zero-configuration at network ramp-up, (network-internal) self-organization and configuration due to topological changes, and the ability to support (network-external) patches and configuration updates. For the latter, the protocol(s) MUST support multicast and anycast addressing. The protocol(s) SHOULD also support the formation and identification of groups of field devices in the network.

The routing protocol(s) SHOULD be able to dynamically adapt, e.g. through the application of appropriate routing metrics, to ever-changing communication conditions (possible degradation of QoS, variable nature of the traffic (real time vs. non real time, sensed data vs. alerts), node mobility, a combination thereof, etc.).
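The metric-driven adaptation just described could be realized, for instance, as a per-traffic-class weighted path cost. The sketch below is purely illustrative and not mandated by this document: the traffic classes, weights, and ETX-style link quality figures are hypothetical, chosen so that alerts favor low-latency links while periodic sensed data favors reliable ones.

```python
# Hypothetical per-class weights: alerts favor low latency,
# periodic sensed data favors reliable (low-ETX) links.
CLASS_WEIGHTS = {
    "alert":       {"latency": 0.8, "etx": 0.2},
    "sensed_data": {"latency": 0.2, "etx": 0.8},
}

def path_cost(links, traffic_class):
    """Sum weighted per-link costs for one candidate path.

    Each link is a dict with 'latency_ms' and 'etx' fields.
    """
    w = CLASS_WEIGHTS[traffic_class]
    return sum(w["latency"] * link["latency_ms"] + w["etx"] * link["etx"]
               for link in links)

def select_path(candidate_paths, traffic_class):
    # Pick the candidate path with the lowest class-specific cost.
    return min(candidate_paths, key=lambda p: path_cost(p, traffic_class))
```

With such a scheme, the same pair of candidate paths can yield different selections depending on whether an alert or a routine reading is being forwarded, which is exactly the traffic-dependent behavior the requirement calls for.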
The routing protocol(s) SHOULD be able to dynamically compute, select and possibly optimize the (multiple) path(s) that will be used by the participating devices to forward the traffic towards the actuators and/or an LBR according to the service-specific and traffic-specific QoS, traffic engineering and routing security policies that will have to be enforced at the scale of a routing domain (that is, a set of networking devices administered by a globally unique entity), or a region of such a domain (e.g. a metropolitan area composed of clusters of sensors).

6.4. Support of Highly Directed Information Flows

The reporting of data readings by a large number of spatially dispersed nodes towards a few LBRs will lead to highly directed information flows. For instance, a suitable addressing scheme can be devised to facilitate this data flow. Also, traffic concentration increases as one gets closer to the LBR, which may lead to high load imbalances in node usage.

To this end, the routing protocol(s) SHOULD recognize and take advantage of the large number of highly directed traffic flows to facilitate scalability and parameter constrained routing.

The routing protocol MUST be able to accommodate traffic bursts by dynamically computing and selecting multiple paths towards the same destination.

6.5. Support of Multicast, Anycast, and Implementation of Groupcast

Some urban sensing systems require low-level addressing of a group of nodes in the same subnet, or of a node representative of a group of nodes, without any prior creation of multicast groups, simply carrying a list of recipients in the subnet [I-D.ietf-roll-home-routing-reqs].
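The "list of recipients" idea can be sketched minimally as follows; the message fields are hypothetical (the actual encoding is an addressing-scheme question outside the scope of this document). The key point is that group membership travels inside the message, so no receiver has to join anything in advance.

```python
def make_groupcast(sender, recipients, payload):
    # The sender enumerates the recipients itself; no receiver
    # has to join a multicast group beforehand.
    return {"src": sender,
            "recipients": frozenset(recipients),
            "payload": payload}

def on_receive(node_id, msg):
    # A node delivers the payload to its application only if it
    # is listed; membership state lives entirely in the message,
    # not in per-node group subscriptions.
    return msg["payload"] if node_id in msg["recipients"] else None
```

This contrasts with IP multicast, where receivers hold the membership state and the sender does not know who they are.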
Routing protocols activated in urban sensor networks MUST support unicast (traffic is sent to a single field device), multicast (traffic is sent to a set of devices that are subscribed to the same multicast group), and anycast (multiple field devices are configured to accept traffic sent to a single IP anycast address) transmission schemes.

With IP multicast, signaling mechanisms are used by a receiver to join a group, and the sender does not know the individual receivers of the group. Groupcast gives a sender the ability to address a group of receivers known to the sender even if the receivers do not know that they have been grouped by the sender (since requesting each individual node to join a multicast group would be very energy-consuming). Routing protocols activated in urban sensor networks SHOULD accommodate such "groupcast" forwarding schemes.

The support of unicast, groupcast, multicast, and anycast also has implications for the addressing scheme, but these are beyond the scope of this document, which focuses on the routing requirements aspects.

The network SHOULD support internetworking when identical protocols are used, while giving attention to the routing security implications of interfacing, for example, a home network with a utility U-LLN. The network may support the ability to interact with another network using a different protocol, for example by supporting route redistribution.

6.6. Network Dynamicity

Although mobility is assumed to be low in urban LLNs, network dynamicity due to node association, disassociation and disappearance, as well as long-term link perturbations, is not negligible. This in turn impacts reorganization and reconfiguration convergence as well as routing protocol convergence.
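One common way a node can detect such disappearance locally is a soft-state neighbor table: an entry expires if a neighbor has not been heard within a timeout, so only the routes through that neighbor need repair rather than the whole network. The sketch below is illustrative only (the class and method names are invented); an explicit clock parameter keeps it deterministic.

```python
class NeighborTable:
    """Soft-state neighbor tracking: stale neighbors are expired
    locally, without triggering network-wide reconfiguration."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heard = {}

    def heard(self, neighbor, now):
        # Refresh (or create) the entry on any message from the neighbor.
        self.last_heard[neighbor] = now

    def expire(self, now):
        # Return neighbors that silently disappeared; the caller would
        # then re-route only the paths that used them, which is the
        # kind of local optimization the requirements below call for.
        stale = [n for n, t in self.last_heard.items()
                 if now - t > self.timeout]
        for n in stale:
            del self.last_heard[n]
        return stale
```

A node that announces its departure ("phased-out" disassociation) could instead be removed immediately, without waiting for the timeout.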
To this end, local network dynamics SHOULD NOT require the entire network to be re-organized or re-configured; rather, the network SHOULD be locally optimized to cater for the encountered changes. The routing protocol(s) SHOULD support appropriate mechanisms in order to be informed of the association, disassociation, and disappearance of nodes. The routing protocol(s) SHOULD support appropriate updating mechanisms in order to be informed of changes in connectivity. The routing protocol(s) SHOULD use this information to initiate protocol specific mechanisms for reorganization and reconfiguration as necessary to maintain overall routing efficiency. Convergence and route establishment times SHOULD be significantly lower than the smallest reporting interval.

Differentiation SHOULD be made between node disappearance, where the node disappears without prior notification, and user- or node-initiated disassociation ("phased-out"), where the node has enough time to inform the network about its pending removal.

6.7. Latency

With the exception of alert reporting solutions and, to a certain extent, queried reporting, U-LLNs are delay tolerant as long as the information arrives within a fraction of the smallest reporting interval, e.g. a few seconds if reporting is done every 4 hours.

The routing protocol(s) SHOULD also support the ability to route according to different metrics (one of which could, e.g., be latency).

7. Security Considerations

Like any network, U-LLNs are exposed to routing security threats that need to be addressed. The wireless and distributed nature of these networks increases the spectrum of potential routing security threats. This is further amplified by the resource constraints of the nodes, which prevent resource-intensive routing security approaches from being deployed.
A viable routing security approach SHOULD be sufficiently lightweight that it may be implemented across all nodes in a U-LLN. These issues require special attention during the design process, so as to facilitate a commercially attractive deployment.

The U-LLN network MUST deny all routing services to any node that has not been authenticated to the U-LLN and authorized for the use of routing services.

An attacker SHOULD be prevented from manipulating or disabling the routing function, for example by compromising routing control messages. To this end, the routing protocol(s) MUST support message integrity.

Further routing security issues may arise from the abnormal behavior of nodes exhibiting egoistic conduct, such as not obeying network rules, or forwarding no packets or false packets. Other important issues may arise in the context of Denial of Service (DoS) attacks, malicious address space allocations, advertisement of variable addresses, a wrong neighborhood, etc. The routing protocol(s) SHOULD support defense against DoS attacks and other attempts to maliciously or inadvertently cause the routing protocol(s) mechanisms to over-consume the limited resources of LLN nodes, e.g. by constructing forwarding loops or causing excessive routing protocol overhead traffic.

The properties of self-configuration and self-organization which are desirable in a U-LLN introduce additional routing security considerations. Mechanisms MUST be in place to deny any node which attempts to take malicious advantage of self-configuration and self-organization procedures. Such attacks may attempt, for example, to cause DoS, drain the energy of power-constrained devices, or hijack the routing mechanism.
A node MUST authenticate itself to a trusted node that is already associated with the U-LLN before the former can take part in self-configuration or self-organization. A node that has already authenticated and associated with the U-LLN MUST deny, to the maximum extent possible, the allocation of resources to any unauthenticated peer. The routing protocol(s) MUST deny service to any node which has not clearly established trust with the U-LLN.

Consideration SHOULD be given to cases where the U-LLN may interface with other networks such as a home network. The U-LLN SHOULD NOT interface with any external network which has not established trust. The U-LLN SHOULD be capable of limiting the resources granted in support of an external network so as not to be vulnerable to DoS.

With low computation power and scarce energy resources, U-LLN nodes may not be able to resist attacks from high-power malicious nodes (e.g. laptops and strong radios). However, the amount of damage inflicted on the whole network SHOULD be commensurate with the number of nodes physically compromised. For example, an intruder taking control over a single node SHOULD NOT be able to completely deny service to the whole network.

In general, the routing protocol(s) SHOULD support the implementation of routing security best practices across the U-LLN. Such an implementation ought to include defense against, for example, eavesdropping, replay, message insertion, modification, and man-in-the-middle attacks.

The choice of routing security solutions will have an impact on the routing protocol(s). To this end, routing protocol(s) proposed in the context of U-LLNs MUST support authentication and integrity measures and SHOULD support confidentiality (routing security) measures.

8. IANA Considerations

This document makes no request of IANA.

9.
Acknowledgements

The in-depth feedback of JP Vasseur, Jonathan Hui, and Iain Calder is greatly appreciated.

10. References

10.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2. Informative References

[I-D.ietf-roll-home-routing-reqs] Brandt, A., Buron, J., and G. Porcu, "Home Automation Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-home-routing-reqs-06 (work in progress), November 2008.

[I-D.ietf-roll-indus-routing-reqs] Pister, K., Thubert, P., Dwars, S., and T. Phinney, "Industrial Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-indus-routing-reqs-03 (work in progress), December 2008.

[I-D.ietf-roll-terminology] Vasseur, J., "Terminology in Low power And Lossy Networks", draft-ietf-roll-terminology-00 (work in progress), October 2008.

[I-D.martocci-roll-building-routing-reqs] Martocci, J., Riou, N., Mil, P., and W. Vermeylen, "Building Automation Routing Requirements in Low Power and Lossy Networks", draft-martocci-roll-building-routing-reqs-01 (work in progress), October 2008.

[Lu2007] Lu, J.L., Valois, F., Barthel, D., and M. Dohler, "FISCO: A Fully Integrated Scheme of Self-Configuration and Self-Organization for WSN", IEEE WCNC 2007, Hong Kong, China, 11-15 March 2007, pp. 3370-3375.

[RFC1546] Partridge, C., Mendez, T., and W. Milliken, "Host Anycasting Service", RFC 1546, November 1993.

[RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing Architecture", RFC 4291, February 2006.

Authors' Addresses

Mischa Dohler (editor)
CTTC
Parc Mediterrani de la Tecnologia, Av.
Canal Olimpic S/N
08860 Castelldefels, Barcelona
Spain

Email: mischa.dohler@cttc.es

Thomas Watteyne (editor)
CITI-Lab, INSA-Lyon, INRIA A4RES
21 avenue Jean Capelle
69621 Lyon
France

Email: thomas.watteyne@ieee.org

Tim Winter (editor)
Eka Systems
20201 Century Blvd. Suite 250
Germantown, MD 20874
USA

Email: tim.winter@ekasystems.com

Dominique Barthel (editor)
France Telecom R&D
28 Chemin du Vieux Chene
38243 Meylan Cedex
France

Email: Dominique.Barthel@orange-ftgroup.com

Christian Jacquenet
France Telecom R&D
4 rue du Clos Courtel BP 91226
35512 Cesson Sevigne
France

Email: christian.jacquenet@orange-ftgroup.com

Giyyarpuram Madhusudan
France Telecom R&D
28 Chemin du Vieux Chene
38243 Meylan Cedex
France

Email: giyyarpuram.madhusudan@orange-ftgroup.com

Gabriel Chegaray
France Telecom R&D
28 Chemin du Vieux Chene
38243 Meylan Cedex
France

Email: gabriel.chegaray@orange-ftgroup.com