Networking Working Group                                  M. Dohler, Ed.
Internet-Draft                                                      CTTC
Intended status: Informational                          T. Watteyne, Ed.
Expires: April 23, 2009                           CITI-Lab, INRIA A4RES
                                                          T. Winter, Ed.
                                                             Eka Systems
                                                         D. Barthel, Ed.
                                                      France Telecom R&D
                                                        October 20, 2008

    Urban WSNs Routing Requirements in Low Power and Lossy Networks
                 draft-ietf-roll-urban-routing-reqs-02

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on April 23, 2009.

Abstract

This document presents the application-specific routing requirements for Urban Low Power and Lossy Networks (U-LLNs).  In the near future, sensing and actuating nodes will be placed outdoors in urban environments so as to improve people's living conditions as well as to monitor compliance with increasingly strict environmental laws.  These field nodes are expected to measure and report a wide gamut of data, such as required in smart metering, waste disposal, meteorological, pollution and allergy reporting applications.  The majority of these nodes are expected to communicate wirelessly, which - given the limited radio range and the large number of nodes - requires the use of suitable routing protocols.  The design of such protocols will be mainly impacted by the limited resources of the nodes (memory, processing power, battery, etc.) and the particularities of the outdoor urban application scenarios.  As such, for a wireless Routing Over Low power and Lossy networks (ROLL) solution to be useful, the protocol(s) ought to be energy-efficient, scalable, and autonomous.  This document aims to specify a set of requirements reflecting these and further U-LLN-tailored characteristics.

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Overview of Urban Low Power Lossy Networks
     3.1.  Canonical Network Elements
       3.1.1.  Sensors
       3.1.2.  Actuators
       3.1.3.  Routers
     3.2.  Topology
     3.3.  Resource Constraints
     3.4.  Link Reliability
   4.  Urban LLN Application Scenarios
     4.1.  Deployment of Nodes
     4.2.  Association and Disassociation/Disappearance of Nodes
     4.3.  Regular Measurement Reporting
     4.4.  Queried Measurement Reporting
     4.5.  Alert Reporting
   5.  Traffic Pattern
   6.  Requirements of Urban LLN Applications
     6.1.  Scalability
     6.2.  Parameter Constrained Routing
     6.3.  Support of Autonomous and Alien Configuration
     6.4.  Support of Highly Directed Information Flows
     6.5.  Support of Multicast, Anycast, and Implementation of Groupcast
     6.6.  Network Dynamicity
     6.7.  Latency
   7.  Security Considerations
   8.  IANA Considerations
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

This document details application-specific routing requirements for Urban Low Power and Lossy Networks (U-LLNs).  U-LLN use cases and the associated routing protocol requirements are described.

Section 2 defines terminology useful in describing U-LLNs.

Section 3 provides an overview of U-LLN applications.

Section 4 describes a few typical use cases for U-LLN applications, exemplifying deployment problems and related routing issues.

Section 5 describes traffic flows that will be typical for U-LLN applications.

Section 6 discusses the routing requirements for networks comprising such constrained devices in a U-LLN environment.  These requirements may overlap with requirements derived from other application-specific requirements documents [I-D.ietf-roll-home-routing-reqs] [I-D.ietf-roll-indus-routing-reqs] [I-D.martocci-roll-building-routing-reqs].

Section 7 provides an overview of routing security considerations of U-LLN implementations.

2.  Terminology

The terminology used in this document is consistent with and incorporates that described in 'Terminology in Low power And Lossy Networks' [I-D.vasseur-roll-terminology].  This terminology is extended in this document as follows:

Anycast:  Addressing and routing scheme for forwarding packets to at least one of the "nearest" interfaces from a group, as described in RFC 4291 [RFC4291] and RFC 1546 [RFC1546].

Autonomous:  Refers to the ability of a routing protocol to function independently, without requiring any external influence or guidance.  Includes self-configuration and self-organization capabilities.

ISM band:  Industrial, Scientific and Medical band.  This is a region of radio spectrum where low power unlicensed devices may generally be used, with specific guidance from an applicable local radio spectrum authority.

U-LLN:  Urban Low Power and Lossy Network.
WLAN:  Wireless Local Area Network.

3.  Overview of Urban Low Power Lossy Networks

3.1.  Canonical Network Elements

A U-LLN is understood to be a network composed of three key elements, namely

1.  sensors,

2.  actuators, and

3.  routers,

which communicate wirelessly.

3.1.1.  Sensors

Sensing nodes measure a wide gamut of physical data, including but not limited to:

1.  municipal consumption data, such as smart metering of gas, water, electricity, waste, etc.;

2.  meteorological data, such as temperature, pressure, humidity, UV index, strength and direction of wind, etc.;

3.  pollution data, such as gases (SO2, NOx, CO, ozone), heavy metals (e.g. mercury), pH, radioactivity, etc.;

4.  ambient data, such as allergenic elements (pollen, dust), electromagnetic pollution, noise levels, etc.

Sensor nodes are capable of forwarding data.  Sensor nodes are generally not mobile in the majority of near-future roll-outs.  In many anticipated roll-outs, sensor nodes may suffer from long-term resource constraints.

A prominent example is a Smart Grid application which consists of a city-wide network of smart meters and distribution monitoring sensors.  Smart meters in an urban Smart Grid application will include electric, gas, and/or water meters, typically administered by one or multiple utility companies.  These meters will be capable of advanced sensing functionalities such as measuring the quality of electrical service provided to a customer, providing granular interval data, or automating the detection of alarm conditions.  In addition, they may be capable of advanced interactive functionalities, which may invoke an actuator component, such as remote service disconnect or remote demand reset.  More advanced scenarios include demand response systems for managing peak load, and distribution automation systems to monitor the infrastructure which delivers energy throughout the urban environment.  Sensor nodes capable of providing this type of functionality may sometimes be referred to as Advanced Metering Infrastructure (AMI).

3.1.2.  Actuators

Actuator nodes control urban devices upon being instructed by signaling traffic; examples are street or traffic lights.  The number of actuation points is well below the number of sensing nodes.  Some sensing nodes may include an actuator component, e.g. an electric meter node with integrated support for remote service disconnect.  Actuators are capable of forwarding data.  Actuators are not likely to be mobile in the majority of near-future roll-outs.  Actuator nodes may also suffer from long-term resource constraints, e.g. in the case where they are battery powered.

3.1.3.  Routers

Routers generally act to close coverage and routing gaps within the interior of the U-LLN; examples of their use are to:

1.  prolong the U-LLN's lifetime,

2.  balance nodes' energy depletion, and

3.  build advanced sensing infrastructures.

There can be several routers supporting the same U-LLN; however, the number of routers is well below the number of sensing nodes.  The routers are generally not mobile, i.e. fixed to a random or pre-planned location.  Routers may, but generally do not, suffer from any form of (long-term) resource constraint, except that they need to be small and sufficiently cheap.
Routers differ from actuator and sensing nodes in that they neither control nor sense.

Some routers provide access to wider infrastructures, such as the Internet, and are named Low power and lossy network Border Routers (LBRs) in that context.  LBRs also serve as data sinks (e.g. they collect and process data from sensors) and sources (e.g. they forward instructions to actuators).

3.2.  Topology

Whilst millions of sensing nodes may very well be deployed in an urban area, they are likely to be associated with more than one network.  These networks may or may not communicate with one another.  The number of sensing nodes deployed in the urban environment in support of some applications is expected to be in the order of 10^2 to 10^7; this is still very large and unprecedented in current roll-outs.

Deployment of nodes is likely to happen in batches, e.g. boxes of hundreds to thousands of nodes arrive and are deployed.  The location of the nodes is random within given topological constraints, e.g. placement along a road, river, or at individual residences.

3.3.  Resource Constraints

The nodes are highly resource constrained, i.e. cheap hardware, low memory and no infinite energy source.  Different node powering mechanisms are available, such as:

1.  non-rechargeable battery;

2.  rechargeable battery with regular recharging (e.g. sunlight);

3.  rechargeable battery with irregular recharging (e.g. opportunistic energy scavenging);

4.  capacitive/inductive energy provision (e.g. passive Radio Frequency IDentification (RFID));

5.  always on (e.g. powered electricity meter).

In the case of a battery powered sensing node, the battery shelf life is usually in the order of 10 to 15 years, rendering network lifetime maximization with battery powered nodes beyond this lifespan useless.

The physical and electromagnetic distances between the three key elements, i.e. sensors, actuators, and routers, can generally be very large, i.e. from several hundred meters to one kilometer.  Not every field node is likely to reach the LBR in a single hop, thereby requiring suitable routing protocols which manage the information flow in an energy-efficient manner.

3.4.  Link Reliability

The links between the network elements are volatile due to the following set of non-exclusive effects:

1.  packet errors due to wireless channel effects;

2.  packet errors due to MAC (Medium Access Control) effects (e.g. collision);

3.  packet errors due to interference from other systems;

4.  link unavailability due to network dynamicity; etc.

The wireless channel causes the received power to drop below a given threshold in a random fashion, thereby causing detection errors in the receiving node.  The underlying effects are path loss, shadowing and fading.

Since the wireless medium is broadcast in nature, nodes within each other's communication range require suitable medium access control protocols which are capable of resolving any arising contention.  Some available protocols may not be able to prevent packets of neighboring nodes from colliding, possibly leading to a high Packet Error Rate (PER) and causing a link outage.

Furthermore, the outdoor deployment of U-LLNs also has implications for the interference temperature, and hence link reliability and range, if Industrial, Scientific and Medical (ISM) bands are to be used.  For instance, if the 2.4 GHz ISM band is used to facilitate communication between U-LLN nodes, then heavily loaded Wireless Local Area Network (WLAN) hot-spots may become a detrimental performance factor, leading to high PER and jeopardizing the functioning of the U-LLN.

Finally, nodes appearing and disappearing cause dynamics in the network which can yield link outages and changes of topology.
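As a non-normative illustration of how a routing protocol might track such lossy links, the following sketch maintains an ETX-like (Expected Transmission Count) estimate per neighbor from acknowledged and lost transmissions.  The class structure, the smoothing factor and the loss rate in the example are assumptions made for illustration only, not values or mechanisms defined by this document.

    class LinkEstimator:
        """Illustrative ETX-style link-quality estimator (non-normative).

        Keeps an exponentially weighted moving average (EWMA) of the
        delivery ratio observed towards one neighbor and derives an
        ETX-like metric (expected transmissions per delivered packet).
        """

        def __init__(self, alpha=0.9, initial_delivery_ratio=1.0):
            self.alpha = alpha                  # EWMA smoothing factor (assumed value)
            self.delivery_ratio = initial_delivery_ratio

        def record_transmission(self, acked):
            """Update the estimate after one transmission attempt."""
            sample = 1.0 if acked else 0.0
            self.delivery_ratio = (self.alpha * self.delivery_ratio
                                   + (1.0 - self.alpha) * sample)

        def etx(self):
            """Expected transmission count; grows as the link becomes lossy."""
            if self.delivery_ratio <= 0.0:
                return float("inf")
            return 1.0 / self.delivery_ratio

    # Example: a link with roughly 30% packet loss converges towards ETX ~ 1.4.
    if __name__ == "__main__":
        import random
        random.seed(1)
        link = LinkEstimator()
        for _ in range(500):
            link.record_transmission(acked=(random.random() > 0.3))
        print(round(link.etx(), 2))

Such a per-neighbor estimate is one possible input to the parameter constrained routing discussed in Section 6.2.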
4.  Urban LLN Application Scenarios

Urban applications represent a special segment of LLNs, with their own unique set of requirements.  To facilitate the requirements discussion in Section 6, this section lists a few typical, but not exhaustive, deployment problems and use cases of U-LLNs.

4.1.  Deployment of Nodes

Contrary to other LLN applications, deployment of nodes is likely to happen in batches out of a box.  Typically, hundreds to thousands of nodes are shipped by the manufacturer with pre-programmed functionalities, which are then rolled out by a service provider or subcontracted entities.  Prior to or after roll-out, the network needs to be ramped up.  This initialization phase may include, among others, allocation of addresses, (possibly hierarchical) roles in the network, synchronization, determination of schedules, etc.

If initialization is performed prior to roll-out, all nodes are likely to be in one another's 1-hop radio neighborhood.  Pre-programmed Media Access Control (MAC) and routing protocols may hence fail to function properly, thereby wasting a large amount of energy.  Whilst the major burden will be on resolving MAC conflicts, any proposed U-LLN routing protocol needs to cater for such a case.  For instance, 0-configuration and network address allocation need to be properly supported, etc.

After roll-out, nodes will have a finite set of one-hop neighbors, likely of low cardinality (in the order of 5 to 10).  However, some nodes may be deployed in areas where there are hundreds of neighboring devices.  In the resulting topology there may be regions where many (redundant) paths are possible through the network.  Other regions may be dependent on critical links to achieve connectivity with the rest of the network.  Any proposed LLN routing protocol ought to support the autonomous self-organization and self-configuration of the network at the lowest possible energy cost [Lu2007], where autonomy is understood to be the ability of the network to operate without external influence.  The result of such organization should be that each node or set of nodes is uniquely addressable so as to facilitate the set-up of schedules, etc.

Unless exceptionally needed, broadcast forwarding schemes are not advised in urban sensor networking environments.

4.2.  Association and Disassociation/Disappearance of Nodes

After the initialization phase and possibly some operational time, new nodes may be injected into the network, and existing nodes may be removed from the network.  The former might be because a removed node is replaced as part of maintenance, because more sensors are needed for denser readings/actuations, or because routing protocols report connectivity problems.  The latter might be because a node's battery is depleted, the node is removed for maintenance, the node is stolen or accidentally destroyed, etc.
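The following non-normative sketch illustrates one way a node's routing layer could keep track of such churn, distinguishing a neighbor that announces its departure ("phased-out", see Section 6.6) from one that silently disappears and is only detected by a liveness timeout.  The data structure, method names and timeout value are assumptions for the example, not mechanisms required by this document.

    import time

    # Illustrative neighbor-liveness tracker (assumed design, non-normative).
    # A neighbor that announces its departure is marked "phased-out"; a
    # neighbor that is no longer heard is declared "disappeared" after
    # LIVENESS_TIMEOUT seconds, so local repair can be triggered.

    LIVENESS_TIMEOUT = 3600.0   # assumed value, tied to the reporting interval

    class NeighborTable:
        def __init__(self):
            self._last_heard = {}    # node id -> timestamp of last received packet
            self._phased_out = set()

        def heard_from(self, node_id, now=None):
            self._last_heard[node_id] = now if now is not None else time.time()
            self._phased_out.discard(node_id)

        def announce_departure(self, node_id):
            """Node-initiated disassociation ("phased-out")."""
            self._phased_out.add(node_id)

        def disappeared(self, now=None):
            """Neighbors that vanished without prior notification."""
            now = now if now is not None else time.time()
            return [n for n, t in self._last_heard.items()
                    if n not in self._phased_out and now - t > LIVENESS_TIMEOUT]

A reconfiguration procedure, such as the one described in the following paragraph, would consume the list of disappeared neighbors, for example by informing the LBR(s).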
The protocol(s) hence should be able to convey information about malfunctioning nodes which may affect or jeopardize the overall routing efficiency, so that the self-organization and self-configuration capabilities of the sensor network can be solicited to facilitate the appropriate reconfiguration.  This information may, for example, include exact or relative geographical position, etc.  The reconfiguration may include the change of hierarchies, routing paths, packet forwarding schedules, etc.  Furthermore, to inform the LBR(s) of a node's arrival and association with the network, as well as freshly associated nodes about packet forwarding schedules, roles, etc., appropriate updating mechanisms should be supported.

4.3.  Regular Measurement Reporting

The majority of sensing nodes will be configured to report their readings on a regular basis.  The frequency of data sensing and reporting may be different but is generally expected to be fairly low, i.e. in the range of once per hour, per day, etc.  The ratio between data sensing and reporting frequencies will determine the memory and data aggregation capabilities required of the nodes.  Latency of an end-to-end delivery and acknowledgements of a successful data delivery may not be vital, as sensing outages can be observed at the LBR(s) - when, for instance, there is no reading arriving from a given sensor or cluster of sensors within a day.  In this case, a query can be launched to check upon the state and availability of a sensing node or sensing cluster.

The protocol(s) hence should be optimized to support a large number of highly directional unicast flows from the sensing nodes or sensing clusters towards an LBR, or highly directed multicast or anycast flows from the nodes towards multiple LBRs.

Route computation and selection may depend on the transmitted information, the frequency of reporting, the amount of energy remaining in the nodes, the recharging pattern of energy-scavenging nodes, etc.  For instance, temperature readings could be reported every hour via one set of battery powered nodes, whereas air quality indicators are reported only during daytime via nodes powered by solar energy.  More generally, entire routing areas may be avoided (e.g. at night) but heavily used during the day when nodes are scavenging energy from sunlight.

4.4.  Queried Measurement Reporting

Occasionally, network-external data queries can be launched by one or several LBRs.  For instance, it is desirable to know the level of pollution at a specific point or along a given road in the urban environment.  The queries' rates of occurrence are not regular but rather random, where heavy-tail distributions seem appropriate to model their behavior.  Query responses do not necessarily need to be reported back to the same LBR from which the query was launched.  Round-trip times, i.e. from the launch of a query from an LBR to the delivery of the measured data to an LBR, are of importance.  However, they are not very stringent: latencies should simply be sufficiently smaller than typical reporting intervals, for instance in the order of seconds or minutes.  The routing protocol(s) should consider the selection of paths with appropriate (e.g. latency) metrics to support queried measurement reporting.  To facilitate the query process, U-LLN network devices should support unicast and multicast routing capabilities.

The same approach is also applicable for schedule updates, provisioning of patches and upgrades, etc.  In this case, however, the provision of acknowledgements and the support of unicast, multicast, and anycast are of importance.
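As a concrete, non-normative illustration of selecting a path by a latency metric for queried reporting, the sketch below compares candidate routes against a per-query latency budget and falls back to the cheapest route that still meets it.  The route structure, the cost values and the budgets are assumptions made for the example, not values prescribed by this document.

    # Illustrative latency-aware path selection (assumed example, non-normative).
    # Each candidate route carries an estimated end-to-end latency and an
    # abstract energy cost; a query response prefers the cheapest route that
    # still meets the query's latency budget.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class CandidateRoute:
        next_hop: str
        est_latency_s: float   # estimated end-to-end latency in seconds
        energy_cost: float     # abstract energy cost of using this route

    def select_route(routes: List[CandidateRoute],
                     latency_budget_s: float) -> Optional[CandidateRoute]:
        eligible = [r for r in routes if r.est_latency_s <= latency_budget_s]
        if not eligible:
            return None  # no route meets the budget; the query may be retried
        return min(eligible, key=lambda r: r.energy_cost)

    # Example: a regular report could use the slow 9.0 s route, but a query
    # with a 2 s budget picks the faster, more energy-expensive alternative.
    routes = [CandidateRoute("A", est_latency_s=9.0, energy_cost=1.0),
              CandidateRoute("B", est_latency_s=1.2, energy_cost=3.5)]
    print(select_route(routes, latency_budget_s=2.0).next_hop)   # -> "B"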
4.5.  Alert Reporting

Rarely, the sensing nodes will measure an event which classifies as an alarm, where such a classification is typically done locally within each node by means of a pre-programmed or previously diffused threshold.  Note that on approaching the alert threshold level, nodes may wish to change their sensing and reporting cycles.  An alarm is likely to be registered by a plurality of sensing nodes, where the delivery of a single alert message with its location of origin suffices in most, but not all, cases.  One example of alert reporting is when the level of toxic gases rises above a threshold, whereupon the sensing nodes in the vicinity of this event report the danger.  Another example of alert reporting is when a recycling glass container - equipped with a sensor measuring its level of occupancy - reports that the container is full and hence needs to be emptied.

Routes clearly need to be unicast (towards one LBR) or multicast (towards multiple LBRs).  Delays and latencies are important; however, again, deliveries within seconds should suffice in most of the cases.

5.  Traffic Pattern

Unlike traditional ad hoc networks, the information flow in U-LLNs is highly directional.  There are three main flows to be distinguished:

1.  sensed information from the sensing nodes towards one or a subset of the LBR(s);

2.  query requests from the LBR(s) towards the sensing nodes;

3.  control information from the LBR(s) towards the actuators.

Some of the flows may need the reverse route for delivering acknowledgements.  Finally, in the future, some direct information flows between field devices, without involving the LBRs, may also occur.

Sensed data is likely to be highly correlated in space, time and observed events; an example of the latter is when temperature increases and humidity decreases as the day commences.  Data may be sensed and delivered at different rates, with both rates being typically fairly low, i.e. in the range of minutes, hours, days, etc.  Data may be delivered regularly according to a schedule or a regular query; it may also be delivered irregularly after an externally triggered query; it may also be triggered by a sudden network-internal event or alert.  Schedules may be driven by, for example, a smart-metering application where data is expected to be delivered every hour, or an environmental monitoring application where a battery powered node is expected to report its status at a specific time once a day.  Data delivery may trigger acknowledgements or maintenance traffic in the reverse direction.  The network hence needs to be able to adjust to the varying activity duty cycles, as well as to periodic and sporadic traffic.  Also, sensed data ought to be secured and locatable.

Some data delivery may have tight latency requirements, for example in a case such as a live meter reading for customer service in a smart-metering application, or in a case where a sensor reading response must arrive within a certain time in order to be useful.  The network should take into consideration that different application traffic may require different priorities in the selection of a route when traversing the network, and that some traffic may be more sensitive to latency.
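One possible, non-normative way to express such differentiation is to tag each class of application traffic with a forwarding priority and latency budget that route selection and queueing can consume.  The class names, priorities and budgets below are purely illustrative assumptions, not values defined by this document.

    # Illustrative mapping of U-LLN traffic classes to forwarding priorities
    # and latency budgets (all names and values are assumed for the example).

    TRAFFIC_CLASSES = {
        "alert":          {"priority": 0, "latency_budget_s": 5.0},
        "query_response": {"priority": 1, "latency_budget_s": 60.0},
        "regular_report": {"priority": 2, "latency_budget_s": 4 * 3600.0},
        "firmware_patch": {"priority": 3, "latency_budget_s": 24 * 3600.0},
    }

    def forwarding_order(queued_packets):
        """Order queued packets so that latency-sensitive traffic goes first."""
        return sorted(queued_packets,
                      key=lambda p: TRAFFIC_CLASSES[p["class"]]["priority"])

    queue = [{"class": "regular_report", "payload": "meter#17"},
             {"class": "alert", "payload": "gas threshold exceeded"}]
    print([p["class"] for p in forwarding_order(queue)])
    # -> ['alert', 'regular_report']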
A U-LLN should support occasional large-scale traffic flows from sensing nodes to LBRs, such as system-wide alerts.  In the example of an AMI U-LLN this could be in response to events such as a city-wide power outage.  In this scenario all powered devices in a large segment of the network may have lost power and are running off of a temporary 'last gasp' source such as a capacitor or small battery.  A node must be able to send its own alerts toward an LBR while continuing to forward traffic on behalf of other devices which are also experiencing an alert condition.  The network needs to be able to manage this sudden large traffic flow.  It may be useful for the routing layer to collaborate with the application layer to perform data aggregation, in order to reduce the total volume of a large traffic flow and make more efficient use of the limited energy available.

A U-LLN may also need to support efficient large-scale messaging to groups of actuators.  For example, an AMI U-LLN supporting a city-wide demand response system will need to efficiently broadcast demand response control information to a large subset of actuators in the system.

Some scenarios will require internetworking between the U-LLN and another network, such as a home network.  For example, an AMI application that implements a demand-response system may need to forward traffic from a utility, across the U-LLN, into a home automation network.  A typical use case would be to inform a customer of incentives to reduce demand during peaks, or to automatically adjust the thermostat of customers who have enrolled in such a demand management program.  Subsequent traffic may be triggered to flow back through the U-LLN to the utility.

6.  Requirements of Urban LLN Applications

Urban low power and lossy network applications have a number of specific requirements related to the set of operating conditions, as exemplified in the previous sections.

6.1.  Scalability

The large and diverse measurement space of U-LLN nodes - coupled with the typically large urban areas - will yield extremely large network sizes.  Current urban roll-outs are sometimes composed of more than one hundred nodes; future roll-outs, however, may easily reach numbers in the tens of thousands to millions.  One of the most important LLN routing protocol design criteria is hence scalability.

The routing protocol(s) MUST be capable of supporting the organization of a large number of sensing nodes into regions containing on the order of 10^2 to 10^4 sensing nodes each.

The routing protocol(s) MUST be scalable so as to accommodate a very large and increasing number of nodes without deteriorating selected performance parameters below configurable thresholds.  The routing protocol(s) SHOULD support the organization of a large number of nodes into regions of configurable size.

6.2.  Parameter Constrained Routing

Batteries in some nodes may deplete quicker than in others; the existence of one node for the maintenance of a routing path may not be as important as that of another node; battery scavenging methods may recharge the battery at regular or irregular intervals; some nodes may have a constant power source; some nodes may have a larger memory and are hence able to store more neighborhood information; some nodes may have a stronger CPU and are hence able to perform more sophisticated data aggregation methods; etc.

To this end, the routing protocol(s) MUST support parameter constrained routing, where examples of such parameters (CPU, memory size, battery level, etc.) have been given in the previous paragraph.

Routing within urban sensor networks SHOULD require the U-LLN nodes to dynamically compute, select and install different paths towards the same destination, depending on the nature of the traffic.  From this perspective, such nodes SHOULD inspect the contents of the traffic payload for making routing and forwarding decisions: for example, the analysis of the traffic payload should encourage the enforcement of forwarding policies based upon aggregation capabilities for the sake of efficiency.
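To make the notion of parameter constrained routing concrete, the following non-normative sketch combines node attributes such as remaining battery, memory headroom and link quality into a single path cost.  The attribute names and weights are illustrative assumptions chosen for the example, not a metric mandated by this document.

    from dataclasses import dataclass

    # Illustrative composite routing cost for parameter constrained routing.
    # Weights and attribute names are assumptions chosen for the example.

    @dataclass
    class NodeState:
        battery_level: float   # 0.0 (empty) .. 1.0 (full), 1.0 if mains powered
        free_memory: float     # 0.0 .. 1.0 fraction of routing-table headroom
        link_etx: float        # expected transmission count towards this node

    W_BATTERY, W_MEMORY, W_LINK = 4.0, 1.0, 2.0   # assumed weights

    def hop_cost(node: NodeState) -> float:
        """Higher cost for depleted, memory-poor, or poorly connected nodes."""
        return (W_BATTERY * (1.0 - node.battery_level)
                + W_MEMORY * (1.0 - node.free_memory)
                + W_LINK * node.link_etx)

    def path_cost(path):
        return sum(hop_cost(n) for n in path)

    # A path through a mains-powered router can be preferred over a path
    # through an almost-depleted battery node, even if the latter is shorter.
    battery_node = NodeState(battery_level=0.1, free_memory=0.5, link_etx=1.2)
    mains_router = NodeState(battery_level=1.0, free_memory=0.9, link_etx=1.5)
    print(path_cost([battery_node]) > path_cost([mains_router]))   # -> True

The weighting could itself be adjusted per traffic class, so that an alert tolerates a more energy-expensive path than a regular report.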
6.3.  Support of Autonomous and Alien Configuration

With the large number of nodes, manually configuring and troubleshooting each node is not efficient.  The scale and the large number of possible topologies that may be encountered in the U-LLN encourage the development of automated management capabilities that may (partly) rely upon self-organizing techniques.  The network is expected to self-organize and self-configure according to some previously defined rules and protocols, as well as to support externally triggered configurations (for instance through a commissioning tool which may facilitate the organization of the network at a minimum energy cost).

To this end, the routing protocol(s) MUST provide a set of features including 0-configuration at network ramp-up, (network-internal) self-organization and configuration due to topological changes, and the ability to support (network-external) patches and configuration updates.  For the latter, the protocol(s) MUST support multicast and anycast addressing.  The protocol(s) SHOULD also support the formation and identification of groups of field devices in the network.

The routing protocol(s) SHOULD be able to dynamically adapt, e.g. through the application of appropriate routing metrics, to ever-changing conditions of communication (possible degradation of QoS, variable nature of the traffic (real time vs. non real time, sensed data vs. alerts), node mobility, a combination thereof, etc.).

The routing protocol(s) SHOULD be able to dynamically compute, select and possibly optimize the (multiple) path(s) that will be used by the participating devices to forward the traffic towards the actuators and/or an LBR according to the service-specific and traffic-specific QoS, traffic engineering and routing security policies that will have to be enforced at the scale of a routing domain (that is, a set of networking devices administered by a globally unique entity), or a region of such a domain (e.g. a metropolitan area composed of clusters of sensors).
6.4.  Support of Highly Directed Information Flows

The reporting of data readings by a large number of spatially dispersed nodes towards a few LBRs will lead to highly directed information flows.  For instance, a suitable addressing scheme can be devised which facilitates the data flow.  Also, as one gets closer to the LBR, the traffic concentration increases, which may lead to high load imbalances in node usage.

To this end, the routing protocol(s) SHOULD exploit the presence of a large number of highly directed traffic flows to facilitate scalability and parameter constrained routing.

The routing protocol MUST be able to accommodate traffic bursts by dynamically computing and selecting multiple paths towards the same destination.

6.5.  Support of Multicast, Anycast, and Implementation of Groupcast

Some urban sensing systems require low-level addressing of a group of nodes in the same subnet, or of a node representative of a group of nodes, without any prior creation of multicast groups, by simply carrying a list of recipients in the subnet [I-D.ietf-roll-home-routing-reqs].

Routing protocols activated in urban sensor networks MUST support unicast (traffic is sent to a single field device), multicast (traffic is sent to a set of devices that are subscribed to the same multicast group), and anycast (where multiple field devices are configured to accept traffic sent on a single IP anycast address) transmission schemes.  Routing protocols activated in urban sensor networks SHOULD accommodate "groupcast" forwarding schemes, where traffic is sent to a set of devices that implicitly belong to the same group/cast.

The support of unicast, groupcast, multicast, and anycast also has an implication on the addressing scheme, but this is beyond the scope of this document, which focuses on the routing requirements aspects.

Note: with IP multicast, signaling mechanisms are used by a receiver to join a group, and the sender does not know the receivers of the group.  What is required here is the ability to address a group of receivers known by the sender, even if the receivers do not need to know that they have been grouped by the sender (since requesting each individual node to join a multicast group would be very energy-consuming).  A sketch of such sender-defined group addressing is given at the end of this section.

The network SHOULD support internetworking when identical protocols are used, while giving attention to the routing security implications of interfacing, for example, a home network with a utility U-LLN.  The network may support the ability to interact with another network using a different protocol, for example by supporting route redistribution.
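The following non-normative sketch illustrates the groupcast idea referred to in the note above: the sender enumerates the recipients in the packet itself, each forwarder delivers locally when addressed, and forwarding continues only while unserved recipients remain, so receivers never have to join a multicast group.  The packet format and function names are illustrative assumptions, not a forwarding scheme defined by this document.

    # Illustrative "groupcast" handling (assumed design, non-normative): the
    # sender lists the recipients explicitly in the packet, so receivers do
    # not need to join any multicast group beforehand.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GroupcastPacket:
        payload: bytes
        recipients: List[str] = field(default_factory=list)

    def handle_groupcast(node_id: str, pkt: GroupcastPacket):
        """Deliver locally if addressed, then decide whether to keep forwarding."""
        deliver_locally = node_id in pkt.recipients
        if deliver_locally:
            pkt.recipients.remove(node_id)          # this recipient is now served
        keep_forwarding = len(pkt.recipients) > 0   # others still need the packet
        return deliver_locally, keep_forwarding

    pkt = GroupcastPacket(b"demand-response: reduce load", ["meter-7", "meter-9"])
    print(handle_groupcast("meter-7", pkt))   # -> (True, True)
    print(handle_groupcast("meter-9", pkt))   # -> (True, False)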
6.6.  Network Dynamicity

Although mobility is assumed to be low in urban LLNs, network dynamicity due to node association, disassociation and disappearance, as well as long-term link perturbations, is not negligible.  This in turn impacts reorganization and reconfiguration convergence as well as routing protocol convergence.

To this end, local network dynamics SHOULD NOT require the entire network to be re-organized or re-configured; instead, the network SHOULD be locally optimized to cater for the encountered changes.  The routing protocol(s) SHOULD support appropriate mechanisms in order to be informed of the association, disassociation, and disappearance of nodes.  The routing protocol(s) SHOULD support appropriate updating mechanisms in order to be informed of changes in connectivity.  The routing protocol(s) SHOULD use this information to initiate protocol-specific mechanisms for reorganization and reconfiguration as necessary to maintain overall routing efficiency.  Convergence and route establishment times SHOULD be significantly lower than the smallest reporting interval.

A differentiation SHOULD be made between node disappearance, where the node disappears without prior notification, and user- or node-initiated disassociation ("phased-out"), where the node has enough time to inform the network about its pending removal.

6.7.  Latency

With the exception of alert reporting solutions, and to a certain extent queried reporting, U-LLNs are delay tolerant as long as the information arrives within a fraction of the smallest reporting interval, e.g. a few seconds if reporting is done every 4 hours.

The routing protocol(s) SHOULD also support the ability to route according to different metrics (one of which could, for example, be latency).

7.  Security Considerations

As with every network, U-LLNs are exposed to routing security threats that need to be addressed.  The wireless and distributed nature of these networks increases the spectrum of potential routing security threats.  This is further amplified by the resource constraints of the nodes, which prevent resource-intensive routing security approaches from being deployed.  A viable routing security approach SHOULD be sufficiently lightweight that it may be implemented across all nodes in a U-LLN.  These issues require special attention during the design process, so as to facilitate a commercially attractive deployment.

Secure communication in a wireless network encompasses three main elements, i.e. confidentiality (encryption of data), integrity (correctness of data), and authentication (legitimacy of data).

Authentication can, for example, be violated if external sources insert incorrect data packets; integrity can, for example, be violated if nodes start to break down and hence commence measuring and relaying data incorrectly.  Nonetheless, some sensor readings as well as the actuator control signals need to be confidential.

The U-LLN MUST deny all routing services to any node which has not been authenticated to the U-LLN and authorized for the use of routing services.

The U-LLN MUST be protected against attempts to inject false or modified packets.  For example, an attacker SHOULD be prevented from manipulating or disabling the routing function by compromising routing update messages.  Moreover, it SHOULD NOT be possible to coerce the network into routing packets which have been modified in transit.  To this end the routing protocol(s) MUST support message integrity.

Further example routing security issues which may arise are the abnormal behavior of nodes which exhibit an egoistic conduct, such as not obeying network rules, or forwarding no or false packets.  Other important issues may arise in the context of Denial of Service (DoS) attacks, malicious address space allocations, advertisement of variable addresses, a wrong neighborhood, external attacks aimed at injecting dummy traffic to drain the network power, etc.
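As a minimal, non-normative sketch of the message integrity requirement above, a node could attach a message integrity code (MIC) to each routing update and accept only updates whose MIC verifies.  The use of a pre-shared key and HMAC-SHA-256 here is an illustrative assumption; key management and algorithm selection are outside the scope of this document.

    import hmac
    import hashlib
    from typing import Optional

    # Minimal sketch of message integrity for routing updates (assumed design).
    # A pre-shared key and HMAC-SHA-256 are illustrative choices only.

    def protect_update(key: bytes, routing_update: bytes) -> bytes:
        """Append a message integrity code (MIC) to an outgoing update."""
        mic = hmac.new(key, routing_update, hashlib.sha256).digest()
        return routing_update + mic

    def accept_update(key: bytes, message: bytes) -> Optional[bytes]:
        """Return the routing update if its MIC verifies, else None (discard)."""
        update, mic = message[:-32], message[-32:]
        expected = hmac.new(key, update, hashlib.sha256).digest()
        return update if hmac.compare_digest(mic, expected) else None

    key = b"pre-shared-network-key"   # illustrative only
    msg = protect_update(key, b"routing update: prefix X reachable via node 42")
    assert accept_update(key, msg) is not None
    tampered = msg[:-1] + bytes([msg[-1] ^ 0x01])
    assert accept_update(key, tampered) is None   # modified in transit -> rejected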
The properties of self-configuration and self-organization which are desirable in a U-LLN introduce additional routing security considerations.  Mechanisms MUST be in place to deny any rogue node which attempts to take advantage of self-configuration and self-organization procedures.  Such attacks may attempt, for example, to cause denial of service, drain the energy of power constrained devices, or hijack the routing mechanism.  A node MUST authenticate itself to a trusted node that is already associated with the U-LLN before any self-configuration or self-organization is allowed to proceed.  A node that has already authenticated and associated with the U-LLN MUST deny, to the maximum extent possible, the allocation of resources to any unauthenticated peer.  The routing protocol(s) MUST deny service to any node which has not clearly established trust with the U-LLN.

Consideration SHOULD be given to cases where the U-LLN may interface with other networks such as a home network.  The U-LLN SHOULD NOT interface with any external network which has not established trust.  The U-LLN SHOULD be capable of limiting the resources granted in support of an external network so as not to be vulnerable to denial of service.

With low computation power and scarce energy resources, U-LLN nodes may not be able to resist attacks from high-power malicious nodes (e.g. laptops and strong radios).  However, the amount of damage caused to the whole network SHOULD be commensurate with the number of nodes physically compromised.  For example, an intruder taking control over a single node SHOULD NOT have total access to, or be able to completely deny service to, the whole network.

In general, the routing protocol(s) SHOULD support the implementation of routing security best practices across the U-LLN.  Such an implementation ought to include defense against, for example, eavesdropping, replay, message insertion, modification, and man-in-the-middle attacks.

The choice of the routing security solutions will have an impact on the routing protocol(s).  To this end, routing protocol(s) proposed in the context of U-LLNs MUST support integrity measures and SHOULD support confidentiality (routing security) measures.

8.  IANA Considerations

This document makes no request of IANA.

9.  Acknowledgements

The in-depth feedback of JP Vasseur, Cisco, Jonathan Hui, Arch Rock, and Iain Calder is greatly appreciated.

10.  References

10.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2.  Informative References

[I-D.ietf-roll-home-routing-reqs]  Brandt, A., Buron, J., and G. Porcu, "Home Automation Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-home-routing-reqs-03 (work in progress), September 2008.

[I-D.ietf-roll-indus-routing-reqs]  Pister, K., Thubert, P., Dwars, S., and T. Phinney, "Industrial Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-indus-routing-reqs-01 (work in progress), July 2008.

[I-D.martocci-roll-building-routing-reqs]  Martocci, J., Riou, N., Mil, P., and W. Vermeylen, "Building Automation Routing Requirements in Low Power and Lossy Networks", draft-martocci-roll-building-routing-reqs-01 (work in progress), October 2008.
[I-D.vasseur-roll-terminology]  Vasseur, J., "Terminology in Low power And Lossy Networks", draft-vasseur-roll-terminology-02 (work in progress), September 2008.

[Lu2007]  Lu, J.L., Valois, F., Barthel, D., and M. Dohler, "FISCO: A Fully Integrated Scheme of Self-Configuration and Self-Organization for WSN", IEEE WCNC 2007, Hong Kong, China, 11-15 March 2007, pp. 3370-3375.

[RFC1546]  Partridge, C., Mendez, T., and W. Milliken, "Host Anycasting Service", RFC 1546, November 1993.

[RFC4291]  Hinden, R. and S. Deering, "IP Version 6 Addressing Architecture", RFC 4291, February 2006.

Authors' Addresses

Mischa Dohler (editor)
CTTC
Parc Mediterrani de la Tecnologia, Av. Canal Olimpic S/N
08860 Castelldefels, Barcelona
Spain

Email: mischa.dohler@cttc.es

Thomas Watteyne (editor)
CITI-Lab, INSA-Lyon, INRIA A4RES
21 avenue Jean Capelle
69621 Lyon
France

Email: thomas.watteyne@ieee.org

Tim Winter (editor)
Eka Systems
20201 Century Blvd. Suite 250
Germantown, MD 20874
USA

Email: tim.winter@ekasystems.com

Dominique Barthel (editor)
France Telecom R&D
28 Chemin du Vieux Chene
38243 Meylan Cedex
France

Email: Dominique.Barthel@orange-ftgroup.com

Christian Jacquenet
France Telecom R&D
4 rue du Clos Courtel BP 91226
35512 Cesson Sevigne
France

Email: christian.jacquenet@orange-ftgroup.com

Giyyarpuram Madhusudan
France Telecom R&D
28 Chemin du Vieux Chene
38243 Meylan Cedex
France

Email: giyyarpuram.madhusudan@orange-ftgroup.com

Gabriel Chegaray
France Telecom R&D
28 Chemin du Vieux Chene
38243 Meylan Cedex
France

Email: gabriel.chegaray@orange-ftgroup.com

Full Copyright Statement

Copyright (C) The IETF Trust (2008).

This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights.  Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.

Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard.  Please address the information to the IETF at ietf-ipr@ietf.org.