Networking Working Group                                  K. Pister, Ed.
Internet-Draft                                             Dust Networks
Intended status: Informational                           P. Thubert, Ed.
Expires: December 7, 2009                                  Cisco Systems
                                                                S. Dwars
                                                                   Shell
                                                              T. Phinney
                                                            June 5, 2009

     Industrial Routing Requirements in Low Power and Lossy Networks
                  draft-ietf-roll-indus-routing-reqs-06

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on December 7, 2009.

Copyright Notice

Copyright (c) 2009 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents in effect on the date of publication of this document (http://trustee.ietf.org/license-info).  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.

Abstract

The wide deployment of lower-cost wireless devices will significantly improve the productivity and safety of the plants while increasing the efficiency of the plant workers by extending the information set available about the plant operations.  The aim of this document is to analyze the functional requirements for a routing protocol used in industrial Low power and Lossy Networks (LLN) of field devices.

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1. Introduction
   2. Terminology
   3. Overview
      3.1. Applications and Traffic Patterns
      3.2. Network Topology of Industrial Applications
           3.2.1. The Physical Topology
           3.2.2. Logical Topologies
   4. Requirements related to Traffic Characteristics
      4.1. Service Requirements
      4.2. Configurable Application Requirement
      4.3. Different Routes for Different Flows
   5. Reliability Requirements
   6. Device-Aware Routing Requirements
   7. Broadcast/Multicast requirements
   8. Protocol Performance requirements
   9. Mobility requirements
   10. Manageability requirements
   11. Antagonistic requirements
   12. Security Considerations
   13. IANA Considerations
   14. Acknowledgements
   15. References
       15.1. Normative References
       15.2. Informative References
       15.3. External Informative References
   Authors' Addresses

1. Introduction

Information Technology (IT) is already, and increasingly will be, applied to industrial Control Technology (CT) in application areas where those IT technologies can be constrained sufficiently, by Service Level Agreements (SLA) or other modest changes, that they are able to meet the operational needs of industrial CT.  When that happens, the CT benefits from the large intellectual, experiential and training investment that has already occurred in those IT precursors.
One can conclude that future reuse of additional IT protocols for industrial CT will continue to occur due to the significant intellectual, experiential and training economies which result from that reuse.

Following that logic, many vendors are already extending or replacing their local field-bus technology with Ethernet and IP-based solutions.  Examples of this evolution include CIP EtherNet/IP, Modbus/TCP, Foundation Fieldbus HSE, PROFInet and Invensys/Foxboro FOXnet.  At the same time, wireless, low power field devices are being introduced that facilitate a significant increase in the amount of information which industrial users can collect and the number of control points that can be remotely managed.

IPv6 appears as a core technology at the conjunction of both trends, as illustrated by the current [ISA100.11a] industrial Wireless Sensor Networking (WSN) specification, where layers 1-4 technologies developed for purposes other than industrial CT -- IEEE 802.15.4 PHY and MAC, 6LoWPAN and IPv6, and UDP -- are adapted to industrial CT use.  But due to the lack of open standards for routing in Low power and Lossy Networks (LLN), even ISA100.11a leaves the routing operation to proprietary methods.

The aim of this document is to analyze the requirements from the industrial environment for a routing protocol in Low power and Lossy Networks (LLN) based on IP version 6 to power the next generation of Control Technology.

2. Terminology

This document employs terminology defined in the ROLL terminology document [I-D.ietf-roll-terminology].  This document also refers to industrial standards:

HART: "Highway Addressable Remote Transducer", a group of specifications for industrial process and control devices administered by the HART Foundation (see [HART]).  The latest version of the specifications is HART7, which includes the additions for WirelessHART.

ISA: "International Society of Automation".  ISA is an ANSI accredited standards-making society.  ISA100 is an ISA committee whose charter includes defining a family of standards for industrial automation.  [ISA100.11a] is a working group within ISA100 that is working on a standard for monitoring and non-critical process control applications.

3. Overview

Wireless, low-power field devices enable industrial users to significantly increase the amount of information collected and the number of control points that can be remotely managed.  The deployment of these wireless devices will significantly improve the productivity and safety of the plants while increasing the efficiency of the plant workers.  IPv6 is perceived as a key technology to provide the scalability and interoperability that are required in that space and is increasingly present in standards and products under development and in early deployments.

Cable is perceived as a more proven, safer technology, and existing, operational deployments are very stable over time.  For these reasons, it is not expected that wireless will replace wire in the foreseeable future; the consensus in the industrial space is rather that wireless will tremendously augment the scope and benefits of automation by enabling the control of devices that were not connected in the past for reasons of cost and/or deployment complexities.
But for LLNs to be adopted in the industrial environment, the wireless network needs to have three qualities: low power, high reliability, and easy installation and maintenance.  The routing protocol used for low power and lossy networks (LLN) is important in fulfilling these goals.

Industrial automation is segmented into two distinct application spaces, known as "process" or "process control" and "discrete manufacturing" or "factory automation".  In industrial process control, the product is typically a fluid (oil, gas, chemicals ...).  In factory automation or discrete manufacturing, the products are individual elements (screws, cars, dolls).  While there is some overlap of products and systems between these two segments, they are surprisingly separate communities.  The specifications targeting industrial process control tend to have more tolerance for network latency than what is needed for factory automation.

Irrespective of this distinction between 'process' and 'discrete' plants, both plant types will have similar needs for automating the collection of data that used to be collected manually, or was not collected before.  Examples are wireless sensors that report the state of a fuse, the state of a luminary, HVAC status, vibration levels on pumps, man-down conditions, and so on.

Other novel application arenas that equally apply to both 'process' and 'discrete' involve mobile sensors that roam in and out of plants, such as active sensor tags on containers or vehicles.

Some, if not all, of these applications will need to be served by the same low power and lossy wireless network technology.  This may mean several disconnected, autonomous LLNs connecting to multiple hosts, but sharing the same ether.  Interconnecting such networks, if only to supervise channel and priority allocations, or to fully synchronize, or to share path capacity within a set of physical network components, may be desired, or may not be desired for practical reasons, such as cyber security concerns in relation to plant safety and integrity.

All application spaces desire battery-operated networks of hundreds of sensors and actuators communicating with LLN access points.  In an oil refinery, the total number of devices might exceed one million, but the devices will be clustered into smaller networks that in most cases interconnect and report to an existing plant network infrastructure.

Existing wired sensor networks in this space typically use communication protocols with low data rates, from 1,200 baud (e.g. wired HART) to the one to two hundred kbps range for most of the others.  The existing protocols are often master/slave with command/response.

3.1. Applications and Traffic Patterns

The industrial market classifies process applications into three broad categories and six classes.

o Safety

  * Class 0: Emergency action - Always a critical function

o Control

  * Class 1: Closed loop regulatory control - Often a critical function

  * Class 2: Closed loop supervisory control - Usually non-critical function

  * Class 3: Open loop control - Operator takes action and controls the actuator (human in the loop)

o Monitoring

  * Class 4: Alerting - Short-term operational effect (for example event-based maintenance)

  * Class 5: Logging and downloading / uploading - No immediate operational consequence (e.g., history collection, sequence-of-events, preventive maintenance)

Safety-critical functions affect the basic safety integrity of the plant.  These normally dormant functions kick in only when process control systems, or their operators, have failed.  By design and by regular-interval inspection, they have a well-understood probability of failure on demand in the range of typically once per 10-1000 years.

In-time delivery of messages becomes more relevant as the class number decreases.

Note that for a control application, the jitter is just as important as latency and has the potential to destabilize control algorithms.

Industrial users are interested in deploying wireless networks for the monitoring classes 4 and 5, and in the non-critical portions of classes 3 and 2.

Classes 4 and 5 also include asset monitoring and tracking, which include equipment monitoring and are essentially separate from process monitoring.  An example of equipment monitoring is the recording of motor vibrations to detect bearing wear.  However, similar sensors detecting excessive vibration levels could be used as safeguarding loops that immediately initiate a trip, and thus end up being class 0.

In the near future, most LLN systems in industrial automation environments will be for low-frequency data collection.  Packets containing samples will be generated continuously, and 90% of the market is covered by packet rates of between 1/s and 1/hour, with the average under 1/min.  In industrial process applications, these sensors measure temperature, pressure, fluid flow, tank level, and corrosion.  Some sensors are bursty, such as vibration monitors that may generate and transmit tens of kilobytes (hundreds to thousands of packets) of time-series data at reporting rates of minutes to days.

Almost all of these sensors will have built-in microprocessors that may detect alarm conditions.  Time-critical alarm packets are expected to be granted a lower latency than periodic sensor data streams.

Some devices will transmit a log file every day, again with typically tens of kilobytes of data.  For these applications there is very little "downstream" traffic coming from the LLN access point and traveling to particular sensors.  During diagnostics, however, a technician may be investigating a fault from a control room and expect to have "low" latency (human tolerable) in a command/response mode.

Low-rate control, often with a "human in the loop" (also referred to as "open loop"), is implemented via communication to a control room because that's where the human in the loop will be.
The sensor data makes its way through the LLN access point to the centralized controller, where it is processed; the operator sees the information and takes action, and the control information is then sent out to the actuator node in the network.

In the future, it is envisioned that some open loop processes will be automated (closed loop) and packets will flow over local loops and not involve the LLN access point.  These closed loop controls for non-critical applications will be implemented on LLNs.  Non-critical closed loop applications have a latency requirement that can be as low as 100 ms, but many control loops are tolerant of latencies above 1 s.

More likely, though, is that loops will be closed entirely in the field, and in such a case having wireless links within the control loop does not usually add real value.  Most control loops have sensors and actuators within such proximity that a wire between them remains the most sensible option from an economic point of view.  This 'control in the field' architecture is already common practice with wired field busses.  An 'upstream' wireless link would only be used to influence the in-field controller settings, and to occasionally capture diagnostics.  Even though the link back to a control room might be wireless, this architecture reduces the tight latency and availability requirements for the wireless links.

Closing loops in the field:

o does not prevent the same loop from being closed through a remote multi-variable controller during some modes of operation, while being closed directly in the field during other modes of operation (e.g., fallback, or when timing is more critical)

o does not imply that the loop will be closed with a wired connection, or that the wired connection is more energy efficient even when it exists as an alternate to the wireless connection.

A realistic future scenario is for a field device with a battery or ultra-capacitor power storage to have both wireless and unpowered wired communications capability (e.g., galvanically isolated RS-485), where the wireless communication is more flexible and, for local loop operation, more energy efficient, and the wired communication capability serves as a backup interconnect among the loop elements, but without a wired connection back to the operations center blockhouse.  In other words, the loop elements are interconnected through wiring to a nearby junction box, but the 2 km home-run link from the junction box to the control center does not exist.

When wireless communication conditions are good, devices use wireless for loop interconnect, and either one wireless device reports alarms and other status to the control center for all elements of the loop or each element reports independently.  When wireless communications are sporadic, the loop interconnect uses the self-powered galvanically-isolated RS-485 link and one of the devices with good wireless communications to the control center serves as a router for those devices which are unable to contact the control center directly.
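
The following non-normative Python sketch illustrates the fallback behavior described above.  All names, thresholds and data structures (RSSI_THRESHOLD_DBM, select_loop_interconnect, the member records) are assumptions introduced for illustration only and are not taken from any standard or product.

   # Illustrative sketch only: choosing between the wireless loop
   # interconnect and the RS-485 backup, and electing a reporting router.
   RSSI_THRESHOLD_DBM = -85       # assumed "good enough" link quality
   MAX_MISSED_KEEPALIVES = 3      # assumed threshold for "sporadic" links

   def select_loop_interconnect(rssi_dbm, missed_keepalives):
       """Choose the medium used to interconnect the loop elements."""
       if rssi_dbm >= RSSI_THRESHOLD_DBM and missed_keepalives < MAX_MISSED_KEEPALIVES:
           return "wireless"      # flexible and, for local loops, more energy efficient
       return "rs485-backup"      # galvanically isolated wired fallback

   def select_reporting_router(loop_members):
       """Pick the loop member with the best wireless path to the control
       center to relay alarms and status for members that cannot reach it."""
       reachable = [m for m in loop_members if m["reaches_control_center"]]
       if not reachable:
           return None            # nobody can report; handle locally
       return max(reachable, key=lambda m: m["rssi_dbm"])

   members = [
       {"name": "LT-101", "rssi_dbm": -92, "reaches_control_center": False},
       {"name": "PT-102", "rssi_dbm": -70, "reaches_control_center": True},
       {"name": "TT-103", "rssi_dbm": -95, "reaches_control_center": False},
   ]
   print(select_loop_interconnect(-95, 4))            # -> 'rs485-backup'
   print(select_reporting_router(members)["name"])    # -> 'PT-102'
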

The above approach is particularly attractive for large storage tanks in tank farms, where devices may not all have good wireless visibility of the control center, and where a home-run cable from the tank to the control center is undesirable due to the electro-potential differences between the tank location and the distant control center that arise during lightning storms.

In fast control, tens of milliseconds of latency is typical.  In many of these systems, if a packet does not arrive within the specified interval, the system enters an emergency shutdown state, often with substantial financial repercussions.  For a one-second control loop in a system with a mean time between shutdowns target of 30 years, the latency requirement implies nine 9s of reliability.  Given such exposure, given the intrinsic vulnerability of wireless link availability, and given the emergence of control-in-the-field architectures, most users tend not to aim for fast closed loop control with wireless links within that fast loop.

3.2. Network Topology of Industrial Applications

Although network topology is difficult to generalize, the majority of existing applications can be met by networks of 10 to 200 field devices and a maximum of twenty hops.  It is assumed that the field devices themselves will provide routing capability for the network, and additional repeaters/routers will not be required in most cases.

For the vast majority of industrial applications, the traffic is mostly composed of real-time publish/subscribe sensor data, also referred to as buffered data, flowing from the field devices over an LLN towards one or more sinks.  Increasingly over time, these sinks will be a part of a backbone, but today they are often fragmented and isolated.

The wireless sensor network is an LLN of field devices for which two logical roles are defined: the field routers and the non-routing devices.  It is acceptable and even probable that the repartition of the roles across the field devices changes over time to balance the cost of the forwarding operation amongst the nodes.

In order to scale a control network in terms of density, one possible architecture is to deploy a backbone as a canopy that aggregates multiple smaller LLNs.  The backbone is a high-speed infrastructure network that may interconnect multiple WSNs through backbone routers.  Infrastructure devices can be connected to the backbone.  A gateway / manager that interconnects the backbone to the plant network of the corporate network can be viewed as collapsing the backbone and the infrastructure devices into a single device that operates all the required logical roles.  The backbone is likely to become an option in the industrial network.

Typically, such backbones interconnect to the 'legacy' wired plant infrastructure, the plant network, also known as the 'Process Control Domain', the PCD.  These plant automation networks are domain-wise segregated from the office network or office domain (OD), which in itself is typically segregated from the Internet.

Sinks for LLN sensor data reside on the plant network (PCD), on the business network (OD), and on the Internet.
Applications close to existing plant automation, such as wired process control and monitoring systems running on fieldbusses, that require high availability and low latencies, and that are managed by 'Control and Automation' departments, typically reside on the PCD.  Other applications such as automated corrosion monitoring, cathodic protection voltage verification, or machine condition (vibration) monitoring, where one sample per week is considered oversampling, would more likely deliver their sensor readings in the office domain.  Such applications are 'owned' by, e.g., maintenance departments.

Yet other applications, like third-party-maintained luminaries or vendor-managed inventory systems where a supplier of chemicals needs access to tank level readings at a customer's site, will be best served with direct Internet connectivity all the way to the sensors at that site.  Temporary 'babysitting sensors' deployed for just a few days, say during startup or troubleshooting, or for ad-hoc measurement campaigns for R&D purposes, are other examples where the Internet would be the domain where wireless sensor data lands, and where other domains such as office and plant should preferably be circumvented if quick deployment without potential impact on plant safety integrity is required.

This multiple-domain, multiple-application connectivity creates a significant challenge.  Many different applications will all share the same medium, the ether, within the fence, preferably sharing the same frequency bands and the same protocols, preferably synchronized to ease co-existence, yet logically segregated to avoid the creation of intolerable short cuts between existing wired domains.

Given this challenge, LLNs are best treated as all sitting on yet another segregated domain, separate from all other wired domains where conventional security is organized by perimeter.  Moving away from the traditional perimeter-security mindset means moving towards stronger end-device identity authentication, so that LLN access points can split the various wireless data streams and interconnect back to the appropriate domain, depending on the identity of the message originators and the trust that the gateways have established in their authenticity.

Similar considerations are to be given to how multiple applications may or may not be allowed to share routing devices and their potentially redundant bandwidth within the network.  Challenges here are to balance available capacity, required latencies, expected priorities, and, last but not least, available (battery) energy within the routing devices.

3.2.1. The Physical Topology

There is no specific physical topology for an industrial process control network.

One extreme example is a multi-square-kilometer refinery where isolated tanks, some of them with power but most with no backbone connectivity, compose a farm that spans the surface of the plant.  A few hundred field devices are deployed to ensure global coverage using a wireless self-forming self-healing mesh network that might be 5 to 10 hops across.  Local feedback loops and mobile workers tend to be only one or two hops.  The backbone is in the refinery proper, many hops away.  Even there, powered infrastructure is also typically several hops away.
In that case, hopping to/from the powered infrastructure may often be more costly than the direct route.

In the opposite extreme case, the backbone network spans all the nodes and most nodes are in direct sight of one or more backbone routers.  Most communication between field devices and infrastructure devices, as well as field device to field device communication, occurs across the backbone.  From afar, this model resembles the WiFi ESS (Extended Service Set).  But from a layer 3 perspective, the issues are the default (backbone) router selection and the routing inside the backbone, whereas the radio hop towards the field device is in fact a simple local delivery.

   ---------+----------------------------
            |      Plant Network
            |
         +-----+
         |     | Gateway             M : Mobile device
         |     |                     o : Field device
         +-----+
            |
            |      Backbone
            +--------------------+------------------+
            |                    |                  |
         +-----+              +-----+            +-----+
         |     | Backbone     |     | Backbone   |     | Backbone
         |     | router       |     | router     |     | router
         +-----+              +-----+            +-----+
            o    o   o  o  o      o   o  o  o  o     o  o  o
         o o   o  o  o  o  o   o   o  o  o  o  o   o  o  o  o  o
        o  o o  o  o  o   o  o   o  o  o  o  M  o  o  o  o   o
          o  o M  o  o  o   o  o  o  o  o  o  o  o  o  o
             o   o o     o      o  o  o    o   o   o
                  o      o           o  o       o
                               LLN

            Figure 1: Backbone-based Physical Topology

An intermediate case is illustrated in Figure 1 with a backbone that spans the Wireless Sensor Network in such a fashion that any WSN node is only a few wireless hops away from the nearest Backbone Router.  WSN nodes are expected to organize into self-forming, self-healing, self-optimizing logical topologies that enable leveraging the backbone when it is most efficient to do so.

It must be noted that the routing function is expected to be so simple that any field device could assume the role of a router, depending on the self-discovery of the topology and the power status of the neighbors.  On the other hand, only devices equipped with the appropriate hardware and software combination could assume the role of an end point for a given purpose, such as sensor or actuator.

3.2.2. Logical Topologies

Most of the traffic over the LLN is publish/subscribe of sensor data from the field device towards a sink that can be a backbone router, a gateway, or a controller/manager.  The destination of the sensor data is an Infrastructure device that sits on the backbone and is reachable via one or more backbone routers.

For security, reliability, availability or serviceability reasons, it is often required that the logical topologies are not physically congruent over the radio network, that is, they form logical partitions of the LLN.  For instance, a routing topology that is set up for control should be isolated from a topology that reports the temperature and the status of the vents, if that second topology has lesser constraints for the security policy.  This isolation might be implemented as Virtual LANs and Virtual Routing Tables in shared nodes in the backbone, but corresponds effectively to physical nodes in the wireless network.

Since publishing the data is the raison d'etre for most of the sensors, in some cases it makes sense to proactively build a set of routes between the sensors and one or more backbone routers and maintain those routes at all times.  Also, because of the lossy nature of the network, the routing in place should attempt to propose multiple paths in the form of Directed Acyclic Graphs oriented towards the destination.
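
The non-normative sketch below illustrates, under simplifying assumptions, how such a sink-oriented Directed Acyclic Graph with multiple parents per node could be derived from link costs.  The centralized relaxation, the link costs and the node names are all hypothetical; a real LLN protocol would build this incrementally from advertisements.

   # Illustrative sketch: build a DAG oriented towards a sink, where each
   # node keeps every neighbor with a lower cost-to-sink as a parent.
   links = {                      # assumed cost of each usable radio link
       ("sink", "A"): 1.0, ("sink", "B"): 1.2,
       ("A", "C"): 1.1, ("B", "C"): 1.0, ("A", "D"): 1.5,
       ("C", "D"): 1.0, ("B", "D"): 2.0,
   }

   def build_dag(links, sink="sink"):
       """Assign each node a rank (cost towards the sink) and a parent set
       containing every neighbor with a strictly lower rank."""
       neighbors = {}
       for (u, v), c in links.items():
           neighbors.setdefault(u, {})[v] = c
           neighbors.setdefault(v, {})[u] = c
       rank = {sink: 0.0}
       # Simple Bellman-Ford style relaxation, done centrally here.
       for _ in range(len(neighbors)):
           for node, nbrs in neighbors.items():
               for nbr, cost in nbrs.items():
                   if nbr in rank:
                       rank[node] = min(rank.get(node, float("inf")),
                                        rank[nbr] + cost)
       parents = {n: sorted(nbr for nbr in neighbors[n]
                            if rank.get(nbr, float("inf")) < rank[n])
                  for n in neighbors if n != sink}
       return rank, parents

   rank, parents = build_dag(links)
   print(parents["D"])   # ['A', 'B', 'C'] - several upward paths to the sink
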

In contrast with the general requirement of maintaining default routes towards the sinks, the need for field device to field device connectivity is very specific and rare, though the associated traffic might be of foremost importance.  Field device to field device routes are often the most critical, optimized and well-maintained routes.  A class 0 control loop requires guaranteed delivery and extremely tight response times.  Both the respect of criteria in the route computation and the quality of the maintenance of the route are critical for the field devices' operation.  Typically, a control loop will be using a dedicated direct wire that has very different capabilities, cost and constraints than the wireless medium, with the need to use a wireless path as a backup route only in case of loss of the wired path.

Although each field device to field device route computation has specific constraints in terms of latency and availability, it can be expected that the shortest possible path will often be selected and that this path will be routed inside the LLN as opposed to via the backbone.  It can also be noted that the lifetimes of the routes might range from minutes for a mobile worker to tens of years for a command and control closed loop.  Finally, time-varying user requirements for latency and bandwidth will change the constraints on the routes, which might either trigger a constrained route recomputation, a reprovisioning of the underlying L2 protocols, or both in that order.  For instance, a wireless worker may initiate a bulk transfer to configure or diagnose a field device.  A level sensor device may need to perform a calibration and send a bulk file to a plant.

4. Requirements related to Traffic Characteristics

[ISA100.11a] selected IPv6 as its Network Layer for a number of reasons, including the huge address space and the large potential size of a subnet, which can range up to 10K nodes in a plant deployment.  In the ISA100 model, industrial applications fall into four large service categories:

1. Periodic data (aka buffered).  Data that is generated periodically and has a well-understood data bandwidth requirement, both deterministic and predictable.  Timely delivery of such data is often the core function of a wireless sensor network and permanent resources are assigned to ensure that the required bandwidth stays available.  Buffered data usually exhibits a short time to live, and the newer reading obsoletes the previous.  In some cases, alarms are low-priority information that gets repeated over and over.  The end-to-end latency of this data is not as important as the regularity with which the data is presented to the plant application.

2. Event data.  This category includes alarms and aperiodic data reports with bursty data bandwidth requirements.  In certain cases, alarms are critical and require a priority service from the network.

3. Client/Server.  Many industrial applications are based on a client/server model and implement a command/response protocol.  The data bandwidth required is often bursty.
The acceptable round-trip latency for some legacy systems was based on the time to send tens of bytes over a 1200 baud link.  Hundreds of milliseconds is typical.  This type of request is statistically multiplexed over the LLN and cost-based fair-share best-effort service is usually expected.

4. Bulk transfer.  Bulk transfers involve the transmission of blocks of data in multiple packets where temporary resources are assigned to meet a transaction time constraint.  Transient resources are assigned for a limited time (related to file size and data rate) to meet the bulk transfer service requirements.

4.1. Service Requirements

The following service parameters can affect routing decisions in a resource-constrained network:

o Data bandwidth - the bandwidth might be allocated permanently or for a period of time to a specific flow that usually exhibits well-defined properties of burstiness and throughput.  Some bandwidth will also be statistically shared between flows in a best-effort fashion.

o Latency - the time taken for the data to transit the network from the source to the destination.  This may be expressed in terms of a deadline for delivery.  Most monitoring latencies will be in seconds to minutes.

o Transmission phase - process applications can be synchronized to wall clock time and require coordinated transmissions.  A common coordination frequency is 4 Hz (250 ms).

o Service contract type - revocation priority.  LLNs have limited network resources that can vary with time.  This means the system can become fully subscribed or even oversubscribed.  System policies determine how resources are allocated when resources are oversubscribed.  The choices are blocking and graceful degradation.

o Transmission priority - the means by which limited resources within field devices are allocated across multiple services.  For transmissions, a device has to select which packet in its queue will be sent at the next transmission opportunity.  Packet priority is used as one criterion for selecting the next packet.  For reception, a device has to decide how to store a received packet.  The field devices are memory-constrained and receive buffers may become full.  Packet priority is used to select which packets are stored or discarded.

The routing protocol MUST also support different metric types for each link used to compute the path according to some objective function (e.g. minimize latency) depending on the nature of the traffic.

For these reasons, the ROLL routing infrastructure is REQUIRED to compute and update constrained routes on demand, and it can be expected that this model will become more prevalent for field device to field device connectivity as well as for some field device to Infrastructure device traffic over time.

Industrial application data flows between field devices are not necessarily symmetric.  In particular, asymmetrical cost and unidirectional routes are common for published data and alerts, which represent the major part of the sensor traffic.  The routing protocol MUST be able to compute a set of unidirectional routes with potentially different costs that are composed of one or more non-congruent paths.
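
As a non-normative illustration of how different objective functions lead to different costs for the same candidate path, the Python sketch below evaluates one route against three possible objectives.  The metric names, weights and objective labels are assumptions made for this example, not terms defined by ROLL or ISA100.

   # Illustrative sketch: cost of one candidate unidirectional path under
   # different objective functions.
   path = [  # assumed per-link metrics along one candidate route
       {"latency_ms": 12, "delivery_prob": 0.98, "min_node_energy": 0.80},
       {"latency_ms": 15, "delivery_prob": 0.95, "min_node_energy": 0.35},
       {"latency_ms": 10, "delivery_prob": 0.99, "min_node_energy": 0.90},
   ]

   def path_cost(path, objective):
       if objective == "minimize-latency":
           return sum(link["latency_ms"] for link in path)
       if objective == "maximize-reliability":
           prob = 1.0
           for link in path:
               prob *= link["delivery_prob"]
           return 1.0 - prob              # lower cost = better end-to-end delivery
       if objective == "maximize-lifetime":
           return 1.0 - min(link["min_node_energy"] for link in path)
       raise ValueError("unknown objective function")

   print(path_cost(path, "minimize-latency"))                  # 37 (ms)
   print(round(path_cost(path, "maximize-reliability"), 3))    # ~0.078
   print(path_cost(path, "maximize-lifetime"))                 # 0.65, penalizes the weak node
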

As multiple paths are set up and a variety of flows traverse the network towards the same destination, for instance a node acting as a sink for the LLN, the use of an additional marking/tagging mechanism based on upper layer information will be REQUIRED for intermediate routers to discriminate the flows and perform the appropriate routing decisions using only the content of the IPv6 packet (e.g. use of DSCP, Flow Label).

4.2. Configurable Application Requirement

Time-varying user requirements for latency and bandwidth may require changes in the provisioning of the underlying L2 protocols.  A technician may initiate a query/response session or bulk transfer to diagnose or configure a field device.  A level sensor device may need to perform a calibration and send a bulk file to a plant.  The routing protocol MUST support the ability to recompute paths based on Network Layer abstractions of the underlying link attributes/metric that may change dynamically.

4.3. Different Routes for Different Flows

Because different service categories have different service requirements, it is often desirable to have different routes for different data flows between the same two endpoints.  For example, alarm or periodic data from A to Z may require path diversity with specific latency and reliability.  A file transfer between A and Z may not need path diversity.  The routing algorithm MUST be able to generate different routes with different characteristics (e.g., optimized according to different costs).

Dynamic or configured states of links and nodes influence the capability of a given path to fulfill operational requirements such as stability, battery cost or latency.  Constraints such as battery lifetime derive from the application itself, and because industrial application data flows are typically well-defined and well-controlled, it is usually possible to estimate the battery consumption of a router for a given topology.

The routing protocol MUST support the ability to (re)compute paths based on Network Layer abstractions of upper layer constraints to maintain the level of operation within required parameters.  Such information MAY be advertised by the routing protocol as metrics that enable routing algorithms to establish appropriate paths that fit the upper layer constraints.

The handling of an IPv6 packet by the Network Layer operates on the standard properties and the settings of the IPv6 packet header fields.  These fields include the 3-tuple of the Flow Label and the Source and Destination Addresses, which can be used to identify a flow, and the Traffic Class octet, which can be used to influence the Per-Hop Behavior in intermediate routers.

An application MAY choose how to set those fields for each packet or for streams of packets, and the routing protocol specification SHOULD state how different field settings will be handled to perform different routing decisions.
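
A minimal non-normative sketch of such flow discrimination at an intermediate router is shown below.  It uses only IPv6 header fields -- the (Flow Label, Source, Destination) 3-tuple and the DSCP bits of the Traffic Class octet.  The route names, the DSCP values chosen and the table layout are assumptions for illustration only.

   # Illustrative sketch: selecting among pre-installed routes using only
   # IPv6 header fields visible to an intermediate router.
   ROUTE_TABLE = {
       # a (flow label, src, dst) 3-tuple pinned to a specific path
       (0x2A7B1, "2001:db8::10", "2001:db8::1"): "path-low-latency",
   }
   DSCP_DEFAULT_ROUTES = {
       46: "path-low-latency",     # assumed marking for alarms / control
       0:  "path-energy-saving",   # best-effort periodic readings
   }

   def select_route(flow_label, src, dst, traffic_class):
       dscp = traffic_class >> 2   # DSCP is the upper 6 bits of Traffic Class
       route = ROUTE_TABLE.get((flow_label, src, dst))
       if route is None:
           route = DSCP_DEFAULT_ROUTES.get(dscp, "path-energy-saving")
       return route

   print(select_route(0x2A7B1, "2001:db8::10", "2001:db8::1", 0xB8))  # pinned flow
   print(select_route(0x00000, "2001:db8::22", "2001:db8::1", 0x00))  # best effort
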

5. Reliability Requirements

LLN reliability encompasses several distinct aspects:

1) Availability of source to destination connectivity when the application needs it, expressed in number of successes / number of attempts

2) Availability of source to destination connectivity when the application might need it, expressed in number of potential failures / available bandwidth,

3) Ability, expressed as the number of successes divided by the number of attempts, to get data delivered from source to destination within a capped time,

4) How well a network (serving many applications) achieves end-to-end delivery of packets within a bounded latency

5) Trustworthiness of data that is delivered to the sinks.

6) and others, depending on the specific case...

This makes quantifying reliability the equivalent of plotting it on a three-plus-dimensional graph.  Different applications have different requirements, and expressing reliability as a one-dimensional parameter, like 'the reliability of my wireless network is 99.9%', often creates more confusion than clarity.

The impact of not receiving sensor data due to sporadic network outages can be devastating if this happens unnoticed.  However, if destinations that expect periodic sensor data or alarm status updates fail to get them, these systems can automatically take appropriate actions that prevent dangerous situations.  Depending on the wireless application, the appropriate action ranges from initiating a shutdown within 100 ms, to using a last known good value for as many as N successive samples, to sending an operator into the plant to collect monthly data in the conventional way, i.e., with a portable sensor, paper and a clipboard.

The impact of receiving corrupted data, and not being able to detect that received data is corrupt, is often more dangerous.  Data corruption can come from random bit errors due to white noise, from occasional bursty interference sources like thunderstorms or leaky microwave ovens, or from conscious attacks by adversaries.

Another critical aspect for the routing is the capability to ensure maximum disruption time and route maintenance.  The maximum disruption time is the time it takes at most for a specific path to be restored when broken.  Route maintenance ensures that a path is monitored and restored, when broken, within the maximum disruption time.  Maintenance should also ensure that a path continues to provide the service for which it was established, for instance in terms of bandwidth, jitter and latency.

In industrial applications, availability is usually defined with respect to end-to-end delivery of packets within a bounded latency.  Availability requirements vary over many orders of magnitude.  Some non-critical monitoring applications may tolerate an availability of less than 90% with hours of latency.  Most industrial standards, such as HART7, have set user availability expectations at 99.9%.  Regulatory requirements are a driver for some industrial applications.  Regulatory monitoring requires high data integrity because lost data is assumed to be out of compliance and subject to fines.  This can drive up either availability or trustworthiness requirements.

Because LLN link stability is often low, path diversity is critical.  Hop-by-hop link diversity is used to improve latency-bounded reliability by sending data over diverse paths.
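
The effect of path diversity on latency-bounded availability can be illustrated with a short non-normative calculation.  The numbers are invented, and the combination formula assumes the paths fail independently, which is optimistic when paths share nodes or spectrum.

   # Illustrative sketch: combined availability of a multipath route,
   # assuming the per-path availabilities are independent.
   def multipath_availability(path_availabilities):
       """The route fails only if every path fails."""
       failure = 1.0
       for a in path_availabilities:
           failure *= (1.0 - a)
       return 1.0 - failure

   print(multipath_availability([0.99]))               # 0.99   - single path
   print(multipath_availability([0.99, 0.97]))         # 0.9997 - two diverse paths
   print(multipath_availability([0.99, 0.97, 0.95]))   # ~0.999985
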

Because data from field devices are aggregated and funneled at the LLN access point before they are routed to plant applications, LLN access point redundancy is an important factor in overall availability.  A route that connects a field device to a plant application may have multiple paths that go through more than one LLN access point.  The routing protocol MUST be able to compute paths of not-necessarily-equal cost toward a given destination so as to enable load balancing across a variety of paths.  The availability of each path in a multipath route can change over time.  Hence, it is important to measure the availability on a per-path basis and select a path (or paths) according to the availability requirements.

6. Device-Aware Routing Requirements

Wireless LLN nodes in industrial environments are powered by a variety of sources.  Battery-operated devices with lifetime requirements of at least five years are the most common.  Battery-operated devices have a cap on their total energy, typically can report an estimate of remaining energy, and typically do not have constraints on the short-term average power consumption.  Energy scavenging devices are more complex.  These systems contain both a power scavenging device (such as solar, vibration, or temperature difference) and an energy storage device, such as a rechargeable battery or a capacitor.  These systems, therefore, have limits both on long-term average power consumption (which cannot exceed the average scavenged power over the same interval) and on short-term consumption, as imposed by the energy storage.  For solar-powered systems, the energy storage system is generally designed to provide days of power in the absence of sunlight.  Many industrial sensors run off of a 4-20 mA current loop, and can scavenge on the order of milliwatts from that source.  Vibration monitoring systems are a natural choice for vibration scavenging, which typically only provides tens or hundreds of microwatts.  Due to industrial temperature ranges and desired lifetimes, the choices of energy storage devices can be limited, and the resulting stored energy is often comparable to the energy cost of sending or receiving a packet rather than the energy of operating the node for several days.  And of course, some nodes will be line-powered.

Example 1: solar panel, lead-acid battery sized for two weeks of rain.

Example 2: vibration scavenger, 1 mF tantalum capacitor.

Field devices have limited resources.  Low-power, low-cost devices have limited memory for storing route information.  Typical field devices will have a finite number of routes they can support for their embedded sensor/actuator application and for forwarding other devices' packets in a mesh network slotted-link.

Users may strongly prefer that the same device have different lifetime requirements in different locations.  A sensor monitoring a non-critical parameter in an easily accessed location may have a lifetime requirement that is shorter and may tolerate more statistical variation than a mission-critical sensor in a hard-to-reach place that requires a plant shutdown in order to replace.

The routing algorithm MUST support node-constrained routing (e.g. taking into account the existing energy state as a node constraint).  Node constraints include power and memory, as well as constraints placed on the device by the user, such as battery life.
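
One possible, non-normative way to fold such node constraints into a route cost is sketched below.  The field names, the weights and the penalty formula are assumptions introduced for illustration; they do not come from any ROLL or ISA100 specification.

   # Illustrative sketch: penalize routes through depleted or constrained
   # nodes so that node state influences path selection.
   def node_penalty(node):
       if node["line_powered"]:
           return 0.0
       lifetime_margin = node["predicted_lifetime_y"] / node["required_lifetime_y"]
       energy_term = 1.0 - node["remaining_energy"]   # 0 = full, 1 = empty
       memory_term = 0.5 if node["free_route_entries"] == 0 else 0.0
       return energy_term + memory_term + (1.0 if lifetime_margin < 1.0 else 0.0)

   def route_cost(path_nodes, link_cost_total):
       return link_cost_total + sum(node_penalty(n) for n in path_nodes)

   relay = {"line_powered": False, "remaining_energy": 0.2,
            "predicted_lifetime_y": 3.0, "required_lifetime_y": 5.0,
            "free_route_entries": 4}
   print(round(node_penalty(relay), 2))   # 1.8 - a poor choice as a forwarder
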

7. Broadcast/Multicast requirements

Some existing industrial plant applications do not use broadcast or multicast addressing to communicate to field devices.  Unicast address support is sufficient for them.

In some other industrial process automation environments, multicast over IP is used to deliver to multiple nodes that may be functionally similar or not.  Example usages are:

1) Delivery of alerts to multiple similar servers in an automation control room.  Alerts are multicast to a group address based on the part of the automation process where the alerts arose (e.g., the multicast address "all-nodes-interested-in-alerts-for-process-unit-X").  This is always a restricted-scope multicast, not a broadcast.

2) Delivery of common packets to multiple routers over a backbone, where the packets result in each receiving router initiating a multicast (sometimes as a full broadcast) within the LLN.  For instance, this can be a byproduct of having potentially physically separated backbone routers that can inject messages into different portions of the same larger LLN.

3) Publication of measurement data to more than one subscriber.  This feature is useful in some peer-to-peer control applications.  For example, level position may be useful to a controller that operates the flow valve and also to the overfill alarm indicator.  Both controller and alarm indicator would receive the same publication sent as a multicast by the level gauge.

All of these uses require a 1:N security mechanism as well; they aren't of any use if the end-to-end security is only point-to-point.

It is quite possible that first-generation wireless automation field networks can be adequately useful without either of these capabilities, but in the near future, wireless field devices with communication controllers and protocol stacks will require control and configuration, such as firmware downloading, that may benefit from broadcast or multicast addressing.

The routing protocol SHOULD support multicast addressing.

8. Protocol Performance requirements

The routing protocol MUST converge after the addition of a new device within several minutes, and SHOULD converge within tens of seconds, such that a device is able to establish connectivity to any other point in the network or determine that there is a connectivity issue.  Any routing algorithm used to determine how to route packets in the network MUST be capable of routing packets to and from a newly added device within several minutes of its addition, and SHOULD be able to perform this function within tens of seconds.

The routing protocol MUST distribute sufficient information about link failures to enable traffic to be routed such that all service requirements (especially latency) continue to be met.  This places a requirement on the speed of distribution and convergence of this information as well as the responsiveness of any routing algorithms used to determine how to route packets.  This requirement only applies at normal link failure rates (see Section 5) and MAY degrade during failure storms.

Any algorithm that computes routes for packets in the network MUST be able to perform route computations in advance of needing to use the route.  Since such algorithms are required to react to link failures, link usage information, and other dynamic link properties as the information is distributed by the routing protocol, the algorithms SHOULD recompute routes upon the receipt of new information.

9. Mobility requirements

Various economic factors have contributed to a reduction of trained workers in the plant.  The industry as a whole appears to be trying to solve this problem with what is called the "wireless worker".  Carrying a PDA or something similar, this worker will be able to accomplish more work in less time than the older, better-trained workers that he or she replaces.  Whether or not the premise is valid, the use case is commonly presented: the worker will be wirelessly connected to the plant IT system to download documentation, instructions, etc., and will need to be able to connect "directly" to the sensors and control points in or near the equipment on which he or she is working.  It is possible that this "direct" connection could come via the normal LLN data collection network.  This connection is likely to require higher bandwidth and lower latency than the normal data collection operation.

PDAs are typically used as the user interfaces for plant historians, asset management systems, and the like.  It is still undecided whether these PDAs will use the LLN directly to talk to field sensors, or whether they will rather use other wireless connectivity that proxies back into the field, or to anywhere else.

The routing protocol SHOULD support the wireless worker with fast network connection times of a few seconds, and low command and response latencies to the plant behind the LLN access points, to applications, and to field devices.  The routing protocol SHOULD also support the bandwidth allocation for bulk transfers between the field device and the handheld device of the wireless worker.  The routing protocol SHOULD support walking speeds for maintaining network connectivity as the handheld device changes position in the wireless network.

Some field devices will be mobile.  These devices may be located on moving parts such as rotating components or they may be located on vehicles such as cranes or forklifts.  The routing protocol SHOULD support vehicular speeds of up to 35 km/h.

10. Manageability requirements

The process and control industry is manpower-constrained.  The aging demographics of plant personnel are causing a looming manpower problem for industry across many markets.  The goal for the industrial networks is to have the installation process not require any new skills for the plant personnel.  The person would install the wireless sensor or wireless actuator the same way the wired sensor or wired actuator is installed, except that the step to connect a wire is eliminated.

Most users in fact demand even further simplified provisioning methods: a plug-and-play operation that would be fully transparent to the user.  This requires the availability of open and untrusted side channels for new joiners, and it requires strong and automated authentication so that networks can automatically accept or reject new joiners.  Ideally, for a user, adding new routing devices should be as easy as dragging and dropping an icon from a pool of authenticated new joiners into a pool for the wired domain that this new sensor should connect to.  Under the hood, invisible to the user, auditable security mechanisms should take care of new device authentication and secret join key distribution.  These more sophisticated 'over the air' secure provisioning methods should eliminate the use of traditional configuration tools for setting up devices before they are ready to securely join an LLN access point.
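
The non-normative sketch below illustrates the "drag and drop" admission decision described above under simplified assumptions: the credential check is a placeholder, and the trust anchors, device names and domain labels are invented.  A real deployment would verify a certificate chain or a join key established out of band.

   # Illustrative sketch: admit an authenticated joiner and record which
   # wired domain its traffic must be mapped to by the LLN access point.
   TRUSTED_ISSUERS = {"PlantCA", "VendorXCA"}    # assumed trust anchors
   DOMAIN_ASSIGNMENTS = {}                        # device id -> wired domain

   def authenticate(device_id, credential_issuer):
       return credential_issuer in TRUSTED_ISSUERS  # placeholder for real crypto

   def admit(device_id, credential_issuer, target_domain):
       if not authenticate(device_id, credential_issuer):
           return "rejected"
       DOMAIN_ASSIGNMENTS[device_id] = target_domain
       return "admitted to " + target_domain

   print(admit("corrosion-sensor-17", "PlantCA", "office-domain"))   # admitted
   print(admit("unknown-node-99", "NoSuchCA", "process-control"))    # rejected
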

The routing protocol SHOULD be fully configurable over the air as part of the joining process of a new routing device.

There will be many new applications where, even without any human intervention at the plant, devices that have never been on site before should be allowed, based on their credentials and crypto capabilities, to connect anyway.  Examples are 3rd party road tankers, rail cargo containers with overfill protection sensors, or consumer cars that need to be refueled with hydrogen by robots at future petrol stations.

The routing protocol for LLNs is expected to be easy to deploy and manage.  Because the number of field devices in a network is large, provisioning the devices manually may not make sense.  The proper operation of the routing protocol MAY require that the node be commissioned with information about itself, like identity, security tokens, radio standards and frequencies, etc.

The routing protocol SHOULD NOT require the preprovisioning of information about the environment where the node will be deployed.  The routing protocol MUST enable the full discovery and setup of the environment (available links, selected peers, reachable network).  The protocol MUST allow the distribution of its own configuration to be performed by some external mechanism from a centralized management controller.

11. Antagonistic requirements

This document contains a number of strongly required constraints on the ROLL routing protocol.  Some of those strong requirements might appear antagonistic and as such impossible to fulfill at the same time.

For instance, the strong requirement of power economy applies to general routing but is not uniform, since it is reasonable to spend more energy on ensuring the availability of a short emergency closed loop path than it is to maintain an alert path that is used for regular updates on the operating status of the device.  In the same fashion, the strong requirement of easy provisioning does not match easily the strong security requirements that can be needed to implement a factory policy.  Then again, a non-default, non-trivial setup can be acceptable as long as the default enables a device to join without configuration and yet provides some degree of security.

Convergence time and network size are also antagonistic.  The values expressed in the Protocol Performance requirements section apply to an average network with tens of devices.  The use of a backbone can maintain that level of performance and still enable the network to grow to thousands of nodes.  In any case, it is acceptable for the convergence time to grow reasonably with the network size.
1051 12.  Security Considerations

1053 Given that wireless sensor networks in industrial automation operate
1054 in systems that have substantial financial and human safety
1055 implications, security is of considerable concern.  Levels of
1056 security violation that are tolerated as a "cost of doing business"
1057 in the banking industry are not acceptable when in some cases
1058 literally thousands of lives may be at risk.

1060 Security is easily confused with a guarantee of availability.  When
1061 discussing wireless security, it is important to distinguish clearly
1062 between the risks of temporarily losing connectivity, say due to a
1063 thunderstorm, and the risks associated with knowledgeable adversaries
1064 attacking a wireless system.  Deliberate attacks need to be split
1065 between 1) attacks on the actual application served by the wireless
1066 devices and 2) attacks that exploit the presence of a wireless access
1067 point that may provide connectivity onto legacy wired plant networks,
1068 that is, attacks that have little to do with the wireless devices in the
1069 LLNs.  The second type of attack, where access points act as
1070 wireless backdoors that may allow an attacker outside the fence to
1071 access typically non-secured process control and/or office networks,
1072 is usually the one that creates exposures where lives are at
1073 risk.  This implies that the LLN access point on its own must possess
1074 functionality that guarantees domain segregation, and thus prohibits
1075 many types of traffic further upstream.

1077 Current generation industrial wireless device manufacturers are
1078 specifying security at the MAC layer and the transport layer.  A
1079 shared key is used to authenticate messages at the MAC layer.  At the
1080 transport layer, commands are encrypted with statistically unique,
1081 randomly generated end-to-end session keys.  HART7 [HART] and
1082 ISA100.11a [ISA100.11a] are examples of security systems for
industrial wireless networks.

1084 Although such symmetric key encryption and authentication mechanisms
1085 at the MAC and transport layers may protect reasonably well during the
1086 operational lifecycle, the initial network boot (provisioning) step in many cases
1087 requires more sophisticated steps to securely place the initial secret
1088 keys in field devices.  It is vital that, also during these steps,
1089 ease of deployment and the freedom to mix and match products
1090 from different suppliers do not complicate life for those who
1091 deploy and commission.  Given average skill levels in the field, and
1092 given serious resource constraints in the market, investing a little
1093 more in sensor node hardware and software, so that new devices
1094 can automatically be deemed trustworthy and thus automatically join
1095 the domains that they should join with just one drag-and-drop action
1096 by those in charge of deployment, will yield faster adoption and
1097 proliferation of LLN technology.

1099 Industrial plants may not maintain the same level of physical
1100 security for field devices that is associated with traditional
1101 network sites such as locked IT centers.  In industrial plants it
1102 must be assumed that the field devices have marginal physical
1103 security and might be compromised.  The routing protocol SHOULD limit
1104 the risk incurred by one node being compromised, for instance by
1105 proposing non-congruent paths for a given route and balancing the
1106 traffic across the network.
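As a purely illustrative, non-normative sketch of the non-congruent path idea
above, the function below greedily selects candidate paths whose intermediate
nodes do not overlap, so that a single compromised node carries only part of
a flow's traffic.  The topology, node names, and candidate paths are invented
for the example.

   # Illustrative sketch only -- candidate paths and node names are
   # invented; this is not a mechanism defined by this document.
   def node_disjoint_paths(candidates, max_paths=2):
       """Greedily select up to max_paths paths (node-name lists that
       include source and destination) sharing no intermediate node."""
       selected, used = [], set()
       for path in sorted(candidates, key=len):   # prefer shorter paths
           intermediates = set(path[1:-1])
           if intermediates & used:
               continue                           # overlaps a chosen path
           selected.append(path)
           used |= intermediates
           if len(selected) == max_paths:
               break
       return selected

   if __name__ == "__main__":
       candidates = [
           ["sensor", "r1", "r2", "gateway"],
           ["sensor", "r1", "r3", "gateway"],     # shares r1 with the first
           ["sensor", "r4", "r5", "gateway"],     # fully disjoint alternative
       ]
       for p in node_disjoint_paths(candidates):
           print(" -> ".join(p))

Balancing a flow across the selected paths then limits what any single
intermediate node can observe or disrupt.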
1108 The routing protocol SHOULD compartmentalize the trust placed in
1109 field devices so that a compromised field device does not destroy the
1110 security of the whole network.  The routing protocol MUST be configured and
1111 managed using secure messages and protocols that prevent outsider
1112 attacks and limit insider attacks from field devices installed in
1113 insecure locations in the plant.

1115 The wireless environment typically forces the abandonment of
1116 classical 'by perimeter' thinking when trying to secure network
1117 domains.  Wireless nodes in LLNs should thus be regarded as
1118 little islands with trusted kernels, situated in an ocean of
1119 untrusted connectivity, an ocean that might be full of pirate ships.
1120 Consequently, confidence in node identity and the ability to challenge the
1121 authenticity of source node credentials become more relevant.
1122 Cryptographic boundaries inside devices that clearly demarcate the
1123 border between trusted and untrusted areas need to be drawn.
1124 Protection against compromise of the cryptographic boundaries inside
1125 the hardware of devices is outside the scope of this document.

1127 Note that because nodes are usually expected to be capable of
1128 routing, the end node security requirements are usually a superset of
1129 the router requirements, in order to prevent an end node from being
1130 used to inject forged information into the network that could alter
1131 the plant operations.

1133 Additional details of security across all application scenarios are
1134 provided in the ROLL Security Framework
1135 [I-D.tsao-roll-security-framework].  Implications of these security
1136 requirements for the routing protocol itself are a topic for future
1137 work.

1139 13.  IANA Considerations

1141 This document includes no request to IANA.

1143 14.  Acknowledgements

1145 Many thanks to Rick Enns, Alexander Chernoguzov and Chol Su Kang for
1146 their contributions.

1148 15.  References

1150 15.1.  Normative References

1152 [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
1153            Requirement Levels", BCP 14, RFC 2119, March 1997.

1155 15.2.  Informative References

1157 [I-D.ietf-roll-terminology]
1158            Vasseur, J., "Terminology in Low power And Lossy
1159            Networks", draft-ietf-roll-terminology-01 (work in
1160            progress), May 2009.

1162 [I-D.tsao-roll-security-framework]
1163            Tsao, T., Alexander, R., Dohler, M., Daza, V., and A.
1164            Lozano, "A Security Framework for Routing over Low Power
1165            and Lossy Networks", draft-tsao-roll-security-framework-00
1166            (work in progress), February 2009.

1168 15.3.  External Informative References

1170 [HART]     www.hartcomm.org, "Highway Addressable Remote Transducer",
1171            a group of specifications for industrial process and
1172            control devices administered by the HART Foundation.

1174 [ISA100.11a]
1175            ISA, "ISA100, Wireless Systems for Automation", May 2008,
1176            <http://www.isa.org/Community/
1177            SP100WirelessSystemsforAutomation>.

1179 Authors' Addresses

1181 Kris Pister (editor)
1182 Dust Networks
1183 30695 Huntwood Ave.
1184 Hayward, 94544
1185 USA

1187 Email: kpister@dustnetworks.com

1189 Pascal Thubert (editor)
1190 Cisco Systems
1191 Village d'Entreprises Green Side
1192 400, Avenue de Roumanille
1193 Batiment T3
1194 Biot - Sophia Antipolis 06410
1195 FRANCE

1197 Phone: +33 497 23 26 34
1198 Email: pthubert@cisco.com

1200 Sicco Dwars
1201 Shell Global Solutions International B.V.
1202 Sir Winston Churchilllaan 299 1203 Rijswijk 2288 DC 1204 Netherlands 1206 Phone: +31 70 447 2660 1207 Email: sicco.dwars@shell.com 1209 Tom Phinney 1210 5012 W. Torrey Pines Circle 1211 Glendale, AZ 85308-3221 1212 USA 1214 Phone: +1 602 938 3163 1215 Email: tom.phinney@cox.net