Networking Working Group                                       K. Pister
Internet-Draft                                                   R. Enns
Intended status: Informational                             Dust Networks
Expires: May 12, 2008                                        JP. Vasseur
                                                              P. Thubert
                                                      Cisco Systems, Inc
                                                        November 9, 2007

      Industrial Routing Requirements in Low Power and Lossy Networks
                  draft-pister-rl2n-indus-routing-reqs-00

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 12, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2007).

Abstract

   Wireless, low power field devices enable industrial users to
   significantly increase the amount of information collected and the
   number of control points that can be remotely managed.  The
   deployment of these wireless devices will significantly improve the
   productivity and safety of the plants while increasing the
   efficiency of the plant workers.  For wireless devices to have a
   significant advantage over wired devices in an industrial
   environment, the wireless network needs to have three qualities: low
   power, high reliability, and easy installation and maintenance.
   The aim of this document is to analyze the requirements for the
   routing protocol used for low power and lossy networks (L2N) in
   industrial environments.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Terminology
   2.  Introduction
     2.1.  Applications and Traffic Patterns
   3.  Quality of Service (QoS) Routing Requirements
     3.1.  Configurable Application Requirement
   4.  Network Topology
   5.  Device-Aware Routing Requirements
   6.  Broadcast/Multicast
   7.  Route Establishment Time
   8.  Mobility
   9.  Manageability and Ease of Use
   10. Security
   11. Informative Reference
   12. IANA Considerations
   13. Acknowledgements
   14. References
     14.1. Normative References
     14.2. Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Terminology

   Access Point: An infrastructure device that connects the low power
   and lossy network system to a plant's backbone network.

   Actuator: A field device that moves or controls plant equipment.

   Channel Hopping: An algorithm by which field devices synchronously
   change channels during operation.

   Channel: An RF frequency band used to transmit a modulated signal
   carrying packets.

   Closed Loop Control: A process whereby a host application controls
   an actuator based on information sensed by field devices.

   Downstream: The data direction traveling from the host application
   to the field device.

   Field Device: A physical device placed in the plant's operating
   environment (both RF and environmental).  Field devices include
   sensors and actuators as well as network routing devices and access
   points in the plant.

   Superframe: A collection of timeslots repeating at a constant rate.

   HART: "Highway Addressable Remote Transducer", a group of
   specifications for industrial process and control devices
   administered by the HART Foundation (see [HART]).  The latest
   version of the specifications is HART7, which includes the additions
   for WirelessHART.

   Host Application: A process running in the plant that communicates
   with field devices to perform tasks that may include control,
   monitoring, and data gathering.

   ISA: "International Society of Automation".  ISA is an ANSI-
   accredited standards-making society.  ISA100 is an ISA working group
   whose charter includes defining a family of standards for industrial
   automation.  ISA100.11a is a work group within ISA100 that is
   working on a standard for non-critical process and control
   applications.
   L2N: Low Power and Lossy Network.

   Slotted-Link: A data structure, associated with a superframe, that
   describes a connection between field devices and comprises a
   timeslot assignment along with channel and usage information.

   Open Loop Control: A process whereby a plant technician controls an
   actuator over the network, where the decision is influenced by
   information sensed by field devices.

   Timeslot: A fixed time interval that may be used for the
   transmission or reception of a packet between two field devices.  A
   timeslot used for communications is associated with a slotted-link.

   Upstream: The data direction traveling from the field device to the
   host application.

   RF: Radio Frequency.

   Sensor: A field device that measures data and/or detects an event.

   RL2N: Routing in Low Power and Lossy Networks.

2.  Introduction

   Wireless, low power field devices enable industrial users to
   significantly increase the amount of information collected and the
   number of control points that can be remotely managed.  The
   deployment of these wireless devices will significantly improve the
   productivity and safety of the plants while increasing the
   efficiency of the plant workers.

   Wireless field devices enable expansion of networked points by
   appreciably reducing the cost of installing a device.  The cost
   reductions come from eliminating cabling costs and from simplified
   planning.  Cabling for a field device can run from $100s/ft to
   $1,000s/ft depending on the safety regulations of the plant.
   Cabling also carries an overhead cost associated with planning the
   installation, where the cable has to run, and the various
   organizations that have to coordinate its deployment.  Doing away
   with the network and power cables reduces the planning and
   administrative overhead of installing a device.
   For wireless devices to have a significant advantage over wired
   devices in an industrial environment, the wireless network needs to
   have three qualities: low power, high reliability, and easy
   installation and maintenance.  The routing protocol used for low
   power and lossy networks (L2N) is important to fulfilling these
   goals.

   Industrial automation is segmented into two distinct application
   spaces, known as "process" or "process control" and "discrete
   manufacturing" or "factory automation".  In industrial process
   control, the product is typically a fluid (oil, gas, chemicals,
   ...).  In factory automation or discrete manufacturing, the products
   are individual elements (screws, cars, dolls).  While there is some
   overlap of products and systems between these two segments, they are
   surprisingly separate communities.  The specifications targeting
   industrial process control tend to have more tolerance for network
   latency than what is needed for factory automation.

   Both application spaces desire battery-operated networks of hundreds
   of sensors and actuators communicating with wired access points.  In
   an oil refinery, the total number of devices is likely to exceed one
   million, but the devices will be clustered into smaller networks
   reporting to existing wired infrastructure.

   Existing wired sensor networks in this space typically use
   communication protocols with low data rates - from 1,200 baud (wired
   HART) up to the one to two hundred kbps range for most of the
   others.  The existing protocols are often master/slave with
   command/response.

   Note that the total low power and lossy network system capacity for
   devices using the IEEE 802.15.4-2006 2.4 GHz radio is at most
   1.6 Mbps when spatial reuse of channels is not utilized.

2.1.  Applications and Traffic Patterns

   The industrial market classifies process applications into three
   broad categories and six classes.
   o  Safety

      *  Class 0: Emergency action - Always a critical function

   o  Control

      *  Class 1: Closed loop regulatory control - Often a critical
         function

      *  Class 2: Closed loop supervisory control - Usually a non-
         critical function

      *  Class 3: Open loop control - The operator takes action and
         controls the actuator (human in the loop)

   o  Monitoring

      *  Class 4: Alerting - Short-term operational effect (for
         example, event-based maintenance)

      *  Class 5: Logging and downloading/uploading - No immediate
         operational consequence (e.g., history collection, sequence-
         of-events, preventive maintenance)

   Critical functions affect the basic safety or integrity of the
   plant.  Timely delivery of messages becomes more important as the
   class number decreases.

   Industrial customers are interested in deploying wireless networks
   for the monitoring classes 4 and 5 and for the non-critical portions
   of classes 3 through 1.

   Classes 4 and 5 also include equipment monitoring, which is,
   strictly speaking, separate from process monitoring.  An example of
   equipment monitoring is the recording of motor vibrations to detect
   bearing wear.

   Most low power and lossy network systems in the near future will be
   used for low-frequency data collection.  Packets containing samples
   will be generated continuously; 90% of the market is covered by
   packet rates of between 1/s and 1/hour, with the average under
   1/min.  In industrial process applications these sensors include
   temperature, pressure, fluid flow, tank level, and corrosion.  Some
   sensors are bursty, such as vibration monitors, which may generate
   and transmit tens of kilobytes (hundreds to thousands of packets) of
   time-series data at reporting rates of minutes to days.

   Almost all of these sensors will have built-in microprocessors that
   may detect alarm conditions.
   Time-critical alarm packets are expected to have lower latency than
   sensor data and often require substantially more bandwidth.

   Some devices will transmit a log file every day, again with
   typically tens of kilobytes of data.  For these applications there
   is very little "downstream" traffic coming from the access point and
   traveling to particular sensors.  During diagnostics, however, a
   technician may be investigating a fault from a control room and
   expect to have "low" (human-tolerable) latency in a command/response
   mode.

   Low-rate control, often "human in the loop" or "open loop", is
   implemented today via communication to a centralized controller:
   sensor data makes its way through the access point to the
   centralized controller where it is processed, the operator sees the
   information and takes action, and control information is sent out to
   the actuator node in the network.

   In the future, it is envisioned that some open loop processes will
   be automated (closed loop) and packets will flow over local loops
   without involving the access point.  These closed loop controls for
   non-critical applications will be implemented on L2Ns.  Non-critical
   closed loop applications have a latency requirement that can be as
   low as 100 ms, but many control loops are tolerant of latencies
   above 1 s.

   In critical control, latencies of tens of milliseconds are typical.
   In many of these systems, if a packet does not arrive within the
   specified interval, the system will enter an emergency shutdown
   state, often with substantial financial repercussions.  For a
   1-second control loop in a system with a mean time between shutdowns
   target of 30 years, the latency requirement implies nine 9s of
   reliability.

   Thus, the routing protocol for L2Ns MUST support multi-topology
   routing (this is especially important for critical control
   applications).
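   The nine-9s figure can be checked with a short calculation (a
   sketch; only the 1-second loop period and the 30-year shutdown
   target are taken from the text above):

```python
# Back-of-the-envelope check of the "nine 9s" reliability claim:
# a 1 s control loop in which a single missed deadline triggers an
# emergency shutdown, with a 30-year mean time between shutdowns.

loop_period_s = 1.0
seconds_per_year = 365.25 * 24 * 3600
mtbs_s = 30 * seconds_per_year            # mean time between shutdowns

# One tolerated deadline miss per ~9.5e8 one-second delivery attempts.
attempts = mtbs_s / loop_period_s
p_miss = 1.0 / attempts
reliability = 1.0 - p_miss

print(f"{reliability:.10f}")              # 0.9999999989 -> nine 9s
```

   The per-packet delivery probability must therefore exceed
   1 - 1.1e-9, i.e., roughly nine consecutive 9s.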
   The routing protocol MUST provide the ability to color slotted-links
   (where the color corresponds to a user-defined slotted-link
   attribute); colors can be used to include or exclude slotted-links
   from a logical topology.

   For all but the most latency-tolerant applications, route discovery
   is likely to be too slow a process to initiate when a route failure
   is detected.

   The routing protocol MUST support multiple paths (a tree-based
   solution is not sufficient).

3.  Quality of Service (QoS) Routing Requirements

   The industrial applications fall into four large service categories:

   1.  Published data.  Data that is generated periodically and has a
       well-understood data bandwidth requirement.  The end-to-end
       latency of this data is not as important as the regularity with
       which it is presented to the host application.

   2.  Event data.  This category includes alarms and aperiodic data
       reports with bursty data bandwidth requirements.

   3.  Client/Server.  Many industrial applications are based on a
       client/server model and implement a command/response protocol.
       The data bandwidth required is often bursty.  The round-trip
       latency for some operations can be 200 ms.

   4.  Bulk transfer.  Bulk transfers involve the transmission of
       blocks of data in multiple packets, where temporary resources
       are assigned to meet a transaction time constraint.  Bulk
       transfers assign resources for a limited period of time to meet
       the QoS requirements.

   For industrial applications, QoS parameters include:

   o  Data bandwidth - periodic and burst statistics.

   o  Latency - the time taken for the data to transit the network from
      the source to the destination.  This may be expressed in terms of
      a deadline for delivery.

   o  Transmission phase - process applications can be synchronized to
      wall clock time and require coordinated transmissions.  A common
      coordination frequency is 4 Hz (250 ms).
   o  Reliability - the end-to-end data delivery statistic.  All
      applications have latency and reliability requirements.  In
      industrial applications, these vary over many orders of
      magnitude.  Some non-critical monitoring applications may
      tolerate latencies of days and reliability of less than 90%.
      Most monitoring latencies will be in seconds to minutes, and
      industrial standards such as HART7 have set user reliability
      expectations at 99.9%.  Regulatory requirements are a driver for
      some industrial applications.  Regulatory monitoring requires
      high data integrity because lost data is assumed to be out of
      compliance and subject to fines.  This can drive reliability
      requirements to higher than 99.9%.

   o  QoS contract type - revocation priority.  L2Ns have limited
      network resources that can vary with time.  This means the system
      can become fully subscribed or even oversubscribed.  System
      policies determine how resources are allocated when the system is
      oversubscribed.  The choices are blocking and graceful
      degradation.

   o  Transmission priority - within field devices there are limited
      resources that need to be allocated across multiple services.
      For transmissions, a device has to select which packet in its
      queue will be sent at the next transmission opportunity.  Packet
      priority is used as one criterion for selecting the next packet.
      For reception, a device has to decide how to store a received
      packet.  The field devices are memory constrained, and receive
      buffers may become full.  Packet priority is used to select which
      packets are stored or discarded.

   In industrial wireless L2Ns, a time-slotting technology is used.  A
   time-slotted media access protocol synchronizes channel hopping,
   which is one of the means used to make the wireless network
   reliable.  Timeslots are also employed to reduce power by minimizing
   the active duty cycle of field devices.
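   One common way such synchronized channel hopping can work is for
   every device to derive the channel for a slot from a shared slot
   counter (a sketch; this document does not specify a hopping
   algorithm, so the slot counter, channel offset, and hop sequence
   below are illustrative assumptions):

```python
# Illustrative sketch of synchronized channel hopping (the algorithm is
# an assumption, not taken from this document).  All devices share a
# monotonically increasing slot counter, so they hop in lockstep.

# The IEEE 802.15.4-2006 2.4 GHz band offers 16 channels (11..26).
HOP_SEQUENCE = [11, 16, 21, 26, 12, 17, 22, 13, 18, 23, 14, 19, 24, 15, 20, 25]

def slot_channel(slot_counter: int, channel_offset: int) -> int:
    """Map a network-wide slot counter and a per-link offset to a channel.

    Two links given different offsets in the same timeslot land on
    different channels, which enables channel reuse within one slot.
    """
    return HOP_SEQUENCE[(slot_counter + channel_offset) % len(HOP_SEQUENCE)]

# Successive slots of the same link visit different channels, averaging
# over per-channel interference and multipath fading.
channels = [slot_channel(t, channel_offset=3) for t in range(4)]
```

   The mapping is deterministic, so no per-slot signaling is needed:
   once a field device is time-synchronized and knows a link's offset,
   it knows the channel for every future slot of that link.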
   Communications between devices are assigned a combination of
   timeslot and channel called a slotted-link.

   The routing protocol MUST also support different metric types for
   each slotted-link used to compute the path according to some
   objective function (e.g., minimize latency, maximize reliability,
   ...).

   Industrial application data flows between field devices are not
   necessarily symmetric.  The routing protocol MUST be able to set up
   routes that are directional.

3.1.  Configurable Application Requirement

   Time-varying user requirements for latency and bandwidth will
   require changes in the provisioning of the underlying L2 protocols.
   The wireless worker may initiate a bulk transfer to configure or
   diagnose a field device.  A level sensor device may need to perform
   a calibration and send a bulk file to a host.  The routing protocol
   MUST route on paths that are changed to appropriately provision for
   the application requirements.  The routing protocol MUST support the
   ability to recompute paths based on slotted-link characteristics
   that may change dynamically.

4.  Network Topology

   Network topology is very difficult to generalize, but networks of 10
   to 200 field devices with maximum hop counts from two to twenty
   cover the majority of existing applications.  It is assumed that the
   field devices themselves will provide routing capability for the
   network, and in most cases additional repeaters/routers will not be
   required.

   Timeslot size is about 10 ms, and timeslot synchronization
   requirements are on the order of +/-1 ms for non-critical process
   and control data (some L2 protocols provide/require tighter
   synchronization).  Wall clock time *accuracy* requirements vary
   substantially but are generally about 100 ms.  Some applications
   that time stamp data require 1 ms accuracy to determine the sequence
   of events reported.
   (Note that data time stamping does not translate to a latency
   requirement.)

   In low power and lossy network systems using the IEEE 802.15.4-2006
   2.4 GHz radio, the total raw throughput per radio is 250 kbps.
   10 ms timeslots reduce this to 101.6 kbps for maximum-sized packets.
   This constrains the typical throughput of a single-radio access
   point to less than 100 kbps.  Therefore, an access point with one
   IEEE 802.15.4 radio has a maximum aggregate throughput of 100
   packets per second and no more than about 100 kbps.

   A graph that connects a field device to a host application may have
   more than one access point.  The routing protocol MUST support
   multiple access points and load distribution when aggregate network
   throughputs need to exceed 100 kbps.  The routing protocol MUST
   support multiple access points when access point redundancy is
   required.

5.  Device-Aware Routing Requirements

   Wireless L2N nodes in industrial environments are powered by a
   variety of sources.  Battery-operated devices with lifetime
   requirements of at least 5 years are the most common.  Battery-
   operated devices have a cap on their total energy, typically can
   report some estimate of remaining energy, and typically do not have
   constraints on short-term average power consumption.  Energy-
   scavenging devices are more complex.  These systems contain both a
   power-scavenging device (such as solar, vibration, or temperature
   difference) and an energy storage device, such as a rechargeable
   battery or a capacitor.  Therefore, these systems have limits on
   both the long-term average power consumption (which cannot exceed
   the average scavenged power over the same interval) and the short-
   term limits imposed by the energy storage requirements.  For solar-
   powered systems, the energy storage system is generally designed to
   provide days of power in the absence of sunlight.
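   These two constraints can be checked with simple arithmetic (a
   sketch; the node power draw, panel output, and capacitor figures
   below are illustrative assumptions, not from this document):

```python
# Illustrative energy-budget check for a scavenging-powered field
# device.  All device-specific numbers are assumptions for the sketch.

avg_node_power_w = 100e-6        # assumed 100 uW average draw of the node
dark_days = 14                   # storage sized for two weeks without sun

# Long-term constraint: average scavenged power must cover average draw.
avg_scavenged_power_w = 1e-3     # assumed 1 mW average from a small panel
assert avg_scavenged_power_w >= avg_node_power_w

# Short-term constraint: storage must bridge the no-input interval.
required_storage_j = avg_node_power_w * dark_days * 86400
print(f"storage needed: {required_storage_j:.0f} J")   # 121 J

# By contrast, a 1 mF capacitor charged to an assumed 3 V stores only
# 0.5 * C * V^2 = 4.5 mJ, comparable to the cost of a few packet
# transmissions, so a capacitor-backed node cannot ride out long gaps.
cap_storage_j = 0.5 * 1e-3 * 3.0**2
```

   The orders-of-magnitude gap between the two storage figures is why
   battery-backed solar nodes and capacitor-backed vibration nodes
   present such different constraints to a routing protocol.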
   Many industrial sensors run off of a 4-20 mA current loop and can
   scavenge on the order of milliwatts from that source.  Vibration
   monitoring systems are a natural choice for vibration scavenging,
   which typically provides only tens or hundreds of microwatts.  Due
   to industrial temperature ranges and desired lifetimes, the choice
   of energy storage devices can be limited, and the resulting stored
   energy is often comparable to the energy cost of sending or
   receiving a packet rather than the energy of operating the node for
   several days.  And of course some nodes will be line-powered.

   Example 1: solar panel, lead-acid battery sized for two weeks of
   rain.

   Example 2: vibration scavenger, 1 mF tantalum capacitor.

   Field devices have limited resources.  Low power, low cost devices
   have limited memory for storing route information.  Typical field
   devices will be able to support only a finite number of routes for
   their embedded sensor/actuator application and for forwarding other
   devices' packets across mesh network slotted-links.

   Users may have strong preferences on lifetime that differ for the
   same device in different locations.  A sensor monitoring a non-
   critical parameter in an easily accessed location may have a shorter
   lifetime requirement, and tolerate more statistical variation, than
   a mission-critical sensor in a hard-to-reach place that requires
   shutdown of the plant to replace.

   The routing algorithm MUST support node-constrained routing (e.g.,
   taking into account the existing energy state as a node constraint).
   Node constraints include power and memory as well as constraints
   placed on the device by the user, such as battery life.

6.  Broadcast/Multicast

   Existing industrial host applications do not use broadcast or
   multicast addressing to communicate to field devices.  Unicast
   address support is sufficient.
   However, wireless field devices with communication controllers and
   protocol stacks will require control and configuration traffic, such
   as firmware downloads, that may benefit from broadcast and multicast
   addressing.

   The routing protocol SHOULD support broadcast and multicast
   addressing.

7.  Route Establishment Time

   Network connectivity in real deployments is always time-varying,
   with time constants from seconds to months.  Optimization is perhaps
   not the right word to use, in that network optimization will need to
   run continuously, and single-slotted-link failures that cause loss
   of connectivity are not likely to be tolerated.  Once the network is
   formed, it should never need to be "optimized" to a new
   configuration in response to a lost slotted-link.  The routing
   algorithm SHOULD NOT have to re-optimize in response to the loss of
   a slotted-link.  The routing algorithm SHOULD always be in the
   process of plesio-optimizing the system for the changing RF
   environment.  The routing algorithm MUST re-optimize the path when
   field devices change due to insertion, removal, or failure.

8.  Mobility

   Various economic factors have contributed to a reduction of trained
   workers in the plant.  The industry as a whole appears to be trying
   to solve this problem with what is called the "wireless worker".
   Carrying a PDA or something similar, this worker will be able to
   accomplish more work in less time than the older, better-trained
   workers that he or she replaces.  Whether or not the premise is
   valid, the use case is commonly presented: the worker will be
   wirelessly connected to the plant IT system to download
   documentation, instructions, etc., and will need to be able to
   connect "directly" to the sensors and control points in or near the
   equipment on which he or she is working.  It is possible that this
   "direct" connection could come via the normal L2N data collection
   network.
   This connection is likely to require higher bandwidth and lower
   latency than normal data collection operation.

   The routing protocol SHOULD support the wireless worker with fast
   network connection times of a few seconds and with low-latency
   command/response exchanges, both with hosts behind the access points
   and with field devices.  The routing protocol SHOULD also support
   configuring graphs for bulk transfers.  The routing protocol MUST
   support walking speeds for maintaining network connectivity as the
   handheld device changes position in the wireless network.

   Some field devices will be mobile.  These devices may be located on
   moving parts such as rotating components, or they may be located on
   vehicles such as cranes or fork lifts.  The routing protocol SHOULD
   support vehicular speeds of up to 35 km/h.

9.  Manageability and Ease of Use

   The process and control industry is manpower constrained.  The aging
   demographics of plant personnel are causing a looming manpower
   problem for industry across many markets.  The goal for industrial
   networks is to make the installation process require no new skills
   of the plant personnel.  Industrial customers do not even want to
   require the level of networking knowledge needed for do-it-yourself
   home network installations.

   The routing protocol for L2Ns must be easy to deploy and manage.  In
   a further revision of this document, metrics to measure ease of
   deployment for the routing protocol will be detailed.

10.  Security

   Wireless sensor networks in industrial automation operate in systems
   that have substantial financial and human safety implications, so
   security is of considerable concern.  Levels of security violation
   that are tolerated as a "cost of doing business" in the banking
   industry are not acceptable when, in some cases, literally thousands
   of lives may be at risk.
   Industrial wireless device manufacturers are specifying security at
   the MAC layer and the transport layer.  A shared "network key" is
   used to authenticate messages at the MAC layer.  At the transport
   layer, commands are encrypted with unique, randomly generated end-
   to-end session keys.  HART7 and ISA100.11a are examples of security
   systems for industrial wireless networks.

   Industrial plants may not maintain the same level of physical
   security for field devices that is associated with traditional
   network sites such as locked IT centers.  In industrial plants it
   must be assumed that the field devices have marginal physical
   security, and the security system needs to place limited trust in
   them.  The routing protocol SHOULD place limited trust in the field
   devices deployed in the plant network.

   The routing protocol SHOULD compartmentalize the trust placed in
   field devices so that a compromised field device does not destroy
   the security of the whole network.  The routing MUST be configured
   and managed using secure messages and protocols that prevent
   outsider attacks and limit insider attacks from field devices
   installed in insecure locations in the plant.

11.  Informative Reference

   [HART] "Highway Addressable Remote Transducer", a group of
   specifications for industrial process and control devices
   administered by the HART Foundation, www.hartcomm.org.

12.  IANA Considerations

   This document includes no request to IANA.

13.  Acknowledgements

14.  References

14.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

14.2.  Informative References

   [I-D.culler-rl2n-routing-reqs]
              Vasseur, J. and D. Cullerot, "Routing Requirements for
              Low Power And Lossy Networks",
              draft-culler-rl2n-routing-reqs-01 (work in progress),
              July 2007.

Authors' Addresses

   K. Pister
   Dust Networks
   30695 Huntwood Ave.
   Hayward, CA  94544
   USA

   Email: kpister@dustnetworks.com

   Rick Enns
   Dust Networks
   30695 Huntwood Ave.
   Hayward, CA  94544
   USA

   Email: renns@dustnetworks.com

   JP Vasseur
   Cisco Systems, Inc
   1414 Massachusetts Avenue
   Boxborough, MA  01719
   USA

   Email: jpv@cisco.com

   Pascal Thubert
   Cisco Systems, Inc
   Village d'Entreprises Green Side - 400, Avenue de Roumanille
   Sophia Antipolis, 06410

   Email: pthubert@cisco.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.
   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).