Networking Working Group                                       K. Pister
Internet-Draft                                                   R. Enns
Intended status: Informational                             Dust Networks
Expires: September 29, 2008                                   P. Thubert
                                                      Cisco Systems, Inc
                                                          March 28, 2008

Industrial Routing Requirements in Low Power and Lossy Networks
draft-pister-roll-indus-routing-reqs-00

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on September 29, 2008.

Abstract

Wireless, low power field devices enable industrial users to significantly increase the amount of information collected and the number of control points that can be remotely managed.  The deployment of these wireless devices will significantly improve the productivity and safety of plants while increasing the efficiency of plant workers.  For wireless devices to have a significant advantage over wired devices in an industrial environment, the wireless network needs to have three qualities: low power, high reliability, and easy installation and maintenance.  The aim of this document is to analyze the requirements for the routing protocol used for low power and lossy networks (L2N) in industrial environments.
Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

1.  Terminology
2.  Introduction
    2.1.  Applications and Traffic Patterns
3.  Quality of Service (QoS) Routing Requirements
    3.1.  Configurable Application Requirement
4.  Network Topology
5.  Device-Aware Routing Requirements
6.  Broadcast/Multicast
7.  Route Establishment Time
8.  Mobility
9.  Manageability and Ease Of Use
10. Security
11. Informative Reference
12. IANA Considerations
13. Acknowledgements
14. References
    14.1.  Normative References
    14.2.  Informative References
Authors' Addresses
Intellectual Property and Copyright Statements

1.  Terminology

Access Point: An infrastructure device that connects the low power and lossy network system to a plant's backbone network.

Actuator: A field device that moves or controls plant equipment.
Channel Hopping: An algorithm by which field devices synchronously change channels during operation.

Channel: An RF frequency band used to transmit a modulated signal carrying packets.

Closed Loop Control: A process whereby a host application controls an actuator based on information sensed by field devices.

Downstream: The data direction traveling from the host application to the field device.

Field Device: A physical device placed in the plant's operating environment (both RF and environmental).  Field devices include sensors and actuators as well as network routing devices and access points in the plant.

Superframe: A collection of timeslots repeating at a constant rate.

HART: "Highway Addressable Remote Transducer", a group of specifications for industrial process and control devices administered by the HART Foundation (see [HART]).  The latest version of the specifications is HART7, which includes the additions for WirelessHART.

Host Application: A process running in the plant that communicates with field devices to perform tasks that may include control, monitoring, and data gathering.

ISA: "International Society of Automation".  ISA is an ANSI-accredited standards-making society.  ISA100 is an ISA working group whose charter includes defining a family of standards for industrial automation.  ISA100.11a is a work group within ISA100 that is working on a standard for non-critical process and control applications.

L2N: Low Power and Lossy Network.

Slotted-Link: A data structure, associated with a superframe, that describes a connection between field devices and comprises a timeslot assignment plus channel and usage information.

Open Loop Control: A process whereby a plant technician controls an actuator over the network, where the decision is influenced by information sensed by field devices.
Timeslot: A fixed time interval that may be used for the transmission or reception of a packet between two field devices.  A timeslot used for communications is associated with a slotted-link.

Upstream: The data direction traveling from the field device to the host application.

RF: Radio Frequency.

Sensor: A field device that measures data and/or detects an event.

RL2N: Routing in Low power and Lossy Networks.

2.  Introduction

Wireless, low power field devices enable industrial users to significantly increase the amount of information collected and the number of control points that can be remotely managed.  The deployment of these wireless devices will significantly improve the productivity and safety of plants while increasing the efficiency of plant workers.

Wireless field devices enable the expansion of networked points by appreciably reducing the cost of installing a device.  The cost reductions come from eliminating cabling costs and from simplified planning.  Cabling for a field device can run from $100s/ft to $1,000s/ft depending on the safety regulations of the plant.  Cabling also carries an overhead cost associated with planning the installation, where the cable has to run, and the various organizations that have to coordinate its deployment.  Doing away with the network and power cables reduces the planning and administrative overhead of installing a device.

For wireless devices to have a significant advantage over wired devices in an industrial environment, the wireless network needs to have three qualities: low power, high reliability, and easy installation and maintenance.  The routing protocol used for low power and lossy networks (L2N) is important to fulfilling these goals.

Industrial automation is segmented into two distinct application spaces, known as "process" or "process control" and "discrete manufacturing" or "factory automation".
In industrial process control, the product is typically a fluid (oil, gas, chemicals, ...).  In factory automation or discrete manufacturing, the products are individual elements (screws, cars, dolls).  While there is some overlap of products and systems between these two segments, they are surprisingly separate communities.  The specifications targeting industrial process control tend to have more tolerance for network latency than what is needed for factory automation.

Both application spaces desire battery-operated networks of hundreds of sensors and actuators communicating with wired access points.  In an oil refinery, the total number of devices is likely to exceed one million, but the devices will be clustered into smaller networks reporting to existing wired infrastructure.

Existing wired sensor networks in this space typically use communication protocols with low data rates - from 1,200 baud (wired HART) up to the one to two hundred kbps range for most of the others.  The existing protocols are often master/slave with command/response.

Note that the total low power and lossy network system capacity for devices using the IEEE802.15.4-2006 2.4 GHz radio is at most 1.6 Mbps when spatial reuse of channels is not utilized.

2.1.  Applications and Traffic Patterns

The industrial market classifies process applications into three broad categories and six classes.
o  Safety

   *  Class 0: Emergency action - always a critical function

o  Control

   *  Class 1: Closed loop regulatory control - often a critical function

   *  Class 2: Closed loop supervisory control - usually a non-critical function

   *  Class 3: Open loop control - the operator takes action and controls the actuator (human in the loop)

o  Monitoring

   *  Class 4: Alerting - short-term operational effect (for example, event-based maintenance)

   *  Class 5: Logging and downloading / uploading - no immediate operational consequence (e.g., history collection, sequence-of-events, preventive maintenance)

Critical functions affect the basic safety or integrity of the plant.  Timely delivery of messages becomes more important as the class number decreases.

Industrial customers are interested in deploying wireless networks for the monitoring classes 4 and 5 and for the non-critical portions of classes 3 through 1.

Classes 4 and 5 also include equipment monitoring, which is, strictly speaking, separate from process monitoring.  An example of equipment monitoring is the recording of motor vibrations to detect bearing wear.

Most low power and lossy network systems in the near future will be used for low frequency data collection.  Packets containing samples will be generated continuously, and 90% of the market is covered by packet rates of between 1/s and 1/hour, with the average under 1/min.  In industrial process applications these sensors include temperature, pressure, fluid flow, tank level, and corrosion.  Some sensors are bursty, such as vibration monitors, which may generate and transmit tens of kilobytes (hundreds to thousands of packets) of time-series data at reporting rates of minutes to days.

Almost all of these sensors will have built-in microprocessors that may detect alarm conditions.
Time-critical alarm packets are expected to have lower latency than sensor data, often requiring substantially more bandwidth.

Some devices will transmit a log file every day, again with typically tens of kilobytes of data.  For these applications there is very little "downstream" traffic coming from the access point and traveling to particular sensors.  During diagnostics, however, a technician may be investigating a fault from a control room and expect to have "low" (human-tolerable) latency in a command/response mode.

Low-rate control, often with a "human in the loop" or "open loop", is implemented today via communication to a centralized controller, i.e., sensor data makes its way through the access point to the centralized controller where it is processed, the operator sees the information and takes action, and control information is sent out to the actuator node in the network.

In the future, it is envisioned that some open loop processes will be automated (closed loop) and packets will flow over local loops without involving the access point.  These closed loop controls for non-critical applications will be implemented on L2Ns.  Non-critical closed loop applications have a latency requirement that can be as low as 100 ms, but many control loops are tolerant of latencies above 1 s.

In critical control, latencies of tens of milliseconds are typical.  In many of these systems, if a packet does not arrive within the specified interval, the system will enter an emergency shutdown state, often with substantial financial repercussions.  For a 1 second control loop in a system with a mean time between shutdowns target of 30 years, the latency requirement implies nine 9s of reliability.

Thus, the routing protocol for L2Ns MUST support multi-topology routing (this is especially important for critical control applications).
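The nine-9s figure above can be checked with quick arithmetic; this sketch uses the numbers from the text, plus our own simplifying assumption that any single missed deadline causes a shutdown:

```python
import math

# One control packet per one-second loop; mean time between shutdowns
# target of 30 years; (assumption) a single missed deadline shuts down.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
loops_per_shutdown = 30 * SECONDS_PER_YEAR   # ~9.5e8 one-second loops

# Tolerable per-loop failure probability and the implied reliability.
p_fail = 1.0 / loops_per_shutdown            # ~1.1e-9
reliability = 1.0 - p_fail

nines = -math.log10(p_fail)                  # number of leading 9s
print(f"per-loop reliability ~ {reliability:.12f} ({nines:.1f} nines)")
```

Rounding to nine nines: the 30-year target over one-second loops allows roughly one lost deadline per billion deliveries.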
The routing protocol MUST provide the ability to color slotted-links (where a color corresponds to a user-defined slotted-link attribute); colors can then be used to include or exclude slotted-links from a logical topology.

For all but the most latency-tolerant applications, route discovery is likely to be too slow a process to initiate when a route failure is detected.

The routing protocol MUST support multiple paths (a tree-based solution is not sufficient).

3.  Quality of Service (QoS) Routing Requirements

The industrial applications fall into four large service categories:

1.  Published data.  Data that is generated periodically and has a well-understood data bandwidth requirement.  The end-to-end latency of this data is not as important as the regularity with which it is presented to the host application.

2.  Event data.  This category includes alarms and aperiodic data reports with bursty data bandwidth requirements.

3.  Client/Server.  Many industrial applications are based on a client/server model and implement a command/response protocol.  The data bandwidth required is often bursty.  The round trip latency for some operations can be 200 ms.

4.  Bulk transfer.  Bulk transfers involve the transmission of blocks of data in multiple packets, where temporary resources are assigned to meet a transaction time constraint.  Bulk transfers assign resources for a limited period of time to meet the QoS requirements.

For industrial applications, QoS parameters include:

o  Data bandwidth - periodic, burst statistics.

o  Latency - the time taken for the data to transit the network from the source to the destination.  This may be expressed in terms of a deadline for delivery.

o  Transmission phase - process applications can be synchronized to wall clock time and require coordinated transmissions.  A common coordination frequency is 4 Hz (250 ms).
o  Reliability - the end-to-end data delivery statistic.  All applications have latency and reliability requirements.  In industrial applications, these vary over many orders of magnitude.  Some non-critical monitoring applications may tolerate latencies of days and reliability of less than 90%.  Most monitoring latencies will be in seconds to minutes, and industrial standards such as HART7 have set user reliability expectations at 99.9%.  Regulatory requirements are a driver for some industrial applications.  Regulatory monitoring requires high data integrity because lost data is assumed to be out of compliance and subject to fines.  This can drive reliability requirements to higher than 99.9%.

o  QoS contract type - revocation priority.  L2Ns have limited network resources that can vary with time.  This means the system can become fully subscribed or even oversubscribed.  System policies determine how resources are allocated when the system is oversubscribed.  The choices are blocking and graceful degradation.

o  Transmission priority - within field devices there are limited resources that need to be allocated across multiple services.  For transmissions, a device has to select which packet in its queue will be sent at the next transmission opportunity.  Packet priority is used as one criterion for selecting the next packet.  For reception, a device has to decide how to store a received packet.  The field devices are memory constrained, and receive buffers may become full.  Packet priority is used to select which packets are stored or discarded.

In industrial wireless L2Ns, a time slotting technology is used.  A time slotted media access protocol synchronizes channel hopping, which is one of the means used to make the wireless network reliable.  Timeslots are also employed to reduce power by minimizing the active duty cycle of field devices.
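Synchronized channel hopping of the kind described here can be sketched in a few lines; the hop formula and the use of the 16 channels of the 2.4 GHz band are illustrative assumptions (real MACs define their own schedules), not part of this document's requirements:

```python
# Illustrative slotted channel hopping: in each timeslot, the channel
# used by a pair of devices is derived from the absolute slot number
# (ASN) and a per-link offset, so both ends hop in lockstep without
# exchanging any messages.

HOP_SEQUENCE = list(range(11, 27))   # IEEE 802.15.4 2.4 GHz channels 11-26

def channel_for_slot(asn: int, channel_offset: int) -> int:
    """Channel for a given absolute slot number and link offset."""
    return HOP_SEQUENCE[(asn + channel_offset) % len(HOP_SEQUENCE)]

# Two devices sharing a link with offset 3 agree on the channel for
# the first four slots:
print([channel_for_slot(asn, 3) for asn in range(4)])   # [14, 15, 16, 17]
```

Because the channel is a pure function of slot number and offset, a lost packet costs only that slot; the pair is automatically aligned again on a different channel at the next opportunity.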
Communications between devices are assigned a combination of timeslot and channel assignment called a slotted-link.

The routing protocol MUST also support different metric types for each slotted-link, used to compute the path according to some objective function (e.g., minimize latency, maximize reliability, ...).

Industrial application data flows between field devices are not necessarily symmetric.  The routing protocol MUST be able to set up routes that are directional.

3.1.  Configurable Application Requirement

Time-varying user requirements for latency and bandwidth will require changes in the provisioning of the underlying L2 protocols.  A wireless worker may initiate a bulk transfer to configure or diagnose a field device.  A level sensor device may need to perform a calibration and send a bulk file to a host.  The routing protocol MUST route on paths that are changed to appropriately provision the application requirements.  The routing protocol MUST support the ability to recompute paths based on slotted-link characteristics that may change dynamically.

4.  Network Topology

Network topology is difficult to generalize, but networks of 10 to 200 field devices with maximum hop counts from two to twenty cover the majority of existing applications.  It is assumed that the field devices themselves will provide routing capability for the network, and in most cases additional repeaters/routers will not be required.

Timeslot size is about 10 ms, and timeslot synchronization requirements are on the order of +/-1 ms for non-critical process and control data (some L2 protocols provide/require tighter synchronization).  Wall clock time *accuracy* requirements vary substantially, but are generally about 100 ms.  Some applications that time stamp data require 1 ms accuracy to determine the sequence of events reported.
(Note that data time stamping does not translate to a latency requirement.)

In low power and lossy network systems using the IEEE802.15.4-2006 2.4 GHz radio, the total raw throughput per radio is 250 kbps.  10 ms timeslots reduce this to 101.6 kbps for maximum-sized packets (one 127-byte packet per 10 ms timeslot is 12,700 bytes/s, i.e., 101.6 kbps).  This constrains the typical throughput of a single-radio access point to less than 100 kbps.  Therefore an access point with one IEEE 802.15.4 radio has a maximum aggregate throughput of 100 packets per second and no more than about 100 kbps.

A graph that connects a field device to a host application may have more than one access point.  The routing protocol MUST support multiple access points and load distribution when aggregate network throughputs need to exceed 100 kbps.  The routing protocol MUST support multiple access points when access point redundancy is required.

5.  Device-Aware Routing Requirements

Wireless L2N nodes in industrial environments are powered by a variety of sources.  Battery operated devices with lifetime requirements of at least 5 years are the most common.  Battery operated devices have a cap on their total energy, typically can report some estimate of remaining energy, and typically do not have constraints on their short term average power consumption.  Energy scavenging devices are more complex.  These systems contain both a power scavenging device (such as solar, vibration, or temperature difference) and an energy storage device, such as a rechargeable battery or a capacitor.  Therefore these systems have limits on both the long term average power consumption (which cannot exceed the average scavenged power over the same interval) and the short-term consumption imposed by the energy storage device.  For solar-powered systems, the energy storage system is generally designed to provide days of power in the absence of sunlight.
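The two scavenging constraints just described (long-term average consumption bounded by scavenged power, short-term bursts bounded by stored energy) can be sketched as a simple feasibility check.  The class and the 3 V capacitor voltage below are illustrative assumptions; the scavenger and capacitor figures echo Example 2 later in this section:

```python
from dataclasses import dataclass

@dataclass
class ScavengedNode:
    """Illustrative power budget for an energy-scavenging field device."""
    scavenged_power_w: float   # long-term average harvested power
    stored_energy_j: float     # usable energy in the storage device

    def can_sustain(self, avg_power_w: float) -> bool:
        # Long-term constraint: average consumption cannot exceed the
        # average scavenged power over the same interval.
        return avg_power_w <= self.scavenged_power_w

    def can_burst(self, burst_energy_j: float) -> bool:
        # Short-term constraint: a burst (e.g. a bulk transfer) must fit
        # within the energy currently held in storage.
        return burst_energy_j <= self.stored_energy_j

# Vibration scavenger (~100 uW) with a 1 mF capacitor charged to an
# assumed 3 V, i.e. E = C*V^2/2 = 4.5 mJ of stored energy.
vib = ScavengedNode(scavenged_power_w=100e-6,
                    stored_energy_j=0.5 * 1e-3 * 3.0**2)

print(vib.can_sustain(50e-6))   # a 50 uW average load is within budget
print(vib.can_burst(0.010))     # a 10 mJ burst exceeds storage
```

A device-aware routing protocol would consult exactly this kind of budget when deciding how much forwarding load a node can accept.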
Many industrial sensors run off of a 4-20 mA current loop and can scavenge on the order of milliwatts from that source.  Vibration monitoring systems are a natural choice for vibration scavenging, which typically provides only tens or hundreds of microwatts.  Due to industrial temperature ranges and desired lifetimes, the choices of energy storage devices can be limited, and the resulting stored energy is often comparable to the energy cost of sending or receiving a packet rather than the energy of operating the node for several days.  And of course some nodes will be line-powered.

Example 1: solar panel, lead-acid battery sized for two weeks of rain.

Example 2: vibration scavenger, 1 mF tantalum capacitor.

Field devices have limited resources.  Low power, low cost devices have limited memory for storing route information.  Typical field devices will have a finite number of routes they can support for their embedded sensor/actuator application and for forwarding other devices' packets in a mesh network.

Users may have strong preferences on lifetime that differ for the same device in different locations.  A sensor monitoring a non-critical parameter in an easily accessed location may have a shorter lifetime requirement and tolerate more statistical variation than a mission-critical sensor in a hard-to-reach place that requires shutdown of the plant to replace.

The routing algorithm MUST support node-constrained routing (e.g., taking into account the existing energy state as a node constraint).  Node constraints include power and memory as well as constraints placed on the device by the user, such as battery life.

6.  Broadcast/Multicast

Existing industrial host applications do not use broadcast or multicast addressing to communicate with field devices.  Unicast address support is sufficient.
However, wireless field devices with communication controllers and protocol stacks will require control and configuration, such as firmware downloading, that may benefit from broadcast and multicast addressing.

The routing protocol SHOULD support broadcast and multicast addressing.

7.  Route Establishment Time

Network connectivity in real deployments is always time-varying, with time constants from seconds to months.  Optimization is perhaps not the right word to use, in that network optimization will need to run continuously, and single slotted-link failures that cause loss of connectivity are not likely to be tolerated.  Once the network is formed, it should never need to be re-optimized to a new configuration in response to a lost slotted-link.  The routing algorithm SHOULD NOT have to re-optimize in response to the loss of a slotted-link.  The routing algorithm SHOULD always be in the process of plesio-optimizing the system for the changing RF environment.  The routing algorithm MUST re-optimize the path when field devices change due to insertion, removal, or failure.

8.  Mobility

Various economic factors have contributed to a reduction of trained workers in the plant.  The industry as a whole appears to be trying to solve this problem with what is called the "wireless worker".  Carrying a PDA or something similar, this worker will be able to accomplish more work in less time than the older, better-trained workers that he or she replaces.  Whether or not the premise is valid, the use case is commonly presented: the worker will be wirelessly connected to the plant IT system to download documentation, instructions, etc., and will need to be able to connect "directly" to the sensors and control points in or near the equipment on which he or she is working.  It is possible that this "direct" connection could come via the normal L2N data collection network.
This connection is likely to require higher bandwidth and lower latency than the normal data collection operation.

The routing protocol SHOULD support the wireless worker with fast network connection times of a few seconds, and with low-latency command/response exchanges both with hosts behind the access points and with field devices.  The routing protocol SHOULD also support configuring graphs for bulk transfers.  The routing protocol MUST support walking speeds for maintaining network connectivity as the handheld device changes position in the wireless network.

Some field devices will be mobile.  These devices may be located on moving parts such as rotating components, or they may be located on vehicles such as cranes or fork lifts.  The routing protocol SHOULD support vehicular speeds of up to 35 kmph.

9.  Manageability and Ease Of Use

The process and control industry is manpower constrained.  The aging demographics of plant personnel are causing a looming manpower problem for industry across many markets.  The goal for industrial networks is for the installation process to require no new skills of the plant personnel.  Industrial customers do not even want to require the current level of networking knowledge needed for do-it-yourself home network installations.

The routing protocol for L2Ns must be easy to deploy and manage.  In a further revision of this document, metrics to measure ease of deployment for the routing protocol will be detailed.

10.  Security

Wireless sensor networks in industrial automation operate in systems that have substantial financial and human safety implications; security is therefore of considerable concern.  Levels of security violation that are tolerated as a "cost of doing business" in the banking industry are not acceptable when, in some cases, literally thousands of lives may be at risk.
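The two-layer keying model used in this section (a shared network key authenticating frames hop by hop, plus per-session end-to-end keys) can be sketched as follows.  This is an illustration only: all names are ours, authentication stands in for encryption, and the HMAC construction is an assumption, since real deployments such as WirelessHART use AES-CCM at both layers:

```python
import hashlib
import hmac
import secrets

NETWORK_KEY = secrets.token_bytes(16)      # shared by all field devices

def mac_tag(frame: bytes) -> bytes:
    # MAC-layer authentication with the shared network key: any member
    # of the network can verify the frame, but not which member sent it.
    return hmac.new(NETWORK_KEY, frame, hashlib.sha256).digest()[:4]

# End-to-end protection: a randomly generated per-pair session key, so
# a compromised neighbor relaying the frame cannot forge commands.
session_key = secrets.token_bytes(16)

command = b"SET_VALVE 42"
e2e_tag = hmac.new(session_key, command, hashlib.sha256).digest()[:4]
frame = command + e2e_tag                  # transport payload
on_air = frame + mac_tag(frame)            # what actually goes on air

# A forwarding device checks only the hop-level tag; the destination,
# holding session_key, additionally checks the end-to-end tag.
body, tag = on_air[:-4], on_air[-4:]
print(hmac.compare_digest(tag, mac_tag(body)))   # True
```

The split matters for the trust requirements below: intermediate routers need only the network key, so compromising one of them exposes neither the commands nor the ability to impersonate the host application.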
Industrial wireless device manufacturers are specifying security at the MAC layer and the transport layer.  A shared "network key" is used to authenticate messages at the MAC layer.  At the transport layer, commands are encrypted with unique, randomly generated end-to-end session keys.  HART7 and ISA100.11a are examples of security systems for industrial wireless networks.

Industrial plants may not maintain the same level of physical security for field devices that is associated with traditional network sites such as locked IT centers.  In industrial plants it must be assumed that the field devices have marginal physical security, and the security system needs to place limited trust in them.  The routing protocol SHOULD place limited trust in the field devices deployed in the plant network.

The routing protocol SHOULD compartmentalize the trust placed in field devices so that a compromised field device does not destroy the security of the whole network.  The routing protocol MUST be configured and managed using secure messages and protocols that prevent outsider attacks and limit insider attacks from field devices installed in insecure locations in the plant.

11.  Informative Reference

[HART]  "Highway Addressable Remote Transducer", a group of specifications for industrial process and control devices administered by the HART Foundation, www.hartcomm.org.

12.  IANA Considerations

This document includes no request to IANA.

13.  Acknowledgements

14.  References

14.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

14.2.  Informative References

[I-D.culler-rl2n-routing-reqs]  Vasseur, J. and D. Cullerot, "Routing Requirements for Low Power And Lossy Networks", draft-culler-rl2n-routing-reqs-01 (work in progress), July 2007.
Authors' Addresses

Kris Pister
Dust Networks
30695 Huntwood Ave.
Hayward, 94544
USA

Email: kpister@dustnetworks.com

Rick Enns
Dust Networks
30695 Huntwood Ave.
Hayward, 94544
USA

Email: enns@stanfordalumni.org

Pascal Thubert
Cisco Systems, Inc
Village d'Entreprises Green Side - 400, Avenue de Roumanille
Sophia Antipolis, 06410

Email: pthubert@cisco.com

Full Copyright Statement

Copyright (C) The IETF Trust (2008).

This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights.  Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.

The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard.  Please address the information to the IETF at ietf-ipr@ietf.org.