1 Networking Working Group J. Martocci, Ed. 2 Internet-Draft Johnson Controls Inc. 3 Intended status: Informational Pieter De Mil 4 Expires: February 7, 2010 Ghent University IBCN 5 W. Vermeylen 6 Arts Centre Vooruit 7 Nicolas Riou 8 Schneider Electric 9 August 7, 2009 11 Building Automation Routing Requirements in Low Power and Lossy 12 Networks 13 draft-ietf-roll-building-routing-reqs-06 15 Status of this Memo 17 This Internet-Draft is submitted to IETF in full conformance with the 18 provisions of BCP 78 and BCP 79. 20 Internet-Drafts are working documents of the Internet Engineering 21 Task Force (IETF), its areas, and its working groups. Note that 22 other groups may also distribute working documents as Internet- 23 Drafts. 25 Internet-Drafts are draft documents valid for a maximum of six months 26 and may be updated, replaced, or obsoleted by other documents at any 27 time.
It is inappropriate to use Internet-Drafts as reference 28 material or to cite them other than as "work in progress." 30 The list of current Internet-Drafts can be accessed at 31 http://www.ietf.org/ietf/1id-abstracts.txt. 33 The list of Internet-Draft Shadow Directories can be accessed at 34 http://www.ietf.org/shadow.html. 36 This Internet-Draft will expire on February 7, 2010. 38 Copyright Notice 40 Copyright (c) 2009 IETF Trust and the persons identified as the 41 document authors. All rights reserved. 43 This document is subject to BCP 78 and the IETF Trust's Legal 44 Provisions Relating to IETF Documents in effect on the date of 45 publication of this document (http://trustee.ietf.org/license-info). 46 Please review these documents carefully, as they describe your rights 47 and restrictions with respect to this document. 49 Abstract 51 The Routing Over Low power and Lossy network (ROLL) Working Group has 52 been chartered to work on routing solutions for Low Power and Lossy 53 networks (LLN) in various markets: Industrial, Commercial (Building), 54 Home and Urban networks. Pursuant to this effort, this document 55 defines the IPv6 routing requirements for building automation. 57 Requirements Language 59 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 60 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 61 document are to be interpreted as described in (RFC2119). 63 Table of Contents 65 1. Terminology....................................................4 66 2. Introduction...................................................4 67 3. Overview of Building Automation Networks.......................5 68 3.1. Introduction..............................................5 69 3.2. Building Systems Equipment................................6 70 3.2.1. Sensors/Actuators....................................6 71 3.2.2. Area Controllers.....................................7 72 3.2.3. Zone Controllers.....................................7 73 3.3. Equipment Installation Methods............................7 74 3.4. Device Density............................................8 75 3.4.1. HVAC Device Density..................................8 76 3.4.2. Fire Device Density..................................9 77 3.4.3. Lighting Device Density..............................9 78 3.4.4. Physical Security Device Density.....................9 79 4. Traffic Pattern................................................9 80 5. Building Automation Routing Requirements......................11 81 5.1. Device and Network Commissioning.........................11 82 5.1.1. Zero-Configuration Installation.....................12 83 5.1.2. Local Testing.......................................12 84 5.1.3. Device Replacement..................................12 85 5.2. Scalability..............................................12 86 5.2.1. Network Domain......................................13 87 5.2.2. Peer-to-Peer Communication..........................13 88 5.3. Mobility.................................................13 89 5.3.1. Mobile Device Requirements..........................13 90 5.4. Resource Constrained Devices.............................14 91 5.4.1. Limited memory footprint on host devices............14 92 5.4.2. Limited Processing Power for routers................14 93 5.4.3. Sleeping Devices....................................14 94 5.5. Addressing...............................................15 95 5.6. Manageability............................................15 96 5.6.1. 
Diagnostics.........................................15 97 5.6.2. Route Tracking......................................16 98 5.7. Route Selection..........................................16 99 5.7.1. Route Cost..........................................16 100 5.7.2. Route Adaptation....................................16 101 5.7.3. Route Redundancy....................................16 102 5.7.4. Route Discovery Time................................16 103 5.7.5. Route Preference....................................17 104 5.7.6. Real-time Performance Measures......................17 105 5.7.7. Prioritized Routing.................................17 106 5.8. Security Requirements....................................17 107 5.8.1. Authentication......................................18 108 5.8.2. Encryption..........................................18 109 5.8.3. Disparate Security Policies.........................18 110 5.8.4. Routing Security Policies To Sleeping Devices.......18 111 6. IANA Considerations...........................................19 112 7. Acknowledgments...............................................19 113 8. References....................................................19 114 8.1. Normative References.....................................19 115 8.2. Informative References...................................19 116 9. Appendix A: Additional Building Requirements..................19 117 9.1. Additional Commercial Product Requirements...............20 118 9.1.1. Wired and Wireless Implementations..................20 119 9.1.2. World-wide Applicability............................20 120 9.2. Additional Installation and Commissioning Requirements...20 121 9.2.1. Unavailability of an IP network.....................20 122 9.3. Additional Network Requirements..........................20 123 9.3.1. TCP/UDP.............................................20 124 9.3.2. Interference Mitigation.............................20 125 9.3.3. Packet Reliability..................................20 126 9.3.4. Merging Commissioned Islands........................21 127 9.3.5. Adjustable Routing Table Sizes......................21 128 9.3.6. Automatic Gain Control..............................21 129 9.3.7. Device and Network Integrity........................21 130 9.4. Additional Performance Requirements......................21 131 9.4.1. Data Rate Performance...............................21 132 9.4.2. Firmware Upgrades...................................22 133 9.4.3. Route Persistence...................................22 135 1. Terminology 137 For a description of the terminology used in this specification, please 138 see [I-D.ietf-roll-terminology]. 140 2. Introduction 142 The Routing Over Low power and Lossy network (ROLL) Working Group has 143 been chartered to work on routing solutions for Low Power and Lossy 144 networks (LLN) in various markets: Industrial, Commercial (Building), 145 Home and Urban networks. Pursuant to this effort, this document 146 defines the IPv6 routing requirements for building automation. 148 Commercial buildings have been fitted with pneumatic and subsequently 149 electronic communication pathways connecting sensors to their 150 controllers for over one hundred years. Recent economic and 151 technical advances in wireless communication allow facilities to 152 increasingly utilize a wireless solution in lieu of a wired solution, 153 thereby reducing installation costs while maintaining highly reliable 154 communication.
156 The cost benefits and ease of installation of wireless sensors allow 157 customers to further instrument their facilities with additional 158 sensors, providing tighter control while yielding increased energy 159 savings. 161 Wireless solutions will be adapted from their existing wired 162 counterparts in many of the building applications including, but not 163 limited to, Heating, Ventilation, and Air Conditioning (HVAC), 164 Lighting, Physical Security, Fire, and Elevator systems. These 165 devices will be developed to reduce installation costs while 166 increasing installation and retrofit flexibility, as well as 167 increasing the sensing fidelity to improve efficiency and building 168 service quality. 170 Sensing devices may be battery-less, battery powered, or mains powered. 171 Actuators and area controllers will be mains powered. Due to 172 building code and/or device density (e.g. equipment room), it is 173 envisioned that a mix of wired and wireless sensors and actuators 174 will be deployed within a building. 176 Facility Management Systems (FMS) are deployed in a large set of 177 vertical markets including universities; hospitals; government 178 facilities; Kindergarten through High School (K-12); pharmaceutical 179 manufacturing facilities; and single-tenant or multi-tenant office 180 buildings. These buildings range in size from 100K sqft structures (5 181 story office buildings), to 1M sqft skyscrapers (100 182 stories), to complex government facilities such as the Pentagon. 183 The described topology is meant to be the model to be used in all 184 these types of environments, but clearly must be tailored to the 185 building class, building tenant and vertical market being served. 187 Section 3 describes the necessary background to understand the 188 context of building automation including the sensor, actuator, area 189 controller and zone controller layers of the topology; typical device 190 density; and installation practices. 192 Section 4 defines the traffic flow of the aforementioned sensors, 193 actuators and controllers in commercial buildings. 195 Section 5 defines the full set of IPv6 routing requirements for 196 commercial buildings. 198 Appendix A documents important commercial building requirements that 199 are out of scope for routing yet will be essential to the final 200 acceptance of the protocols used within the building. 202 Section 3 and Appendix A are mainly included for educational 203 purposes. 205 The expressed aim of this document is to provide the set of IPv6 206 routing requirements for LLNs in buildings as described in Section 5. 208 3. Overview of Building Automation Networks 210 3.1. Introduction 212 To understand the network systems requirements of a facility 213 management system in a commercial building, this document uses a 214 framework to describe the basic functions and composition of the 215 system. An FMS is a hierarchical system of sensors, actuators, 216 controllers and user interface devices that interoperate to provide a 217 safe and comfortable environment while constraining energy costs. 219 An FMS is typically divided functionally across similar, but distinct, 220 building subsystems such as heating, ventilation and air conditioning 221 (HVAC); Fire; Security; Lighting; Shutters and Elevator control 222 systems as denoted in Figure 1. 224 Much of the makeup of an FMS is optional and installed at the behest 225 of the customer. Sensors and actuators have no standalone 226 functionality.
All other devices support partial or complete 227 standalone functionality. These devices can optionally be tethered 228 to form a more cohesive system. The customer requirements dictate 229 the level of integration within the facility. This architecture 230 provides excellent fault tolerance since each node is designed to 231 operate in an independent mode if the higher layers are unavailable. 233 +------+ +-----+ +------+ +------+ +------+ +------+ 235 Bldg App'ns | | | | | | | | | | | | 237 | | | | | | | | | | | | 239 Building Cntl | | | | | S | | L | | S | | E | 241 | | | | | E | | I | | H | | L | 243 Area Control | H | | F | | C | | G | | U | | E | 245 | V | | I | | U | | H | | T | | V | 247 Zone Control | A | | R | | R | | T | | T | | A | 249 | C | | E | | I | | I | | E | | T | 251 Actuators | | | | | T | | N | | R | | O | 253 | | | | | Y | | G | | S | | R | 255 Sensors | | | | | | | | | | | | 257 +------+ +-----+ +------+ +------+ +------+ +------+ 259 Figure 1: Building Systems and Devices 261 3.2. Building Systems Equipment 263 3.2.1. Sensors/Actuators 265 As Figure 1 indicates an FMS may be composed of many functional 266 stacks or silos that are interoperably woven together via Building 267 Applications. Each silo has an array of sensors that monitor the 268 environment and actuators that effect the environment as determined 269 by the upper layers of the FMS topology. The sensors typically are 270 at the edge of the network structure providing environmental data 271 into the system. The actuators are the sensors' counterparts 272 modifying the characteristics of the system based on the sensor data 273 and the applications deployed. 275 3.2.2. Area Controllers 277 An area describes a small physical locale within a building, 278 typically a room. HVAC (temperature and humidity) and Lighting (room 279 lighting, shades, solar loads) vendors oft times deploy area 280 controllers. Area controls are fed by sensor inputs that monitor the 281 environmental conditions within the room. Common sensors found in 282 many rooms that feed the area controllers include temperature, 283 occupancy, lighting load, solar load and relative humidity. Sensors 284 found in specialized rooms (such as chemistry labs) might include air 285 flow, pressure, CO2 and CO particle sensors. Room actuation includes 286 temperature setpoint, lights and blinds/curtains. 288 3.2.3. Zone Controllers 290 Zone Control supports a similar set of characteristics as the Area 291 Control albeit to an extended space. A zone is normally a logical 292 grouping or functional division of a commercial building. A zone may 293 also coincidentally map to a physical locale such as a floor. 295 Zone Control may have direct sensor inputs (smoke detectors for 296 fire), controller inputs (room controllers for air-handlers in HVAC) 297 or both (door controllers and tamper sensors for security). Like 298 area/room controllers, zone controllers are standalone devices that 299 operate independently or may be attached to the larger network for 300 more synergistic control. 302 3.3. Equipment Installation Methods 304 Commercial controllers have been traditionally deployed in a facility 305 using serial media following the EIA-485 electrical standard 306 operating nominally at 76800 baud with distances upward to 15000 307 feet. EIA-485 is a multi-drop media allowing upwards to 255 devices 308 to be connected to a single trunk. 
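As a rough illustration only (not a requirement), the following Python sketch estimates how long a full poll of a maximally loaded EIA-485 trunk would take. Only the 76800 baud rate and the 255-device limit come from the paragraph above; the 10-bits-per-octet framing overhead and the 200-octet request/response exchange per device are assumptions added here purely for scale.

   # Illustrative only: rough poll-cycle estimate for a legacy EIA-485 trunk.
   # The framing overhead and per-device exchange size are assumptions.

   BAUD = 76800           # bits per second on the trunk (from the text above)
   BITS_PER_OCTET = 10    # 8 data bits plus start/stop framing (assumption)
   DEVICES = 255          # maximum devices on one trunk (from the text above)
   EXCHANGE_OCTETS = 200  # request plus response per device poll (assumption)

   octets_per_second = BAUD / BITS_PER_OCTET
   seconds_per_device = EXCHANGE_OCTETS / octets_per_second
   full_poll_cycle = seconds_per_device * DEVICES

   print("octets/s on trunk   : %.0f" % octets_per_second)    # ~7680
   print("per-device poll     : %.3f s" % seconds_per_device) # ~0.026 s
   print("full 255-device poll: %.1f s" % full_poll_cycle)    # ~6.6 s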
310 Wired FMS installation is a multifaceted procedure depending on the 311 extent of the system and the software interoperability requirement. 313 However, at the sensor/actuator and controller level, the procedure 314 is typically a two or three step process. 316 The installer arrives on-site during the construction of the building 317 prior to drywall and ceiling installation. The installer allocates 318 wall space and installs the controller and sensor networks. The Building 319 Controllers and Enterprise network are not normally installed until 320 months later. The electrician completes the task by running a 321 verification procedure that confirms proper wired or wireless 322 continuity between the devices. 324 Months later, the higher order controllers are installed, programmed 325 and commissioned together with the previously installed sensors, 326 actuators and controllers. In most cases the IP network is still not 327 in place. The Building Controllers are completely commissioned using 328 a crossover cable or a temporary IP switch together with static IP 329 addresses. 331 After occupancy, when the IP network is operational, the FMS often 332 connects to the enterprise network. Dynamic IPs replace static IPs. 333 VLANs often segregate the facility and IT systems. For multi-building, 334 multi-site facilities, VPNs, NATs and firewalls are also 335 introduced. 337 3.4. Device Density 339 Device density differs depending on the application and on the local 340 building code requirements. The following sections 341 detail typical installation densities for different applications. 343 3.4.1. HVAC Device Density 345 HVAC room applications typically have sensors/actuators and 346 controllers spaced about 50 ft apart. In most cases there is a 3:1 347 ratio of sensors/actuators to controllers. That is, for each room 348 there is an installed temperature sensor, flow sensor and damper 349 actuator for the associated room controller. 351 HVAC equipment room applications are quite different. An air handler 352 system may have a single controller with upwards of 25 sensors and 353 actuators within 50 ft of the air handler. A chiller or boiler is 354 also controlled with a single equipment controller instrumented with 355 25 sensors and actuators. Each of these devices would be 356 individually addressed since the devices are mandated or optional as 357 defined by the specified HVAC application. Air handlers typically 358 serve one or two floors of the building. Chillers and boilers may be 359 installed per floor, but many times service a wing, building or the 360 entire complex via a central plant. 362 These numbers are typical. In special cases, such as clean rooms, 363 operating rooms, pharmaceuticals and labs, the ratio of sensors to 364 controllers can increase by a factor of three. Tenant installations 365 such as malls would opt for packaged units where much of the sensing 366 and actuation is integrated into the unit. Here a single device 367 address would serve the entire unit. 369 3.4.2. Fire Device Density 371 Fire systems are much more uniformly installed, with smoke detectors 372 installed about every 50 feet. This is dictated by local building 373 codes. Fire pull boxes are installed uniformly about every 150 feet. 374 A fire controller will service a floor or wing. The fireman's fire 375 panel will service the entire building and typically is installed in 376 the atrium.
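The following Python sketch (illustrative only) converts the HVAC and fire spacing figures above into an approximate device count for a single hypothetical 20,000 sqft floor. The floor area and the one-device-per-grid-cell simplification are assumptions introduced here, not requirements.

   # Illustrative only: approximate device count for one hypothetical floor,
   # derived from the spacing figures in Sections 3.4.1 and 3.4.2.

   FLOOR_SQFT = 20000            # hypothetical single floor (assumption)

   # HVAC: roughly one controlled space per 50 ft x 50 ft cell, with a
   # 3:1 ratio of sensors/actuators to room controllers (Section 3.4.1).
   hvac_spaces = FLOOR_SQFT // (50 * 50)
   hvac_controllers = hvac_spaces
   hvac_points = hvac_spaces * 3

   # Fire: one smoke detector per ~50 ft and one pull box per ~150 ft of
   # coverage (Section 3.4.2), approximated here as square grid cells.
   smoke_detectors = FLOOR_SQFT // (50 * 50)
   pull_boxes = max(1, FLOOR_SQFT // (150 * 150))

   total = hvac_controllers + hvac_points + smoke_detectors + pull_boxes
   print("HVAC devices per floor :", hvac_controllers + hvac_points)  # 32
   print("Fire devices per floor :", smoke_detectors + pull_boxes)    # 9
   print("Total devices per floor:", total)                           # 41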
378 3.4.3. Lighting Device Density 380 Lighting is also very uniformly installed, with ballasts installed 381 approximately every 10 feet. A lighting panel typically serves 48 to 382 64 zones. Wired systems tether many lights together into a single 383 zone. Wireless systems configure each fixture independently to 384 increase flexibility and reduce installation costs. 386 3.4.4. Physical Security Device Density 388 Security systems are non-uniformly oriented with heavy density near 389 doors and windows and lighter density in the building interior space. 390 The recent influx of interior and perimeter camera systems is 391 increasing the security footprint. These cameras are atypical 392 endpoints requiring upwards of 1 megabit/second (Mbit/s) data rates 393 per camera, as contrasted with the few kbits/s needed by most other FMS 394 sensing equipment. Previously, camera systems had been deployed on 395 proprietary wired high-speed networks. More recent implementations 396 utilize wired or wireless IP cameras integrated into the enterprise 397 LAN. 399 4. Traffic Pattern 401 The independent nature of the automation subsystems within a building 402 weighs heavily on the network traffic patterns. Much of the real-time 403 sensor environmental data and actuator control stays within the local 404 LLN environment, while alarming and other event data will percolate 405 to higher layers. 407 Each sensor in the LLN unicasts P2P about 200 bytes of sensor data to 408 its associated controller each minute and expects an application 409 acknowledgement unicast returned from the destination. Each 410 controller unicasts messages at a nominal rate of 6kB/min to peer or 411 supervisory controllers. 30% of each node's packets are destined for 412 other nodes within the LLN. 70% of each node's packets are destined 413 for an aggregation device (MP2P) and routed off the LLN. These 414 messages also require a unicast acknowledgement from the destination. 415 The above values assume direct node-to-node communication; meshing 416 and error retransmissions are not considered. 418 Multicasts (P2MP) to all nodes in the LLN occur for node and object 419 discovery when the network is first commissioned. This data is 420 typically a one-time bind that is henceforth persisted. Lighting 421 systems will also readily use multicasting during normal operations 422 to turn banks of lights 'on' and 'off' simultaneously. 424 FMS systems may be either polled or event based. Polled data systems 425 will generate a uniform and constant packet load on the network. 426 Polled architectures, however, have proven not to be scalable. Today, most 427 vendors have developed event-based systems which pass data on events. 428 These systems are highly scalable and generate little data on the 429 network at quiescence. Unfortunately, these systems will generate a 430 heavy load on startup since all initial sensor data must migrate to 431 the controller level. They also will generate a temporary but heavy 432 load during firmware upgrades. This latter load can normally be 433 mitigated by performing these downloads during off-peak hours. 435 Devices will also need to reference peers periodically for sensor 436 data or to coordinate operation across systems. Normally, though, 437 data will migrate from the sensor level upwards through the local, 438 area and then supervisory levels. Traffic bottlenecks will typically form 439 at the funnel point from the area controllers to the supervisory 440 controllers.
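To make the quiescent figures above concrete, the Python sketch below aggregates the per-node rates for one hypothetical LLN. The node counts (50 sensors, 10 controllers) are assumptions chosen only for illustration; the per-node rates and the 70% off-LLN fraction come from the paragraphs above. Acknowledgements, meshing and retransmissions are excluded, as in the text.

   # Illustrative only: quiescent traffic estimate for one hypothetical LLN.
   # Node counts are assumptions; per-node rates come from Section 4.

   SENSORS = 50                 # hypothetical sensor count (assumption)
   CONTROLLERS = 10             # hypothetical controller count (assumption)
   SENSOR_BYTES_PER_MIN = 200   # unicast sensor report each minute
   CTRL_BYTES_PER_MIN = 6000    # controller traffic, 6kB/min
   OFF_LLN_FRACTION = 0.70      # share routed to the aggregation device

   offered_per_min = (SENSORS * SENSOR_BYTES_PER_MIN +
                      CONTROLLERS * CTRL_BYTES_PER_MIN)
   to_aggregator_per_min = offered_per_min * OFF_LLN_FRACTION

   print("offered load      : %d B/min (~%.0f B/s)" %
         (offered_per_min, offered_per_min / 60))               # 70000 B/min
   print("funnel to backbone: %.0f B/min (~%.0f B/s)" %
         (to_aggregator_per_min, to_aggregator_per_min / 60))   # 49000 B/min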
442 Initial system startup after a controlled outage or unexpected power 443 failure puts tremendous stress on the network and on the routing 444 algorithms. An FMS system is comprised of a myriad of control 445 algorithms at the room, area, zone, and enterprise layers. When 446 these control algorithms are at quiescence, the real-time data rate 447 is small and the network will not saturate. An overall network 448 traffic load of 6KBps is typical at quiescence. However, upon any 449 power loss, the control loops and real-time data quickly atrophy. A 450 ten minute power outage may require many hours to regain building 451 control. Traffic flow may increase ten-fold until the building 452 control stabilizes. 454 Power disruptions are unexpected and in most cases will immediately 455 impact lines-powered devices. Power disruptions however, are 456 transparent to battery powered devices. These devices will continue 457 to attempt to access the LLN during the outage. Battery powered 458 devices designed to buffer data that has not been delivered will 459 further stress the network operation when power returns. 461 Upon restart, lines-powered devices will naturally dither due to 462 primary equipment delays or variance in the device self-tests. 463 However, most lines-powered devices will be ready to access the LLN 464 network within 10 seconds of power up. Empirical testing indicates 465 that routes acquired during startup will tend to be very oblique 466 since the available neighbor lists are incomplete. This demands an 467 adaptive routing protocol to allow for route optimization as the 468 network stabilizes. 470 5. Building Automation Routing Requirements 472 Following are the building automation routing requirements for 473 networks used to integrate building sensor, actuator and control 474 products. These requirements are written not presuming any 475 preordained network topology, physical media (wired) or radio 476 technology (wireless). 478 5.1. Device and Network Commissioning 480 Building control systems typically are installed and tested by 481 electricians having little computer knowledge and no network 482 knowledge whatsoever. These systems are often installed during the 483 building construction phase before the drywall and ceilings are in 484 place. For new construction projects, the building enterprise IP 485 network is not in place during installation of the building control 486 system. For retrofit applications, the installer will still operate 487 independently from the IP network so as not to affect network 488 operations during the installation phase. 490 In traditional wired systems correct operation of a light 491 switch/ballast pair was as simple as flipping on the light switch. 492 In wireless applications, the tradesperson has to assure the same 493 operation, yet be sure the operation of the light switch is 494 associated to the proper ballast. 496 System level commissioning will later be deployed using a more 497 computer savvy person with access to a commissioning device (e.g. a 498 laptop computer). The completely installed and commissioned 499 enterprise IP network may or may not be in place at this time. 500 Following are the installation routing requirements. 502 5.1.1. Zero-Configuration Installation 504 It MUST be possible to fully commission network devices without 505 requiring any additional commissioning device (e.g. laptop). 507 5.1.2. 
Local Testing 509 The local sensors and requisite actuators and controllers must be 510 testable within the locale (e.g. room) to assure communication 511 connectivity and local operation without requiring other systemic 512 devices. 514 LLN nodes SHOULD be testable for end-to-end link connectivity and 515 application conformance without requiring other network 516 infrastructure. 518 5.1.3. Device Replacement 520 Replacement devices need to be plug-and-play with no additional setup 521 compared to what is normally required for a new device. Devices 522 referencing data in the replaced device MUST be able to reference 523 data in its replacement without requiring reconfiguration. Thus, 524 such a reference cannot be a hardware identifier, such as the MAC 525 address, nor a hard-coded route. If such a reference is an IP 526 address, the replacement device MUST be assigned the IP address 527 previously bound to the replaced device. Alternatively, if the logical 528 equivalent of a hostname is used for the reference, it must be 529 translated to the replacement device's IP address. 531 5.2. Scalability 533 Building control systems are designed for facilities from 50000 sq. 534 ft. to 1M+ sq. ft. The networks that support these systems must 535 cost-effectively scale accordingly. In larger facilities 536 installation may occur simultaneously on various wings or floors, yet 537 the end system must seamlessly merge. Following are the scalability 538 requirements. 540 5.2.1. Network Domain 542 The routing protocol MUST be able to support networks with at least 543 2000 nodes where 1000 nodes would act as routers and the other 1000 544 nodes would be hosts. Subnetworks (e.g. rooms, primary equipment) 545 within the network must support upwards of 255 sensors and/or 546 actuators. 548 5.2.2. Peer-to-Peer Communication 550 The data domain for commercial FMS systems may sprawl across a vast 551 portion of the physical domain. For example, a chiller may reside in 552 the facility's basement due to its size, yet the associated cooling 553 towers will reside on the roof. The cold-water supply and return 554 pipes serpentine through all the intervening floors. The feedback 555 control loops for these systems require data from across the 556 facility. 558 A network device MUST be able to communicate in a point-to-point 559 manner with any other device on the network. Thus, the routing 560 protocol MUST provide routes between arbitrary hosts within the 561 appropriate administrative domain. 563 5.3. Mobility 565 Most devices are affixed to walls or installed on ceilings within 566 buildings. Hence the mobility requirements for commercial buildings 567 are few. However, in wireless environments location tracking of 568 occupants and assets is gaining favor. Asset tracking applications, 569 such as tracking capital equipment (e.g. wheel chairs) in medical 570 facilities, require monitoring movement with a granularity of one minute. 571 This soft real-time performance requirement is reflected in the 572 performance requirements below. 574 5.3.1. Mobile Device Requirements 576 To minimize network dynamics, mobile devices should not be allowed to 577 act as forwarding devices (routers) for other devices in the LLN. 578 Network configuration should allow devices to be configured as 579 routers or hosts. 581 5.3.1.1. Device Mobility within the LLN 583 An LLN typically spans a single floor in a commercial building. 584 Mobile devices may move within this LLN.
For example, a wheel chair 585 may be moved from one room on the floor to another room on the same 586 floor. 588 A mobile LLN device that moves within the confines of the same LLN 589 SHOULD reestablish end-to-end communication to a fixed device also in 590 the LLN within 5 seconds after it ceases movement. The LLN network 591 convergence time should be less than 10 seconds once the mobile 592 device stops moving. 594 5.3.1.2. Device Mobility across LLNs 596 A mobile device may move across LLNs, such as a wheel chair being 597 moved to a different floor. 599 A mobile device that moves outside its original LLN SHOULD 600 reestablish end-to-end communication to a fixed device also in the 601 new LLN within 10 seconds after the mobile device ceases movement. 602 The network convergence time should be less than 20 seconds once the 603 mobile device stops moving. 605 5.4. Resource Constrained Devices 607 Sensing and actuator device processing power and memory may be 4 608 orders of magnitude less (i.e. 10,000x) than many more traditional 609 client devices on an IP network. The routing mechanisms must 610 therefore be tailored to fit these resource constrained devices. 612 5.4.1. Limited memory footprint on host devices 614 The software size requirement for non-routing devices (e.g. sleeping 615 sensors and actuators) SHOULD be implementable in 8-bit devices with 616 no more than 128KB of memory. 618 5.4.2. Limited Processing Power for routers 620 The software size requirements for routing devices (e.g. room 621 controllers) SHOULD be implementable in 8-bit devices with no more 622 than 256KB of flash memory. 624 5.4.3. Sleeping Devices 626 Sensing devices will, in some cases, utilize battery power or energy 627 harvesting techniques for power and will operate mostly in a sleep 628 mode to maintain power consumption within a modest budget. The 629 routing protocol MUST take into account device characteristics such 630 as power budget. If such devices provide routing, rather than merely 631 host connectivity, the energy costs associated with such routing 632 need to fit within the power budget. If the mechanisms for duty 633 cycling dictate very long response times or specific temporal 634 scheduling, routing will need to take such constraints into account. 636 Typically, battery life (2000 mAh) needs to extend for at least 5 637 years when the sensing device is transmitting its data (200 octets) 638 once per minute over a low power transceiver (25 mA). This requires 639 that a sleeping device MUST, upon awakening, route its data to its 640 destination and receive an ACK from the destination within 20 msec. 641 Additionally, an awakened sleeping device MUST be able to receive any 642 awaiting inbound data within 20 msec. (An illustrative energy-budget 643 calculation based on these figures is given at the end of Section 5.5.) 644 Proxies with unconstrained power budgets are often used to cache 645 the inbound data for a sleeping device until the device awakens. In 646 such cases, the routing protocol MUST discover the capability of a 647 node to act as a proxy during route calculation, then deliver the 648 packet to the assigned proxy for later delivery to the sleeping 649 device upon its next awakened cycle. 651 5.5. Addressing 653 Facility Management Systems require different communication schemes 654 to solicit or post network information. Multicasts or anycasts need 655 to be used to resolve unresolved references within a device when the 656 device first joins the network. 658 As with any network communication, multicasting should be minimized. 659 This is especially a problem for small embedded devices with limited 660 network bandwidth. Multicasts are typically used for network joins 661 and application binding in embedded systems. Routing MUST support 662 anycast, unicast, and multicast.
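The following Python sketch is an illustrative calculation (not a requirement) of the energy budget behind the sleeping-device figures in Section 5.4.3: it compares the average current a 2000 mAh battery can supply over 5 years with the average current drawn by a 25 mA transceiver that is awake for two 20 msec windows each minute. Treating sleep-mode current as negligible is a simplification added here.

   # Illustrative only: average-current budget for the sleeping device
   # described in Section 5.4.3.  Battery capacity, radio current and the
   # once-per-minute cycle come from that section; ignoring sleep-mode
   # current is an added simplification.

   BATTERY_MAH = 2000.0            # battery capacity
   LIFETIME_H = 5 * 365.25 * 24    # required lifetime: 5 years, in hours
   RADIO_MA = 25.0                 # transceiver current while active
   AWAKE_MS = 20.0 + 20.0          # transmit window plus receive window
   PERIOD_MS = 60 * 1000.0         # one wakeup per minute

   budget_ma = BATTERY_MAH / LIFETIME_H   # average current the battery allows
   duty = AWAKE_MS / PERIOD_MS            # fraction of time the radio is on
   drawn_ma = RADIO_MA * duty             # average current actually drawn

   print("allowed average current: %.1f uA" % (budget_ma * 1000))  # ~45.6 uA
   print("radio average current  : %.1f uA" % (drawn_ma * 1000))   # ~16.7 uA
   print("fits within the budget :", drawn_ma <= budget_ma)        # True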
664 5.6. Manageability 666 In addition to the initial installation of the system, it is equally 667 important for the ongoing maintenance of the system to be simple and 668 inexpensive. 670 5.6.1. Diagnostics 672 To improve diagnostics, the routing protocol SHOULD be able to be 673 placed in and out of 'verbose' mode. Verbose mode is a temporary 674 debugging mode that provides additional communication information 675 including at least the total number of routed packets sent and received, the 676 number of routing failures (no route available), neighbor table 677 members, and routing table entries. 679 5.6.2. Route Tracking 681 Route diagnostics SHOULD be supported, providing information such as 682 route quality, number of hops, and available alternate active routes with 683 associated costs. Route quality is the relative measure of 684 'goodness' of the selected source to destination path as compared to 685 alternate paths. This composite value may be measured as a function 686 of hop count, signal strength, available power, existing active 687 routes or any other criteria deemed by ROLL as the route cost 688 differentiator. 690 5.7. Route Selection 692 Route selection determines the reliability and quality of the 693 communication paths among the devices by optimizing routes over time 694 and resolving any nuances developed at system startup when nodes are 695 asynchronously adding themselves to the network. 697 5.7.1. Route Cost 699 The routing protocol MUST support a metric of route quality and 700 optimize path selection according to such metrics within constraints 701 established for links along the routes. These metrics SHOULD reflect 702 characteristics such as signal strength, available bandwidth, hop count, 703 energy availability and communication error rates. 705 5.7.2. Route Adaptation 707 Communication routes MUST adapt over time toward optimality with respect 708 to the chosen metric(s) (e.g. signal quality). 710 5.7.3. Route Redundancy 712 The routing layer SHOULD be configurable to allow secondary and 713 tertiary paths to be established and used upon failure of the primary 714 route. 716 5.7.4. Route Discovery Time 718 Mission critical commercial applications (e.g. Fire, Security) 719 require reliable communication and guaranteed end-to-end delivery of 720 all messages in a timely fashion. Application layer time-outs must 721 be selected judiciously to cover anomalous conditions such as lost 722 packets and/or route discoveries, yet not be set so large as to over-damp 723 the network response. If route discovery occurs during packet 724 transmission time (reactive routing), it SHOULD NOT add more than 725 120ms of latency to the packet delivery time. 727 5.7.5. Route Preference 729 The routing protocol SHOULD allow for the support of manually 730 configured static preferred routes. 732 5.7.6. Real-time Performance Measures 734 A node transmitting a 'request with expected reply' to another node 735 must send the message to the destination and receive the response in 736 not more than 120 msec. This response time should be achievable with 737 5 or fewer hops in each direction. This requirement assumes network 738 quiescence and a negligible turnaround time at the destination node.
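As an illustration of what the figures above imply for each forwarding node, the short Python sketch below derives the per-hop latency budget, under the section's own assumptions of network quiescence and negligible turnaround at the destination; the even split of the budget across hops is a simplification added here.

   # Illustrative only: per-hop budget implied by Section 5.7.6, assuming
   # the request and the reply each traverse the full 5-hop path.

   ROUND_TRIP_MS = 120.0      # request sent and reply received
   MAX_HOPS_EACH_WAY = 5      # worst case considered above

   hops_total = 2 * MAX_HOPS_EACH_WAY
   per_hop_ms = ROUND_TRIP_MS / hops_total

   print("total hops traversed  :", hops_total)            # 10
   print("per-hop latency budget: %.0f ms" % per_hop_ms)   # 12 ms
   # Queuing, MAC access and transmission at each hop must therefore fit,
   # on average, within about 12 ms at quiescence.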
740 5.7.7. Prioritized Routing 742 Network and application packet routing prioritization MUST be 743 supported to assure that mission critical applications (e.g. Fire 744 Detection) cannot be deferred while less critical applications access 745 the network. 747 5.8. Security Requirements 749 Security policies, especially wireless encryption and device 750 authentication, need to be considered, particularly with regard to the 751 impact on the processing capabilities and the additional latency incurred 752 on the sensors, actuators and controllers. 754 FMS systems are typically highly configurable in the field and hence 755 the security policy is most often dictated by the type of building in 756 which the FMS is being installed. Single-tenant, owner-occupied 757 office buildings installing lighting or HVAC control are candidates 758 for implementing low or even no security on the LLN. Conversely, 759 military or pharmaceutical facilities require strong security 760 policies. As noted in the installation procedures, security policies 761 must be flexible enough to allow for no security policy during the 762 installation phase (prior to building occupancy), yet allow the 763 security level to be easily raised network wide during the commissioning 764 phase of the system. 766 5.8.1. Authentication 768 Authentication SHOULD be optional on the LLN. Authentication SHOULD 769 be fully configurable on-site. Authentication policy and updates MUST 770 be routable over-the-air. Authentication SHOULD occur upon joining 771 or rejoining a network. However, once authenticated, devices SHOULD 772 NOT need to reauthenticate with any other devices in the LLN. 773 Packets may need authentication at the source and destination nodes; 774 however, packets routed through intermediate hops should not need 775 reauthentication at each hop. 777 5.8.2. Encryption 779 5.8.2.1. Encryption Types 781 Data encryption of packets MUST optionally be supported by use of 782 a network-wide key and/or an application key. The network key 783 would apply to all devices in the LLN. The application key would 784 apply to a subset of devices on the LLN. 786 The network key and application keys would be mutually exclusive. 787 The routing protocol MUST allow routing a packet encrypted with an 788 application key through forwarding devices without requiring 789 each node in the route to have the application key. 791 5.8.2.2. Packet Encryption 793 The encryption policy MUST support encryption of the payload only or 794 of the entire packet. Payload-only encryption would eliminate the 795 decryption/re-encryption overhead at every hop, providing better real- 796 time performance. 798 5.8.3. Disparate Security Policies 800 Due to the limited resources of an LLN, the security policy defined 801 within the LLN MUST be able to differ from that of the rest of the IP 802 network within the facility, yet packets MUST still be able to route 803 to or through the LLN from/to these networks. 805 5.8.4. Routing Security Policies To Sleeping Devices 807 The routing protocol MUST gracefully handle routing temporal security 808 updates (e.g. dynamic keys) to sleeping devices on their 'awake' 809 cycle to assure that sleeping devices can readily and efficiently 810 access the network. 812 6. IANA Considerations 814 This document includes no request to IANA. 816 7. Acknowledgments 818 In addition to the authors, J. P. Vasseur, David Culler, Ted Humpal 819 and Zach Shelby are gratefully acknowledged for their contributions 820 to this document. 822 8. References 824 8.1.
Normative References 826 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 827 Requirement Levels", BCP 14, RFC 2119, March 1997. 829 8.2. Informative References 831 [I-D.ietf-roll-terminology] Vasseur, J., "Terminology in Low power And 832 Lossy Networks", draft-ietf-roll-terminology-00 (work in progress), 833 October 2008. 835 9. Appendix A: Additional Building Requirements 837 Appendix A contains additional building requirements that were deemed 838 out of scope for ROLL, yet provided ancillary substance for the 839 reader. 841 9.1. Additional Commercial Product Requirements 843 9.1.1. Wired and Wireless Implementations 845 Vendors will likely not develop a separate product line for both 846 wired and wireless networks. Hence, the solutions set forth must 847 support both wired and wireless implementations. 849 9.1.2. World-wide Applicability 851 Wireless devices must be supportable in the 2.4 GHz ISM band. 852 Wireless devices should be supportable in the 900 MHz and 868 MHz ISM bands 853 as well. 855 9.2. Additional Installation and Commissioning Requirements 857 9.2.1. Unavailability of an IP network 859 Product commissioning must be performed by an application engineer 860 prior to the installation of the IP network (e.g. switches, routers, 861 DHCP, DNS). 863 9.3. Additional Network Requirements 865 9.3.1. TCP/UDP 867 Connection-based and connectionless services must be supported. 869 9.3.2. Interference Mitigation 871 The network must automatically detect interference and seamlessly 872 migrate the network hosts to a new channel to improve communication. Channel 873 changes and the nodes' response to the channel change must occur within 60 874 seconds. 876 9.3.3. Packet Reliability 878 In building automation, the network is required to meet the 879 following minimum criteria: 881 < 1% MAC layer errors on all messages; 883 after no more than three retries, < 0.1% Network layer errors on all messages; 885 after no more than three additional retries, 887 < 0.01% Application layer errors on all messages. 889 Therefore application layer messages will fail no more than once 890 every 100,000 messages. 892 9.3.4. Merging Commissioned Islands 894 Subsystems are commissioned by various vendors at various times 895 during building construction. These subnetworks must seamlessly 896 merge into networks and networks must seamlessly merge into 897 internetworks since the end user wants a holistic view of the system. 899 9.3.5. Adjustable Routing Table Sizes 901 The routing protocol must allow constrained nodes to hold an 902 abbreviated set of routes. That is, the protocol should not mandate 903 that the node routing tables be exhaustive. 905 9.3.6. Automatic Gain Control 907 For wireless implementations, the device radios should incorporate 908 automatic transmit power regulation to maximize packet transfer and 909 minimize network interference regardless of network size or density. 911 9.3.7. Device and Network Integrity 913 Commercial Building devices must all be periodically scanned to 914 assure that the device is viable and can communicate data and alarm 915 information as needed. Routers should maintain previous packet flow 916 information temporarily to minimize overall network overhead. 918 9.4. Additional Performance Requirements 920 9.4.1. Data Rate Performance 922 An effective data rate of 20 kbits/s is the lowest acceptable 923 operational data rate on the network. 925 9.4.2.
Firmware Upgrades 927 To support high speed code downloads, routing should support 928 transports that provide parallel downloads to targeted devices yet 929 guarantee packet delivery. In cases where the spatial position of 930 the devices requires multiple hops, the algorithm should recurse 931 through the network until all targeted devices have been serviced. 932 Devices receiving a download may cease normal operation, but upon 933 completion of the download must automatically resume normal 934 operation. 936 9.4.3. Route Persistence 938 To eliminate high network traffic in power-fail or brown-out 939 conditions previously established routes should be remembered and 940 invoked prior to establishing new routes for those devices reentering 941 the network. 943 Authors' Addresses 945 Jerry Martocci 946 Johnson Control 947 507 E. Michigan Street 948 Milwaukee, Wisconsin, 53202 949 USA 950 Phone: 414.524.4010 951 Email: jerald.p.martocci@jci.com 953 Nicolas Riou 954 Schneider Electric 955 Technopole 38TEC T3 956 37 quai Paul Louis Merlin 957 38050 Grenoble Cedex 9 958 France 959 Phone: +33 4 76 57 66 15 960 Email: nicolas.riou@fr.schneider-electric.com 962 Pieter De Mil 963 Ghent University - IBCN 964 G. Crommenlaan 8 bus 201 965 Ghent 9050 966 Belgium 967 Phone: +32-9331-4981 968 Fax: +32--9331--4899 969 Email: pieter.demil@intec.ugent.be 971 Wouter Vermeylen 972 Arts Centre Vooruit 973 ??? 974 Ghent 9000 975 Belgium 977 Phone: ??? 978 Fax: ??? 979 Email: wouter@vooruit.be