Networking Working Group                                J. Martocci, Ed.
Internet-Draft                                     Johnson Controls Inc.
Intended status: Informational                             Pieter De Mil
Expires: August 3, 2009                            Ghent University IBCN
                                                            W. Vermeylen
                                                     Arts Centre Vooruit
                                                            Nicolas Riou
                                                      Schneider Electric
                                                        February 3, 2009

     Building Automation Routing Requirements in Low Power and Lossy
                                 Networks
              draft-ietf-roll-building-routing-reqs-04

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on August 3, 2009.

Copyright Notice

Copyright (c) 2009 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.

Abstract

The Routing Over Low power and Lossy network (ROLL) Working Group has been chartered to work on routing solutions for Low Power and Lossy networks (LLN) in various markets: Industrial, Commercial (Building), Home and Urban.  Pursuant to this effort, this document defines the routing requirements for building automation.

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

1. Terminology....................................................4
2. Introduction...................................................4
3. Facility Management System (FMS) Topology......................5
   3.1. Introduction..............................................5
   3.2. Sensors/Actuators.........................................7
   3.3. Area Controllers..........................................7
   3.4. Zone Controllers..........................................7
4. Installation Methods...........................................7
   4.1. Wired Communication Media.................................7
   4.2. Device Density............................................8
      4.2.1. HVAC Device Density..................................8
      4.2.2. Fire Device Density..................................9
      4.2.3. Lighting Device Density..............................9
      4.2.4. Physical Security Device Density.....................9
   4.3. Installation Procedure....................................9
5. Building Automation Routing Requirements......................10
   5.1. Installation.............................................10
      5.1.1. Zero-Configuration Installation.....................11
      5.1.2. Sleeping Devices....................................11
      5.1.3. Local Testing.......................................11
      5.1.4. Device Replacement..................................12
   5.2. Scalability..............................................12
      5.2.1. Network Domain......................................12
      5.2.2. Peer-to-Peer Communication..........................12
   5.3. Mobility.................................................13
      5.3.1. Mobile Device Requirements..........................13
   5.4. Resource Constrained Devices.............................14
      5.4.1. Limited Processing Power for Non-routing Devices....14
      5.4.2. Limited Processing Power for Routing Devices........14
   5.5. Addressing...............................................14
      5.5.1. Unicast/Multicast/Anycast...........................14
   5.6. Manageability............................................14
      5.6.1. Diagnostics.........................................15
      5.6.2. Route Tracking......................................15
   5.7. Route Selection..........................................15
      5.7.1. Path Cost...........................................15
      5.7.2. Path Adaptation.....................................15
      5.7.3. Route Redundancy....................................16
      5.7.4. Route Discovery Time................................16
      5.7.5. Route Preference....................................16
6. Traffic Pattern...............................................16
7. Security Considerations.......................................17
   7.1. Security Requirements....................................17
      7.1.1. Authentication......................................17
      7.1.2. Encryption..........................................18
      7.1.3. Disparate Security Policies.........................18
      7.1.4. Routing Security Policies To Sleeping Devices.......18
8. IANA Considerations...........................................19
9. Acknowledgments...............................................19
10. References...................................................19
   10.1. Normative References....................................19
   10.2. Informative References..................................19
11. Appendix A: Additional Building Requirements.................20
   11.1. Additional Commercial Product Requirements..............20
      11.1.1. Cost...............................................20
      11.1.2. Wired and Wireless Implementations.................20
      11.1.3. World-wide Applicability...........................20
      11.1.4. Support of Application Layer Protocols.............20
      11.1.5. Use of Constrained Devices.........................21
   11.2. Additional Installation and Commissioning Requirements..21
      11.2.1. Device Setup Time..................................21
      11.2.2. Unavailability of an IP network....................21
   11.3. Additional Network Requirements.........................21
      11.3.1. TCP/UDP............................................21
      11.3.2. Interference Mitigation............................21
      11.3.3. Real-time Performance Measures.....................21
      11.3.4. Packet Reliability.................................22
      11.3.5. Merging Commissioned Islands.......................22
      11.3.6. Adjustable System Table Sizes......................22
      11.3.7. Communication Distance.............................22
      11.3.8. Automatic Gain Control.............................22
      11.3.9. IPv4 Compatibility.................................23
      11.3.10. Proxying for Sleeping Devices.....................23
      11.3.11. Device and Network Integrity......................23
   11.4. Additional Performance Requirements.....................23
      11.4.1. Data Rate Performance..............................23
      11.4.2. Firmware Upgrades..................................23
      11.4.3. Prioritized Routing................................23
      11.4.4. Path Persistence...................................24
   11.5. Additional Network Security Requirements................24
      11.5.1. Encryption Levels..................................24
      11.5.2. Security Policy Flexibility........................24
12. Appendix B: FMS Use-Cases....................................24
   12.1. Locking and Unlocking the Building......................25
   12.2. Building Energy Conservation............................25
   12.3. Inventory and Remote Diagnosis of Safety Equipment......25
   12.4. Life Cycle of Field Devices.............................26
   12.5. Surveillance............................................26
   12.6. Emergency...............................................26
   12.7. Public Address..........................................27

1. Terminology

For a description of the terminology used in this specification, please see [I-D.ietf-roll-terminology].

2. Introduction

Commercial buildings have been fitted with pneumatic and subsequently electronic communication pathways connecting sensors to their controllers for over one hundred years.  Recent economic and technical advances in wireless communication allow facilities to increasingly utilize a wireless solution in lieu of a wired solution, thereby reducing installation costs while maintaining highly reliable communication.

The cost benefits and ease of installation of wireless sensors allow customers to further instrument their facilities with additional sensors, providing tighter control while yielding increased energy savings.

Wireless solutions will be adapted from their existing wired counterparts in many of the building applications including, but not limited to, Heating, Ventilation, and Air Conditioning (HVAC), Lighting, Physical Security, Fire, and Elevator systems.  These devices will be developed to reduce installation costs while increasing installation and retrofit flexibility, as well as increasing the sensing fidelity to improve efficiency and building service quality.

Sensing devices may be battery-less, battery powered, or mains powered.  Actuators and area controllers will be mains powered.  Due to building code and/or device density (e.g. equipment room), it is envisioned that a mix of wired and wireless sensors and actuators will be deployed within a building.

Facility Management Systems (FMS) are deployed in a large set of vertical markets including universities; hospitals; government facilities; Kindergarten through High School (K-12); pharmaceutical manufacturing facilities; and single-tenant or multi-tenant office buildings.  These buildings range in size from 100K sq. ft. structures (5-story office buildings), to 1M sq. ft. skyscrapers (100-story buildings), to complex government facilities such as the Pentagon.  The described topology is meant to be the model to be used in all these types of environments, but clearly must be tailored to the building class, building tenant and vertical market being served.

The following sections describe the sensor, actuator, area controller and zone controller layers of the topology.  (NOTE: The Building Controller and Enterprise layers of the FMS are excluded from this discussion since they typically deal in communication rates requiring LAN/WLAN communication technologies).

Section 3 describes FMS architectures commonly installed in commercial buildings.  Section 4 describes installation methods deployed for new and remodeled construction.  Appendix A documents important commercial building requirements that are out of scope for routing yet will be essential to the final acceptance of the protocols used within the building.  Appendix B describes various FMS use-cases and the interaction with humans for energy conservation and life-safety applications.

Sections 3, 4, Appendix A and Appendix B are mainly included for educational purposes.  The aim of this document is to provide the set of IPv6 routing requirements for LLNs in buildings as described in Section 5.

3. Facility Management System (FMS) Topology

3.1. Introduction

To understand the network systems requirements of a facility management system in a commercial building, this document uses a framework to describe the basic functions and composition of the system.  An FMS is a hierarchical system of sensors, actuators, controllers and user interface devices based on spatial extent.  Additionally, an FMS may also be divided functionally across similar, but distinct, building subsystems such as HVAC, Fire, Security, Lighting, Shutters and Elevator control systems as denoted in Figure 1.

Much of the makeup of an FMS is optional and installed at the behest of the customer.
Sensors and actuators have no standalone functionality.  All other devices support partial or complete standalone functionality.  These devices can optionally be tethered to form a more cohesive system.  The customer requirements dictate the level of integration within the facility.  This architecture provides excellent fault tolerance since each node is designed to operate in an independent mode if the higher layers are unavailable.

              +------+  +-----+  +------+  +------+  +------+  +------+
Bldg App'ns   |      |  |     |  |      |  |      |  |      |  |      |
              |      |  |     |  |      |  |      |  |      |  |      |
Building Cntl |      |  |     |  |  S   |  |  L   |  |  S   |  |  E   |
              |      |  |     |  |  E   |  |  I   |  |  H   |  |  L   |
Area Control  |  H   |  |  F  |  |  C   |  |  G   |  |  U   |  |  E   |
              |  V   |  |  I  |  |  U   |  |  H   |  |  T   |  |  V   |
Zone Control  |  A   |  |  R  |  |  R   |  |  T   |  |  T   |  |  A   |
              |  C   |  |  E  |  |  I   |  |  I   |  |  E   |  |  T   |
Actuators     |      |  |     |  |  T   |  |  N   |  |  R   |  |  O   |
              |      |  |     |  |  Y   |  |  G   |  |  S   |  |  R   |
Sensors       |      |  |     |  |      |  |      |  |      |  |      |
              +------+  +-----+  +------+  +------+  +------+  +------+

                 Figure 1: Building Systems and Devices

3.2. Sensors/Actuators

As Figure 1 indicates, an FMS may be composed of many functional stacks or silos that are interoperably woven together via Building Applications.  Each silo has an array of sensors that monitor the environment and actuators that affect the environment as determined by the upper layers of the FMS topology.  The sensors typically are the fringe of the network structure providing environmental data into the system.  The actuators are the sensor's counterparts modifying the characteristics of the system based on the input sensor data and the applications deployed.

3.3. Area Controllers

An area describes a small physical locale within a building, typically a room.  HVAC (temperature and humidity) and Lighting (room lighting, shades, solar loads) vendors oftentimes deploy area controllers.  Area controllers are fed by sensor inputs that monitor the environmental conditions within the room.  Common sensors found in many rooms that feed the area controllers include temperature, occupancy, lighting load, solar load and relative humidity.  Sensors found in specialized rooms (such as chemistry labs) might include air flow, pressure, CO2 and CO particle sensors.  Room actuation includes temperature setpoint, lights and blinds/curtains.

3.4. Zone Controllers

Zone Control supports a similar set of characteristics as Area Control, albeit over an extended space.  A zone is normally a logical grouping or functional division of a commercial building.  A zone may also coincidentally map to a physical locale such as a floor.

Zone Control may have direct sensor inputs (smoke detectors for fire), controller inputs (room controllers for air-handlers in HVAC) or both (door controllers and tamper sensors for security).  Like area/room controllers, zone controllers are standalone devices that operate independently or may be attached to the larger network for more synergistic control.

4. Installation Methods

4.1. Wired Communication Media

Commercial controllers are traditionally deployed in a facility using twisted pair serial media following the EIA-485 electrical standard operating nominally at 38400 to 76800 baud.  This allows runs of up to 5000 ft without a repeater.  With a maximum of three repeaters, a single communication trunk can serpentine 15000 ft.
EIA-485 is a multi-drop media allowing up to 255 devices to be connected to a single trunk.

Most sensors and virtually all actuators currently used in commercial buildings are "dumb", non-communicating hardwired devices.  However, vendors are beginning to deploy sensor buses, which are used for smart sensors and point multiplexing.  The Fire industry deploys addressable fire devices, which usually use some form of proprietary communication wiring driven by fire codes.

4.2. Device Density

Device density differs depending on the application and on the local building code requirements.  The following sections detail typical installation densities for different applications.

4.2.1. HVAC Device Density

HVAC room applications typically have sensors/actuators and controllers spaced about 50 ft apart.  In most cases there is a 3:1 ratio of sensors/actuators to controllers.  That is, for each room there is an installed temperature sensor, flow sensor and damper actuator for the associated room controller.

HVAC equipment room applications are quite different.  An air handler system may have a single controller with up to 25 sensors and actuators within 50 ft of the air handler.  A chiller or boiler is also controlled with a single equipment controller instrumented with 25 sensors and actuators.  Each of these devices would be individually addressed since the devices are mandated or optional as defined by the specified HVAC application.  Air handlers typically serve one or two floors of the building.  Chillers and boilers may be installed per floor, but many times service a wing, building or the entire complex via a central plant.

These numbers are typical.  In special cases, such as clean rooms, operating rooms, pharmaceuticals and labs, the ratio of sensors to controllers can increase by a factor of three.  Tenant installations such as malls would opt for packaged units where much of the sensing and actuation is integrated into the unit.  Here a single device address would serve the entire unit.

4.2.2. Fire Device Density

Fire systems are much more uniformly installed with smoke detectors installed about every 50 feet.  This is dictated by local building codes.  Fire pull boxes are installed uniformly about every 150 feet.  A fire controller will service a floor or wing.  The fireman's fire panel will service the entire building and typically is installed in the atrium.

4.2.3. Lighting Device Density

Lighting is also very uniformly installed with ballasts installed approximately every 10 feet.  A lighting panel typically serves 48 to 64 zones.  Wired systems tether many lights together into a single zone.  Wireless systems configure each fixture independently to increase flexibility and reduce installation costs.

4.2.4. Physical Security Device Density

Security systems are non-uniformly oriented with heavy density near doors and windows and lighter density in the building interior space.  The recent influx of interior and perimeter camera systems is increasing the security footprint.  These cameras are atypical endpoints requiring up to 1 megabit/second (Mbit/s) data rates per camera, in contrast to the few kbit/s needed by most other FMS sensing equipment.  Previously, camera systems had been deployed on proprietary wired high speed networks.
More recent implementations utilize wired or wireless IP cameras integrated into the enterprise LAN.

4.3. Installation Procedure

Wired FMS installation is a multifaceted procedure depending on the extent of the system and the software interoperability requirement.  However, at the sensor/actuator and controller level, the procedure is typically a two or three step process.

Most FMS equipment will utilize 24 VAC power sources that can be installed by a low-voltage electrician.  He/she arrives on-site during the construction of the building prior to drywall and ceiling installation.  This allows him/her to allocate wall space, easily land the equipment and run the wired controller and sensor networks.  The Building Controllers and Enterprise network are not normally installed until months later.  The electrician completes his task by running a wire verification procedure that shows proper continuity between the devices and proper local operation of the devices.

Later in the installation cycle, the higher order controllers are installed, programmed and commissioned together with the previously installed sensors, actuators and controllers.  In most cases the IP network is still not operable.  The Building Controllers are completely commissioned using a crossover cable or a temporary IP switch together with static IP addresses.

Once the IP network is operational, the FMS may optionally be added to the enterprise network.  The wireless installation process must follow the same work flow.  The electrician installs the products as before and executes local functional tests between the wireless devices to assure operation before leaving the job.  The electrician does not carry a laptop, so the commissioning must be built into the device operation.

5. Building Automation Routing Requirements

Following are the building automation routing requirements for a network used to integrate building sensor, actuator and control products.  These requirements have been limited to routing requirements only.  These requirements are written without presuming any preordained network topology, physical media (wired) or radio technology (wireless).  See Appendix A for additional requirements that have been deemed outside the scope of this document, yet will pertain to the successful deployment of building automation systems.

5.1. Installation

Building control systems typically are installed and tested by electricians having little computer knowledge and no network knowledge whatsoever.  These systems are often installed during the building construction phase before the drywall and ceilings are in place.  For new construction projects, the building enterprise IP network is not in place during installation of the building control system.  For retrofit applications, the installer will still operate independently from the IP network so as not to affect network operations during the installation phase.

Local (ad hoc) testing of sensors and room controllers must be completed before the tradesperson can complete his/her work.  This testing allows the tradesperson to verify correct client (e.g. light switch) and server (e.g. light ballast) operation before leaving the jobsite.  In traditional wired systems, correct operation of a light switch/ballast pair was as simple as flipping on the light switch.
In wireless applications, the tradesperson has to assure the same operation, yet be sure the operation of the light switch is associated with the proper ballast.

System level commissioning will later be performed by a more computer-savvy person with access to a commissioning device (e.g. a laptop computer).  The completely installed and commissioned enterprise IP network may or may not be in place at this time.  Following are the installation routing requirements.

5.1.1. Zero-Configuration Installation

It MUST be possible to fully commission network devices without requiring any additional commissioning device (e.g. laptop).

5.1.2. Sleeping Devices

Sensing devices will, in some cases, utilize battery power or energy harvesting techniques for power and will operate mostly in a sleep mode to maintain power consumption within a modest budget.  The routing protocol MUST take into account device characteristics such as power budget.  If such devices provide routing, rather than merely host connectivity, the energy costs associated with such routing need to fit within the power budget.  If the mechanisms for duty cycling dictate very long response times or specific temporal scheduling, routing will need to take such constraints into account.

Typically, batteries need to be operational for at least 5 years when the sensing device is transmitting its data (e.g. 64 octets) once per minute.  This requires that sleeping devices MUST have minimal link-on time when they awake and transmit onto the network.  Moreover, maintaining the ability to receive inbound data MUST be accomplished with minimal link-on time.

Proxies with unconstrained power budgets are oftentimes used to cache the inbound data for a sleeping device until the device awakens.  In such cases, the routing protocol MUST discover the capability of a node to act as a proxy during path calculation and then deliver the packet to the assigned proxy for later delivery to the sleeping device upon its next awakened cycle.
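
The power-budget arithmetic behind the five-year figure can be made concrete with a back-of-the-envelope calculation.  The following Python sketch is purely illustrative and is not part of the requirements; the battery capacity and current figures are assumptions chosen only to show the orders of magnitude involved.

    # Illustrative sleep/transmit budget for a battery powered sensor.
    # All figures are assumptions; real radios and batteries will differ.
    battery_mah   = 2400.0               # assumed usable battery capacity (mAh)
    years         = 5.0
    hours         = years * 365 * 24
    avg_budget_ma = battery_mah / hours  # average current the budget allows (~0.055 mA)
    sleep_ma      = 0.003                # assumed 3 uA sleep current
    active_ma     = 20.0                 # assumed radio-on current
    tx_per_hour   = 60                   # one 64-octet report per minute
    # Solve avg = sleep + active * (tx_per_hour * t_on_s / 3600) for t_on_s:
    t_on_s = (avg_budget_ma - sleep_ma) * 3600 / (active_ma * tx_per_hour)
    print("average current budget: %.4f mA" % avg_budget_ma)
    print("maximum radio-on time per message: %.0f ms" % (t_on_s * 1000))

With these assumed numbers, the entire radio-on window per message, including any routing overhead, must fit in roughly 150 ms, which is why the requirement above stresses minimal link-on time for both transmission and reception.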

5.1.3. Local Testing

The local sensors and requisite actuators and controllers must be testable within the locale (e.g. room) to assure communication connectivity and local operation without requiring other systemic devices.  Routing should allow for temporary ad hoc paths to be established that are updated as the network physically and functionally expands.

5.1.4. Device Replacement

Replacement devices need to be plug-and-play with no additional setup compared to what is normally required for a new device.  Devices referencing data in the replaced device MUST be able to reference data in its replacement without being reconfigured to refer to the new device.  Thus, such a reference cannot be a hardware identifier, such as the MAC address, nor a hard-coded route.  If such a reference is an IP address, the replacement device MUST be assigned the IP address previously bound to the replaced device.  Alternatively, if the logical equivalent of a hostname is used for the reference, it must be translated to the IP address of the replacement device.

5.2. Scalability

Building control systems are designed for facilities from 50000 sq. ft. to 1M+ sq. ft.  The networks that support these systems must cost-effectively scale accordingly.  In larger facilities, installation may occur simultaneously on various wings or floors, yet the end system must seamlessly merge.  Following are the scalability requirements.

5.2.1. Network Domain

The routing protocol MUST be able to support networks with at least 2000 nodes supporting at least 1000 routing devices and 1000 non-routing devices.  Subnetworks (e.g. rooms, primary equipment) within the network must support up to 255 sensors and/or actuators.

5.2.2. Peer-to-Peer Communication

The data domain for commercial FMS systems may sprawl across a vast portion of the physical domain.  For example, a chiller may reside in the facility's basement due to its size, yet the associated cooling towers will reside on the roof.  The cold-water supply and return pipes serpentine through all the intervening floors.  The feedback control loops for these systems require data from across the facility.

A network device MUST be able to communicate in a peer-to-peer manner with any other device on the network.  Thus, the routing protocol MUST provide routes between arbitrary hosts within the appropriate administrative domain.

5.3. Mobility

Most devices are affixed to walls or installed on ceilings within buildings.  Hence the mobility requirements for commercial buildings are few.  However, in wireless environments location tracking of occupants and assets is gaining favor.  Asset tracking applications require monitoring movement with a granularity of one minute.  This soft real-time performance requirement is reflected in the performance requirements below.

5.3.1. Mobile Device Requirements

To minimize network dynamics, mobile devices SHOULD NOT be allowed to act as forwarding devices (routers) for other devices in the LLN.

A mobile device that moves within an LLN SHOULD reestablish end-to-end communication to a fixed device also in the LLN within 2 seconds.  The network convergence time should be less than 5 seconds once the mobile device stops moving.

A mobile device that moves outside of an LLN SHOULD reestablish end-to-end communication to a fixed device in the new LLN within 5 seconds.  The network convergence time should be less than 5 seconds once the mobile device stops moving.

A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to a fixed device in the old LLN within 10 seconds.  The network convergence time should be less than 10 seconds once the mobile device stops.

A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to another mobile device in the new LLN within 20 seconds.  The network convergence time should be less than 30 seconds once the mobile devices stop moving.

A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to a mobile device in the old LLN within 30 seconds.  The network convergence time should be less than 30 seconds once the mobile devices stop moving.

5.4. Resource Constrained Devices

Sensing and actuator device processing power and memory may be 4 orders of magnitude less (i.e. 10,000x) than that of many more traditional client devices on an IP network.  The routing mechanisms must therefore be tailored to fit these resource constrained devices.

5.4.1. Limited Processing Power for Non-routing Devices

The software for non-routing devices (e.g. sleeping sensors and actuators) SHOULD be implementable on 8-bit devices with no more than 128KB of memory.

5.4.2. Limited Processing Power for Routing Devices

The software for routing devices (e.g. room controllers) SHOULD be implementable on 8-bit devices with no more than 256KB of flash memory.

5.5. Addressing

Facility Management Systems require different communication schemes to solicit or post network information.  Broadcasts or anycasts need to be used to resolve unresolved references within a device when the device first joins the network.

As with any network communication, broadcasting should be minimized.  This is especially a problem for small embedded devices with limited network bandwidth.  In many cases a global broadcast could be replaced with a multicast since the application knows the application domain.  Broadcasts and multicasts are typically used for network joins and application binding in embedded systems.

5.5.1. Unicast/Multicast/Anycast

Routing MUST support anycast, unicast, and multicast.

5.6. Manageability

In addition to the initial installation of the system (see Section 5.1), it is equally important for the ongoing maintenance of the system to be simple and inexpensive.

5.6.1. Diagnostics

To improve diagnostics, the network layer SHOULD be able to be placed in and out of 'verbose' mode.  Verbose mode is a temporary debugging mode that provides additional communication information including at least the total number of routed packets sent and received, the number of routing failures (no route available), neighbor table members, and routing table entries.

5.6.2. Route Tracking

Route diagnostics SHOULD be supported, providing information such as path quality, number of hops, and available alternate active paths with associated costs.  Path quality is the relative measure of 'goodness' of the selected source to destination path as compared to alternate paths.  This composite value may be measured as a function of hop count, signal strength, available power, existing active paths or any other criteria deemed by ROLL as the path cost differentiator.

5.7. Route Selection

Route selection determines reliability and quality of the communication paths among the devices.  Optimizing the routes over time resolves any nuances developed at system startup when nodes are asynchronously adding themselves to the network.  Path adaptation will reduce latency if the path costs consider hop count as a cost attribute.

5.7.1. Path Cost

The routing protocol MUST support a metric of route quality and optimize path selection according to such metrics within constraints established for links along the paths.  These metrics SHOULD reflect factors such as signal strength, available bandwidth, hop count, energy availability and communication error rates.
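
To make the notion of a composite path metric concrete, the sketch below shows one possible way such a cost could be combined from per-link measurements.  It is only an illustration of the idea; the weights, value ranges and field names are assumptions of this example and do not prescribe the metric that a ROLL routing protocol must use.

    # Illustrative composite path cost (assumed weights and scales).
    def link_cost(etx, rssi_dbm, hop=1, mains_powered=True):
        """Combine per-link measurements into a single non-negative cost."""
        error_term  = etx                                    # expected transmission count
        signal_term = max(0.0, (-60.0 - rssi_dbm) / 10.0)    # penalty below -60 dBm
        energy_term = 0.0 if mains_powered else 2.0          # discourage battery routers
        return hop + error_term + signal_term + energy_term

    def path_cost(links):
        """A path cost is the sum of its link costs; lower is better."""
        return sum(link_cost(**l) for l in links)

    # Example: a 3-hop path through one battery powered forwarder.
    path = [dict(etx=1.1, rssi_dbm=-55),
            dict(etx=1.4, rssi_dbm=-72, mains_powered=False),
            dict(etx=1.2, rssi_dbm=-63)]
    print(path_cost(path))

A real protocol would carry such per-link values in its routing messages and would re-evaluate the path cost as conditions change, which is what Sections 5.7.2 and 5.7.3 below ask for.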

5.7.2. Path Adaptation

Communication paths MUST adapt over time toward optimality with respect to the chosen metric(s) (e.g. signal quality).

5.7.3. Route Redundancy

The routing layer SHOULD be configurable to allow secondary and tertiary paths to be established and used upon failure of the primary path.

5.7.4. Route Discovery Time

Mission critical commercial applications (e.g. Fire, Security) require reliable communication and guaranteed end-to-end delivery of all messages in a timely fashion.  Application layer time-outs must be selected judiciously to cover anomalous conditions such as lost packets and/or path discoveries, yet not be set so large as to overdamp the network response.  If route discovery occurs during packet transmission time, it SHOULD NOT add more than 120 ms of latency to the packet delivery time.

5.7.5. Route Preference

Route cost algorithms SHOULD allow the installer to optionally select 'preferred' paths based on the known spatial layout of the communicating devices.

6. Traffic Pattern

The independent nature of the automation systems within a building weighs heavily on the network traffic patterns.  Much of the real-time sensor data stays within the local environment.  Alarming and other event data will percolate to higher layers.

Systemic data may be either polled or event based.  Polled data systems will generate a uniform packet load on the network.  This architecture has proven not to be scalable.  Most vendors have developed event-based systems, which pass data on events.  These systems are highly scalable and generate little data on the network at quiescence.  Unfortunately, the systems will generate a heavy load on startup since all the initial data must migrate to the controller level.  They also will generate a temporary but heavy load during firmware upgrades.  This latter load can normally be mitigated by performing these downloads during off-peak hours.

Devices will need to reference peers occasionally for sensor data or to coordinate across systems.  Normally, though, data will migrate from the sensor level upwards through the local, area then supervisory level.  Bottlenecks will typically form at the funnel point from the area controllers to the supervisory controllers.

Initial system startup after a controlled outage or unexpected power failure puts tremendous stress on the network and on the routing algorithms.  An FMS is composed of a myriad of control algorithms at the room, area, zone, and enterprise layers.  When these control algorithms are at quiescence, the real-time data changes are small and the network will not saturate.  However, upon any power loss, the control loops and real-time data quickly atrophy.  After a ten-minute outage, it may take many hours to regain control.

Upon restart, all line-powered devices power on instantaneously.  However, due to application startup and self tests, these devices will attempt to join the network randomly.  Empirical testing indicates that routing paths acquired during startup will tend to be very oblique since the available neighbor lists are incomplete.  This demands an adaptive routing protocol to allow for path optimization as the network stabilizes.

7. Security Considerations

Security policies, especially wireless encryption and device authentication, need to be considered, especially with regard to the impact on the processing capabilities and the additional latency incurred on the sensors, actuators and controllers.

FMS systems are typically highly configurable in the field and hence the security policy is most often dictated by the type of building in which the FMS is being installed.
Single-tenant, owner-occupied office buildings installing lighting or HVAC control are candidates for implementing low or even no security on the LLN.  Conversely, military or pharmaceutical facilities require strong security policies.  As noted in the installation procedures above, security policies must be flexible enough to allow no security during the installation phase (prior to building occupancy), yet to easily raise the security level network wide during the commissioning phase of the system.

7.1. Security Requirements

7.1.1. Authentication

Authentication SHOULD be optional on the LLN.  Authentication SHOULD be fully configurable on-site.  Authentication policy and updates MUST be routable over-the-air.  Authentication SHOULD occur upon joining or rejoining a network.  However, once authenticated, devices SHOULD NOT need to reauthenticate with any other devices in the LLN.  Packets may need authentication at the source and destination nodes; however, packets routed through intermediate hops should not need reauthentication at each hop.

7.1.2. Encryption

7.1.2.1. Encryption Types

Data encryption of packets MUST optionally be supported by use of either a network-wide key and/or an application key.  The network key would apply to all devices in the LLN.  The application key would apply to a subset of devices on the LLN.

The network key and application keys would be mutually exclusive.  The routing protocol MUST allow routing a packet encrypted with an application key through forwarding devices without requiring each node in the path to have the application key.

7.1.2.2. Packet Encryption

The encryption policy MUST support encryption of the payload only or the entire packet.  Payload-only encryption would eliminate the decryption/re-encryption overhead at every hop, providing more real-time performance.

7.1.3. Disparate Security Policies

Due to the limited resources of an LLN, the security policy defined within the LLN MUST be able to differ from that of the rest of the IP network within the facility, yet packets MUST still be able to route to or through the LLN from/to these networks.

7.1.4. Routing Security Policies To Sleeping Devices

The routing protocol MUST gracefully handle routing temporal security updates (e.g. dynamic keys) to sleeping devices on their 'awake' cycle to assure that sleeping devices can readily and efficiently access the network.

8. IANA Considerations

This document includes no request to IANA.

9. Acknowledgments

In addition to the authors, J. P. Vasseur, David Culler, Ted Humpal and Zach Shelby are gratefully acknowledged for their contributions to this document.

This document was prepared using 2-Word-v2.0.template.dot.

10. References

10.1. Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2. Informative References

[1]  [I-D.ietf-roll-home-routing-reqs] Brandt, A., Buron, J., and G. Porcu, "Home Automation Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-home-routing-reqs-06 (work in progress), November 2008.

[2]  [I-D.ietf-roll-indus-routing-reqs] Networks, D., Thubert, P., Dwars, S., and T.
Phinney, "Industrial Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-indus-routing-reqs-03 (work in progress), December 2008.

[3]  [I-D.ietf-roll-terminology] Vasseur, J., "Terminology in Low power And Lossy Networks", draft-ietf-roll-terminology-00 (work in progress), October 2008.

[4]  "RS-485 EIA Standard: Standard for Electrical Characteristics of Generators and Receivers for use in Balanced Digital Multipoint Systems", April 1983.

[5]  "BACnet: A Data Communication Protocol for Building Automation and Control Networks", ANSI/ASHRAE Standard 135-2004, 2004.

11. Appendix A: Additional Building Requirements

Appendix A contains additional building requirements that were deemed out of scope for ROLL, yet provided ancillary substance for the reader.

11.1. Additional Commercial Product Requirements

11.1.1. Cost

The total installed infrastructure cost, including but not limited to the media, required infrastructure devices (amortized across the number of devices), and the labor to install and commission the network, must not exceed $1.00/foot for wired implementations.

The total installed cost of wireless implementations must be no more than 80% of that of wired implementations.

11.1.2. Wired and Wireless Implementations

Vendors will likely not develop a separate product line for both wired and wireless networks.  Hence, the solutions set forth must support both wired and wireless implementations.

11.1.3. World-wide Applicability

Wireless devices must be supportable at the 2.4 GHz ISM band.  Wireless devices should be supportable at the 900 MHz and 868 MHz ISM bands as well.

11.1.4. Support of Application Layer Protocols

11.1.4.1. BACnet Building Protocol

BACnet is an ISO world-wide application layer IP protocol.  Devices implementing the ROLL routing protocol should support the BACnet protocol.

11.1.5. Use of Constrained Devices

The network may be composed of a heterogeneous mix of fully powered, battery powered and energy harvesting devices.  The routing protocol must support these constrained devices.

11.1.5.1. Energy Harvested Sensors

Devices utilizing available ambient energy (e.g. solar, air flow, temperature differential) for sensing and communicating should be supported by the solution set.

11.2. Additional Installation and Commissioning Requirements

11.2.1. Device Setup Time

Device and network setup by the installer must take no longer than 20 seconds per device installed.

11.2.2. Unavailability of an IP network

It must be possible for an application engineer to perform product commissioning prior to the installation of the IP network (e.g. switches, routers, DHCP, DNS).

11.3. Additional Network Requirements

11.3.1. TCP/UDP

Connection-based and connectionless services must be supported.

11.3.2. Interference Mitigation

The network must automatically detect interference and seamlessly migrate the network hosts' channel to improve communication.  Channel changes and the nodes' response to the channel change must occur within 60 seconds.

11.3.3. Real-time Performance Measures

A node transmitting a 'request with expected reply' to another node must send the message to the destination and receive the response in not more than 120 msec.  This response time should be achievable with 5 or fewer hops in each direction.  This requirement assumes network quiescence and a negligible turnaround time at the destination node.
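
The per-hop budget implied by this requirement can be seen with a small back-of-the-envelope calculation; the sketch below is illustrative only and simply divides the end-to-end allowance evenly across the assumed worst-case hop count.

    # Illustrative per-hop latency budget for the 120 ms round trip.
    round_trip_ms = 120.0
    hops_each_way = 5                  # worst case allowed by the requirement
    traversals    = 2 * hops_each_way  # request plus reply
    per_hop_ms    = round_trip_ms / traversals
    print("average budget per hop traversal: %.0f ms" % per_hop_ms)  # 12 ms

In other words, with ten hop traversals in the worst case, each forwarding step is left with roughly 12 ms on average for media access, transmission and queuing, which is why route discovery delay (Section 5.7.4) and per-hop security processing (Section 7.1.2.2) are constrained elsewhere in this document.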

11.3.4. Packet Reliability

Reliability must meet the following minimum criteria:

   < 1% MAC layer errors on all messages, after no more than three retries;

   < 0.1% network layer errors on all messages, after no more than three additional retries;

   < 0.01% application layer errors on all messages.

Therefore application layer messages will fail no more than once every 10,000 messages.
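
A short calculation illustrates how retries drive the residual error rate down through the layers.  The sketch below is illustrative only; the single-attempt loss rate is an assumption, and losses are assumed independent per attempt, which real links only approximate.

    # Illustrative retry arithmetic for the layered reliability budget.
    p_attempt    = 0.30              # assumed raw loss rate of a single attempt on a poor link
    mac_residual = p_attempt ** 4    # initial attempt plus three MAC retries -> 0.81%
    app_residual = 0.0001            # the 0.01% application layer bound
    print("MAC residual after three retries: %.2f%%" % (mac_residual * 100))
    print("application bound: at most 1 failure in %d messages" % round(1 / app_residual))

Even a link that loses roughly a third of individual transmissions can meet the 1% MAC layer bound with three retries, while the application layer bound corresponds to at most one lost message in every 10,000.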

11.3.5. Merging Commissioned Islands

Subsystems are commissioned by various vendors at various times during building construction.  These subnetworks must seamlessly merge into networks and networks must seamlessly merge into internetworks since the end user wants a holistic view of the system.

11.3.6. Adjustable System Table Sizes

Routing must support adjustable router table entry sizes on a per node basis to maximize limited RAM in the devices.

11.3.7. Communication Distance

A source device may be up to 1000 feet from its destination.  Communication may need to be established between these devices without needing to install other intermediate 'communication only' devices such as repeaters.

11.3.8. Automatic Gain Control

For wireless implementations, the device radios should incorporate automatic transmit power regulation to maximize packet transfer and minimize network interference regardless of network size or density.

11.3.9. IPv4 Compatibility

The routing protocol must support cost-effective intercommunication among IPv4 and IPv6 devices.

11.3.10. Proxying for Sleeping Devices

Routing must support in-bound packet caches for low-power (battery and energy harvested) devices when these devices are not accessible on the network.

These devices must have a designated powered proxying device to which packets will be temporarily routed and cached until the constrained device accesses the network.

11.3.11. Device and Network Integrity

Commercial building devices must all be periodically scanned to assure that each device is viable and can communicate data and alarm information as needed.  Network routers should maintain previous packet flow information temporarily to minimize overall network overhead.

11.4. Additional Performance Requirements

11.4.1. Data Rate Performance

An effective data rate of 20 kbit/s is the lowest acceptable operational data rate on the network.

11.4.2. Firmware Upgrades

To support high speed code downloads, routing MUST support transports that provide parallel downloads to targeted devices yet guarantee packet delivery.  In cases where the spatial position of the devices requires multiple hops, the algorithm must recurse through the network until all targeted devices have been serviced.  Devices receiving a download MAY cease normal operation, but upon completion of the download must automatically resume normal operation.

11.4.3. Prioritized Routing

Network and application routing prioritization is required to assure that mission critical applications (e.g. Fire Detection) cannot be deferred while less critical applications access the network.

11.4.4. Path Persistence

To eliminate high network traffic in power-fail or brown-out conditions, previously established routes SHOULD be remembered and invoked prior to establishing new routes for those devices reentering the network.

11.5. Additional Network Security Requirements

11.5.1. Encryption Levels

Encryption SHOULD be optional on the LLN.  Encryption SHOULD be fully configurable on-site.  Encryption policy and updates SHOULD be transmittable over-the-air and in-the-clear.

11.5.2. Security Policy Flexibility

In most facilities, authentication and encryption will be turned off during installation.

More complex encryption policies might be put in force at commissioning time.  New encryption policies MUST be allowed to be presented to all devices in the LLN over the network without needing to visit each device.

12. Appendix B: FMS Use-Cases

Appendix B contains FMS use-cases that describe the use of sensors and controllers for various applications within a commercial building and how they interplay with energy conservation and life-safety applications.

The Vooruit arts centre is a restored monument which dates from 1913.  This complex monument consists of over 350 different rooms including meeting rooms, large public halls and theaters serving as many as 2500 guests.  A number of use cases regarding Vooruit are described in the following text.  The situations and needs described in these use cases can also be found in all automated large buildings, such as airports and hospitals.

12.1. Locking and Unlocking the Building

A member of the cleaning staff arrives first in the morning and unlocks the building (or a part of it) from the control room.  This means that several doors are unlocked; the alarms are switched off; the heating turns on; some lights switch on, etc.  Similarly, the last person leaving the building has to lock the building.  This will lock all the outer doors, turn the alarms on, switch off heating and lights, etc.

The "building locked" or "building unlocked" event needs to be delivered to a subset of all the sensors and actuators.  It can be beneficial if those field devices form a group (e.g. "all-sensors-actuators-interested-in-lock/unlock-events").  Alternatively, the area and zone controllers could form a group where the arrival of such an event results in each area and zone controller initiating unicast or multicast within the LLN.

This use case is also described in the home automation routing requirements [I-D.ietf-roll-home-routing-reqs], although the requirement about preventing the "popcorn effect" can be relaxed a bit in building automation.  It would be nice if lights, roll-down shutters and other actuators in the same room or area with transparent walls execute the command around (not 'at') the same time (a tolerance of 200 ms is allowed).

12.2. Building Energy Conservation

A room that is not in use should not be heated, air conditioned or ventilated and the lighting should be turned off or dimmed.  In a building with many rooms it can happen quite frequently that someone forgets to switch off the HVAC and lighting, thereby wasting valuable energy.  To prevent this occurrence, the facility manager might program the building according to the day's schedule.
This way, lighting and HVAC are turned on prior to the use of a room, and turned off afterwards.  Using such a system, Vooruit has realized a saving of 35% on its gas and electricity bills.

12.3. Inventory and Remote Diagnosis of Safety Equipment

Each month Vooruit is obliged to make an inventory of its safety equipment.  This task takes two working days.  Each fire extinguisher (100), fire blanket (10), fire-resistant door (120) and evacuation plan (80) must be checked for presence and proper operation.  Also, the battery and lamp of every safety lamp must be checked before each public event (as required by safety laws).  Automating this process using asset tracking and low-power wireless technologies would relieve a heavy burden on working hours.

It is important that these messages are delivered very reliably and that the power consumption of the sensors/actuators attached to this safety equipment is kept at a very low level.

12.4. Life Cycle of Field Devices

Some field devices (e.g. smoke detectors) are replaced periodically.  The ease with which devices are added to and deleted from the network is very important to support augmenting sensors/actuators during construction.

A secure mechanism is needed to remove the old device and install the new device.  New devices need to be authenticated before they can participate in the routing process of the LLN.  After the authentication, zero-configuration of the routing protocol is necessary.

12.5. Surveillance

Ingress and egress are real-time applications needing response times below 500 msec, for example for cardkey authorization.  It must be possible to configure doors individually to restrict use on a per-person basis with respect to time-of-day and the person entering.  While much of the surveillance application involves sensing and actuation at the door and communication with the centralized security system, other aspects, including tamper, door ajar, and forced entry notification, are to be delivered to one or more fixed or mobile user devices within 5 seconds.

12.6. Emergency

In case of an emergency, it is very important that all the visitors be evacuated as quickly as possible.  The fire and smoke detectors set off an alarm and alert the mobile personnel on their user device (e.g. PDA).  All emergency exits are instantly unlocked and the emergency lighting guides the visitors to these exits.  The necessary sprinklers are activated and the electricity grid is monitored in case it becomes necessary to shut down some parts of the building.  Emergency services are notified instantly.

A wireless system could bring in some extra safety features.  Locating firefighters and guiding them through the building could be a life-saving application.

These life critical applications ought to take precedence over other network traffic.  Commands entered during these emergencies have to be properly authenticated by device, user, and command request.

12.7. Public Address

It should be possible to send audio and text messages to the visitors in the building.  These messages can be very diverse, e.g. ASCII text boards displaying the name of the event in a room, audio announcements such as delays in the program, lost and found children, evacuation orders, etc.

The control network is expected to be able to readily sense the presence of an audience in an area and deliver applicable message content.

Authors' Addresses

Jerry Martocci
Johnson Controls Inc.
507 E. Michigan Street
Milwaukee, Wisconsin, 53202
USA

Phone: 414.524.4010
Email: jerald.p.martocci@jci.com

Nicolas Riou
Schneider Electric
Technopole 38TEC T3
37 quai Paul Louis Merlin
38050 Grenoble Cedex 9
France

Phone: +33 4 76 57 66 15
Email: nicolas.riou@fr.schneider-electric.com

Pieter De Mil
Ghent University - IBCN
G. Crommenlaan 8 bus 201
Ghent 9050
Belgium

Phone: +32-9331-4981
Fax: +32-9331-4899
Email: pieter.demil@intec.ugent.be

Wouter Vermeylen
Arts Centre Vooruit
???
Ghent 9000
Belgium

Phone: ???
Fax: ???
Email: wouter@vooruit.be