Networking Working Group                                J. Martocci, Ed.
Internet-Draft                                     Johnson Controls Inc.
Intended status: Informational                              P. De Mil
Expires: August 2, 2009                            Ghent University IBCN
                                                            W. Vermeylen
                                                     Arts Centre Vooruit
                                                                 N. Riou
                                                      Schneider Electric
                                                        February 2, 2009

Building Automation Routing Requirements in Low Power and Lossy Networks
draft-ietf-roll-building-routing-reqs-03

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on August 2, 2009.

Copyright Notice

Copyright (c) 2009 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.

Abstract

The Routing Over Low power and Lossy networks (ROLL) Working Group has been chartered to work on routing solutions for Low Power and Lossy Networks (LLNs) in various markets: Industrial, Commercial (Building), Home and Urban. Pursuant to this effort, this document defines the routing requirements for building automation.
Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

Table of Contents

1. Terminology
2. Introduction
3. Facility Management System (FMS) Topology
   3.1. Introduction
   3.2. Sensors/Actuators
   3.3. Area Controllers
   3.4. Zone Controllers
4. Installation Methods
   4.1. Wired Communication Media
   4.2. Device Density
        4.2.1. HVAC Device Density
        4.2.2. Fire Device Density
        4.2.3. Lighting Device Density
        4.2.4. Physical Security Device Density
   4.3. Installation Procedure
5. Building Automation Routing Requirements
   5.1. Installation
        5.1.1. Zero-Configuration Installation
        5.1.2. Sleeping Devices
        5.1.3. Local Testing
        5.1.4. Device Replacement
   5.2. Scalability
        5.2.1. Network Domain
        5.2.2. Peer-to-Peer Communication
   5.3. Mobility
        5.3.1. Mobile Device Requirements
   5.4. Resource Constrained Devices
        5.4.1. Limited Processing Power for Non-routing Devices
        5.4.2. Limited Processing Power for Routing Devices
   5.5. Addressing
        5.5.1. Unicast/Multicast/Anycast
   5.6. Manageability
        5.6.1. Firmware Upgrades
        5.6.2. Diagnostics
        5.6.3. Route Tracking
   5.7. Route Selection
        5.7.1. Path Cost
        5.7.2. Path Adaptation
        5.7.3. Route Redundancy
        5.7.4. Route Discovery Time
        5.7.5. Route Preference
6. Traffic Pattern
7. Security Considerations
   7.1. Security Requirements
        7.1.1. Authentication
        7.1.2. Encryption
        7.1.3. Disparate Security Policies
8. IANA Considerations
9. Acknowledgments
10. References
    10.1. Normative References
    10.2. Informative References
11. Appendix A: Additional Building Requirements
    11.1. Additional Commercial Product Requirements
          11.1.1. Wired and Wireless Implementations
          11.1.2. World-wide Applicability
          11.1.3. Support of the BACnet Building Protocol
          11.1.4. Support of the LON Building Protocol
          11.1.5. Energy-Harvested Sensors
          11.1.6. Communication Distance
          11.1.7. Automatic Gain Control
          11.1.8. Cost
          11.1.9. IPv4 Compatibility
    11.2. Additional Installation and Commissioning Requirements
          11.2.1. Device Setup Time
          11.2.2. Unavailability of an IT Network
    11.3. Additional Network Requirements
          11.3.1. TCP/UDP
          11.3.2. Data Rate Performance
          11.3.3. High Speed Downloads
          11.3.4. Interference Mitigation
          11.3.5. Real-time Performance Measures
          11.3.6. Packet Reliability
          11.3.7. Merging Commissioned Islands
          11.3.8. Adjustable System Table Sizes
    11.4. Prioritized Routing
          11.4.1. Packet Prioritization
    11.5. Constrained Devices
          11.5.1. Proxying for Constrained Devices
    11.6. Reliability
          11.6.1. Device Integrity
    11.7. Path Persistence
12. Appendix B: FMS Use-Cases
    12.1. Locking and Unlocking the Building
    12.2. Building Energy Conservation
    12.3. Inventory and Remote Diagnosis of Safety Equipment
    12.4. Life Cycle of Field Devices
    12.5. Surveillance
    12.6. Emergency
    12.7. Public Address

1. Terminology

For a description of the terminology used in this specification, please see [I-D.ietf-roll-terminology].

2. Introduction

Commercial buildings have been fitted with pneumatic, and subsequently electronic, communication pathways connecting sensors to their controllers for over one hundred years. Recent economic and technical advances in wireless communication allow facilities to increasingly utilize a wireless solution in lieu of a wired solution, thereby reducing installation costs while maintaining highly reliable communication.
The cost benefits and ease of installation of wireless sensors allow customers to further instrument their facilities with additional sensors, providing tighter control while yielding increased energy savings.

Wireless solutions will be adapted from their existing wired counterparts in many building applications including, but not limited to, Heating, Ventilation, and Air Conditioning (HVAC), Lighting, Physical Security, Fire, and Elevator systems. These devices will be developed to reduce installation costs, while increasing installation and retrofit flexibility as well as increasing sensing fidelity to improve efficiency and building service quality.

Sensing devices may be battery or mains powered. Actuators and area controllers will be mains powered. Nevertheless, a mix of wired and wireless sensors and actuators is envisioned within buildings.

Facility Management Systems (FMS) are deployed in a large set of vertical markets including universities; hospitals; government facilities; Kindergarten through High School (K-12); pharmaceutical manufacturing facilities; and single-tenant or multi-tenant office buildings. These buildings range in size from 100K sq. ft. structures (5-story office buildings), to 1M sq. ft. structures (100-story skyscrapers), to complex government facilities such as the Pentagon. The topology described here is meant to be the model used in all these types of environments, but it clearly must be tailored to the building class, building tenant and vertical market being served.

The following sections describe the sensor, actuator, area controller and zone controller layers of the topology. (NOTE: The Building Controller and Enterprise layers of the FMS are excluded from this discussion since they typically deal in communication rates requiring LAN/WLAN communication technologies.)

Section 3 describes FMS architectures commonly installed in commercial buildings. Section 4 describes installation methods deployed for new and remodeled construction. Appendix B describes various FMS use-cases and the interaction with humans for energy conservation and life-safety applications.

Sections 3 and 4 and Appendix B are mainly included for educational purposes. The aim of this document is to provide the set of IPv6 routing requirements for LLNs in buildings, as described in Section 5.

3. Facility Management System (FMS) Topology

3.1. Introduction

To understand the network systems requirements of a facility management system in a commercial building, this document uses a framework to describe the basic functions and composition of the system. An FMS is a hierarchical system of sensors, actuators, controllers and user interface devices, organized by spatial extent. Additionally, an FMS may also be divided functionally across similar, but distinct, building subsystems such as the HVAC, Fire, Security, Lighting, Shutter and Elevator control systems denoted in Figure 1.

Much of the makeup of an FMS is optional and installed at the behest of the customer. Sensors and actuators have no standalone functionality. All other devices support partial or complete standalone functionality. These devices can optionally be tethered to form a more cohesive system. The customer requirements dictate the level of integration within the facility.
This architecture provides excellent fault tolerance since each node is designed to operate in an independent mode if the higher layers are unavailable.

                 +-----+  +-----+  +-----+  +-----+  +-----+  +-----+
   Bldg App'ns   |     |  |     |  |     |  |     |  |     |  |     |
                 |     |  |     |  |     |  |     |  |     |  |     |
   Building Cntl |     |  |     |  |  S  |  |  L  |  |  S  |  |  E  |
                 |     |  |     |  |  E  |  |  I  |  |  H  |  |  L  |
   Area Control  |  H  |  |  F  |  |  C  |  |  G  |  |  U  |  |  E  |
                 |  V  |  |  I  |  |  U  |  |  H  |  |  T  |  |  V  |
   Zone Control  |  A  |  |  R  |  |  R  |  |  T  |  |  T  |  |  A  |
                 |  C  |  |  E  |  |  I  |  |  I  |  |  E  |  |  T  |
   Actuators     |     |  |     |  |  T  |  |  N  |  |  R  |  |  O  |
                 |     |  |     |  |  Y  |  |  G  |  |  S  |  |  R  |
   Sensors       |     |  |     |  |     |  |     |  |     |  |     |
                 +-----+  +-----+  +-----+  +-----+  +-----+  +-----+

                    Figure 1: Building Systems and Devices

3.2. Sensors/Actuators

As Figure 1 indicates, an FMS may be composed of many functional stacks or silos that are interoperably woven together via Building Applications. Each silo has an array of sensors that monitor the environment and actuators that affect the environment as determined by the upper layers of the FMS topology. The sensors typically form the fringe of the network structure, providing environmental data to the system. The actuators are the sensors' counterparts, modifying the characteristics of the system based on the input sensor data and the applications deployed.

3.3. Area Controllers

An area describes a small physical locale within a building, typically a room. HVAC (temperature and humidity) and Lighting (room lighting, shades, solar loads) vendors often deploy area controllers. Area controllers are fed by sensor inputs that monitor the environmental conditions within the room. Common sensors found in many rooms that feed the area controllers include temperature, occupancy, lighting load, solar load and relative humidity sensors. Sensors found in specialized rooms (such as chemistry labs) might include air flow, pressure, CO2 and CO particle sensors. Room actuation includes temperature setpoint, lights and blinds/curtains.

3.4. Zone Controllers

Zone Control supports a similar set of characteristics as Area Control, albeit over an extended space. A zone is normally a logical grouping or functional division of a commercial building. A zone may also coincidentally map to a physical locale such as a floor.

Zone Control may have direct sensor inputs (smoke detectors for fire), controller inputs (room controllers for air-handlers in HVAC) or both (door controllers and tamper sensors for security). Like area/room controllers, zone controllers are standalone devices that operate independently or may be attached to the larger network for more synergistic control.

4. Installation Methods

4.1. Wired Communication Media

Commercial controllers are traditionally deployed in a facility using twisted-pair serial media following the EIA-485 electrical standard, operating nominally at 38400 to 76800 baud. This allows runs of up to 5000 ft without a repeater. With the maximum of three repeaters, a single communication trunk can serpentine 15000 ft. EIA-485 is a multi-drop medium allowing up to 255 devices to be connected to a single trunk.

Most sensors and virtually all actuators currently used in commercial buildings are "dumb", non-communicating hardwired devices.
However, sensor buses used for smart sensors and point multiplexing are beginning to be deployed by vendors. The Fire industry deploys addressable fire devices, which usually use some form of proprietary communication wiring driven by fire codes.

4.2. Device Density

Device density differs depending on the application and as dictated by local building code requirements. The following sections detail typical installation densities for different applications.

4.2.1. HVAC Device Density

HVAC room applications typically have sensors/actuators and controllers spaced about 50 ft apart. In most cases there is a 3:1 ratio of sensors/actuators to controllers. That is, for each room there is an installed temperature sensor, flow sensor and damper actuator for the associated room controller.

HVAC equipment room applications are quite different. An air handler system may have a single controller with up to 25 sensors and actuators within 50 ft of the air handler. A chiller or boiler is also controlled with a single equipment controller instrumented with 25 sensors and actuators. Each of these devices would be individually addressed since the devices are mandated or optional as defined by the specified HVAC application. Air handlers typically serve one or two floors of the building. Chillers and boilers may be installed per floor, but many times service a wing, a building or the entire complex via a central plant.

These numbers are typical. In special cases, such as clean rooms, operating rooms, pharmaceutical facilities and labs, the ratio of sensors to controllers can increase by a factor of three. Tenant installations such as malls would opt for packaged units where much of the sensing and actuation is integrated into the unit. Here a single device address would serve the entire unit.

4.2.2. Fire Device Density

Fire systems are much more uniformly installed, with smoke detectors installed about every 50 feet, as dictated by local building codes. Fire pull boxes are installed uniformly about every 150 feet. A fire controller will service a floor or wing. The fireman's fire panel services the entire building and typically is installed in the atrium.

4.2.3. Lighting Device Density

Lighting is also very uniformly installed, with ballasts installed approximately every 10 feet. A lighting panel typically serves 48 to 64 zones. Wired systems typically tether many lights together into a single zone. Wireless systems configure each fixture independently to increase flexibility and reduce installation costs.

4.2.4. Physical Security Device Density

Security systems are non-uniformly oriented, with heavy density near doors and windows and lighter density in the building interior space. The recent influx of interior and perimeter camera systems is increasing the security footprint. These cameras are atypical endpoints requiring up to 1 megabit/second (Mbit/s) data rates per camera, in contrast to the few kbit/s needed by most other FMS sensing equipment. Previously, camera systems had been deployed on proprietary wired high-speed networks. More recent implementations utilize wired or wireless IP cameras integrated into the enterprise LAN.
4.3. Installation Procedure

Wired FMS installation is a multifaceted procedure depending on the extent of the system and the software interoperability requirements. However, at the sensor/actuator and controller level, the procedure is typically a two- or three-step process.

Most FMS equipment utilizes 24 VAC power sources that can be installed by a low-voltage electrician. He/she arrives on-site during the construction of the building, prior to the sheet wall and ceiling installation. This allows him/her to allocate wall space, easily land the equipment and run the wired controller and sensor networks. The Building Controllers and Enterprise network are not normally installed until months later. The electrician completes his task by running a wire verification procedure that shows proper continuity between the devices and proper local operation of the devices.

Later in the installation cycle, the higher-order controllers are installed, programmed and commissioned together with the previously installed sensors, actuators and controllers. In most cases the IP network is still not operable. The Building Controllers are completely commissioned using a crossover cable or a temporary IP switch together with static IP addresses.

Once the IP network is operational, the FMS may optionally be added to the enterprise network.

The wireless installation process must follow the same work flow. The electrician will install the products as before and run local functional tests between the wireless devices to assure operation before leaving the job. The electrician does not carry a laptop, so the commissioning must be built into the device operation.

5. Building Automation Routing Requirements

Following are the building automation routing requirements for a network used to integrate building sensor, actuator and control products. These requirements have been limited to routing requirements only. These requirements are written without presuming any preordained network topology, physical media (wired) or radio technology (wireless). See Appendix A for additional requirements that have been deemed outside the scope of this document yet pertain to the successful deployment of building automation systems.

5.1. Installation

Building control systems typically are installed and tested by electricians having little computer knowledge and no network knowledge whatsoever. These systems are often installed during the building construction phase, before the drywall and ceilings are in place. For new construction projects, the building enterprise IP network is not in place during installation of the building control system.

In retrofit applications, pulling wires from sensors to controllers can be costly and in some applications (e.g. museums) not feasible.

Local (ad hoc) testing of sensors and room controllers must be completed before the tradesperson can complete his/her work. This testing allows the tradesperson to verify correct client (e.g. light switch) and server (e.g. light ballast) operation before leaving the jobsite.
In traditional wired systems, verifying correct operation of a light switch/ballast pair was as simple as flipping on the light switch. In wireless applications, the tradesperson has to assure the same operation, yet be sure that the operation of the light switch is associated with the proper ballast.

System-level commissioning will later be performed by a more computer-savvy person with access to a commissioning device (e.g. a laptop computer). The completely installed and commissioned enterprise IP network may or may not be in place at this time. Following are the installation routing requirements.

5.1.1. Zero-Configuration Installation

It MUST be possible to fully commission network devices without requiring any additional commissioning device (e.g. laptop).

5.1.2. Sleeping Devices

Sensing devices will, in some cases, utilize battery power or energy-harvesting techniques for power and will operate mostly in a sleep mode to maintain power consumption within a modest budget. The routing protocol MUST take into account device characteristics such as power budget. If such devices provide routing, rather than merely host connectivity, the energy costs associated with such routing need to fit within the power budget. If the mechanisms for duty cycling dictate very long response times or specific temporal scheduling, routing will need to take such constraints into account.

Typically, batteries need to be operational for at least 5 years when the sensing device is transmitting its data (e.g. 64 bytes) once per minute. This requires that sleeping devices have minimal link-on time when they awake and transmit onto the network. Moreover, maintaining the ability to receive inbound data must be accomplished with minimal link-on time.

In many cases, proxies with unconstrained power budgets are used to cache the inbound data for a sleeping device until the device awakens. In such cases, the routing protocol MUST discover the capability of a node to act as a proxy during path calculation and deliver the packet to the assigned proxy for later delivery to the sleeping device upon its next awake cycle.

5.1.3. Local Testing

The local sensors and requisite actuators and controllers must be testable within the locale (e.g. room) to assure communication connectivity and local operation without requiring other systemic devices. Routing should allow for temporary ad hoc paths to be established that are updated as the network physically and functionally expands.

5.1.4. Device Replacement

Replacement devices need to be plug-and-play, with no additional setup compared to what is normally required for a new device. Devices referencing data in the replaced device must be able to reference data in its replacement without being reconfigured to refer to the new device. Thus, such a reference cannot be a hardware identifier, such as the MAC address, nor a hard-coded route. If such a reference is an IP address, the replacement device must be assigned the IP address previously bound to the replaced device. Or, if the logical equivalent of a hostname is used for the reference, it must be translated to the replacement IP address. A non-normative sketch of such name indirection follows.
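(Informative) The following minimal sketch illustrates the logical-reference indirection described above. The name service ("resolve"), the name table and the transport stub are illustrative assumptions, not part of any requirement; a deployment might use DNS, mDNS or a vendor directory instead.

   # Informative sketch: referencing a peer by a stable logical name
   # rather than by MAC address or hard-coded IP, so that a replacement
   # device is found without reconfiguring the referencing device.

   DEVICE_NAMES = {"zone3.damper1": "2001:db8::17"}  # updated on replacement

   def resolve(logical_name: str) -> str:
       """Return the IPv6 address currently bound to a logical name."""
       return DEVICE_NAMES[logical_name]

   def send_request(address: str, payload: bytes) -> bytes:
       # Transport stub; a real device would use its UDP/TCP stack here.
       return b""

   def read_sensor(logical_name: str) -> bytes:
       address = resolve(logical_name)       # re-resolved on every access,
       return send_request(address, b"GET")  # so a swapped device "just works"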
5.2. Scalability

Building control systems are designed for facilities from 50000 sq. ft. to 1M+ sq. ft. The networks that support these systems must cost-effectively scale accordingly. In larger facilities, installation may occur simultaneously on various wings or floors, yet the end system must seamlessly merge. Following are the scalability requirements.

5.2.1. Network Domain

The routing protocol MUST be able to support networks with at least 2000 nodes, supporting at least 1000 routing devices and 1000 non-routing devices. Subnetworks (e.g. rooms, primary equipment) within the network must support up to 255 sensors and/or actuators.

5.2.2. Peer-to-Peer Communication

The data domain for commercial FMS systems may sprawl across a vast portion of the physical domain. For example, a chiller may reside in the facility's basement due to its size, yet the associated cooling towers will reside on the roof. The cold-water supply and return pipes serpentine through all the intervening floors. The feedback control loops for these systems require data from across the facility.

A network device must be able to communicate in a peer-to-peer manner with any other device on the network. Thus, the routing protocol MUST provide routes between arbitrary hosts within the appropriate administrative domain.

5.3. Mobility

Most devices are affixed to walls or installed on ceilings within buildings. Hence the mobility requirements for commercial buildings are few. However, in wireless environments location tracking of occupants and assets is gaining favor. Asset tracking applications require monitoring movement with a granularity of a minute. This soft real-time performance requirement is reflected in the performance requirements below.

5.3.1. Mobile Device Requirements

To minimize network dynamics, mobile devices SHOULD NOT be allowed to act as forwarding devices (routers) for other devices in the LLN.

A mobile device that moves within an LLN SHOULD reestablish end-to-end communication to a fixed device also in the LLN within 2 seconds. The network convergence time should be less than 5 seconds once the mobile device stops moving.

A mobile device that moves outside of an LLN SHOULD reestablish end-to-end communication to a fixed device in the new LLN within 5 seconds. The network convergence time should be less than 5 seconds once the mobile device stops moving.

A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to a fixed device in the old LLN within 10 seconds. The network convergence time should be less than 10 seconds once the mobile device stops.

A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to another mobile device in the new LLN within 20 seconds. The network convergence time should be less than 30 seconds once the mobile devices stop moving.

A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to a mobile device in the old LLN within 30 seconds. The network convergence time should be less than 30 seconds once the mobile devices stop moving.
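(Informative) The mobility timing requirements above may be summarized as follows:

   Scenario (after the move)                   Reestablish    Convergence
   -----------------------------------------   -----------    -----------
   Within LLN, to fixed device                   2 s            < 5 s
   Into new LLN, to fixed device (new LLN)       5 s            < 5 s
   Into new LLN, to fixed device (old LLN)      10 s           < 10 s
   Into new LLN, to mobile device (new LLN)     20 s           < 30 s
   Into new LLN, to mobile device (old LLN)     30 s           < 30 s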
5.4. Resource Constrained Devices

Sensing and actuator device processing power and memory may be four orders of magnitude less (i.e. 10,000x) than those of many more traditional client devices on an IP network. The routing mechanisms must therefore be tailored to fit these resource-constrained devices.

5.4.1. Limited Processing Power for Non-routing Devices

The software size requirement for non-routing devices (e.g. sleeping sensors and actuators) SHOULD be implementable in 8-bit devices with no more than 128 KB of memory.

5.4.2. Limited Processing Power for Routing Devices

The software size requirement for routing devices (e.g. room controllers) SHOULD be implementable in 8-bit devices with no more than 256 KB of flash memory.

5.5. Addressing

Facility Management Systems require different communication schemes to solicit or post network information. Broadcasts or anycasts need to be used to resolve unresolved references within a device when the device first joins the network.

As with any network communication, broadcasting should be minimized. This is especially a problem for small embedded devices with limited network bandwidth. In many cases a global broadcast could be replaced with a multicast, since the application knows the application domain. Broadcasts and multicasts are typically used for network joins and application binding in embedded systems.

5.5.1. Unicast/Multicast/Anycast

Routing MUST support anycast, unicast, and multicast.

5.6. Manageability

In addition to the initial installation of the system (see Section 5.1), it is equally important for the ongoing maintenance of the system to be simple and inexpensive.

5.6.1. Firmware Upgrades

To support high-speed code downloads, routing MUST support transports that provide parallel downloads to targeted devices yet guarantee packet delivery. In cases where the spatial position of the devices requires multiple hops, the algorithm must recurse through the network until all targeted devices have been serviced.

5.6.2. Diagnostics

To improve diagnostics, the network layer SHOULD be able to be placed in and out of 'verbose' mode. Verbose mode is a temporary debugging mode that provides additional communication information including at least the total number of routing packets sent and received, the number of routing failures (no route available), the neighbor table, and the routing table entries.

5.6.3. Route Tracking

Route diagnostics SHOULD be supported, providing information such as path quality, number of hops, and available alternate active paths with associated costs. Path quality is the relative measure of 'goodness' of the selected source-to-destination path as compared to alternate paths. This composite value may be measured as a function of hop count, signal strength, available power, existing active paths or any other criteria deemed by ROLL to be the path cost differentiator.

5.7. Route Selection

Route selection determines the reliability and quality of the communication paths among the devices. Optimizing the routes over time resolves any nuances developed at system startup, when nodes are asynchronously adding themselves to the network. Path adaptation will reduce latency if the path costs consider hop count as a cost attribute.

5.7.1. Path Cost

The routing protocol MUST support a metric of route quality and optimize path selection according to such metrics within constraints established for links along the paths. These metrics SHOULD reflect properties such as signal strength, available bandwidth, hop count, energy availability and communication error rates.
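(Informative) A minimal sketch of one way such a composite path cost might be computed is shown below. The particular weights, thresholds and combination are illustrative assumptions only; a routing protocol would define its own metric and constraints.

   # Informative sketch: an additive per-link cost built from the metric
   # inputs named above (link quality, signal strength, energy).  All
   # weights are illustrative assumptions.

   def link_cost(etx: float, rssi_dbm: float, battery_pct: float) -> float:
       """Higher cost = less attractive link."""
       cost = etx                                   # expected transmissions
       cost += max(0.0, (-70.0 - rssi_dbm) / 10.0)  # penalize weak signal
       cost += (100.0 - battery_pct) / 50.0         # penalize drained routers
       return cost

   def path_cost(links: list[tuple[float, float, float]]) -> float:
       # Summing per-hop costs also captures hop count implicitly.
       return sum(link_cost(*link) for link in links)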
5.7.2. Path Adaptation

Communication paths MUST adapt over time toward optimality in the chosen metric(s) (e.g. signal quality).

5.7.3. Route Redundancy

The network layer SHOULD be configurable to allow secondary and tertiary paths to be established and used upon failure of the primary path.

5.7.4. Route Discovery Time

Mission-critical commercial applications (e.g. Fire, Security) require reliable communication and guaranteed end-to-end delivery of all messages in a timely fashion. Application-layer time-outs must be selected judiciously to cover anomalous conditions such as lost packets and/or path discoveries, yet not be set so large as to overdamp the network response. If route discovery occurs during packet transmission time, it SHOULD NOT add more than 120 ms of latency to the packet delivery time.

5.7.5. Route Preference

Route cost algorithms SHOULD allow the installer to optionally select 'preferred' paths based on the known spatial layout of the communicating devices.

6. Traffic Pattern

The independent nature of the automation systems within a building weighs heavily on the network traffic patterns. Much of the real-time sensor data stays within the local environment. Alarming and other event data will percolate to higher layers.

Systemic data may be either polled or event based. Polled data systems generate a uniform packet load on the network. This architecture has proven not to be scalable. Most vendors have developed event-based systems which pass data on events. These systems are highly scalable and generate a low data load on the network at quiescence. Unfortunately, these systems will generate a heavy load on startup since all the initial data must migrate to the controller level. They will also generate a temporary but heavy load during firmware upgrades. This latter load can normally be mitigated by performing these downloads during off-peak hours.

Devices will need to reference peers occasionally for sensor data or to coordinate across systems. Normally, though, data will migrate from the sensor level upwards through the local, area and then supervisory levels. Bottlenecks will typically form at the funnel point from the area controllers to the supervisory controllers.

Initial system startup after a controlled outage or unexpected power failure puts tremendous stress on the network and on the routing algorithms. An FMS is comprised of a myriad of control algorithms at the room, area, zone, and enterprise layers. When these control algorithms are at quiescence, the real-time data changes are small and the network will not saturate. However, upon any power loss, the control loops and real-time data quickly atrophy. A ten-minute outage may take many hours to regain control.

Upon restart, all line-powered devices power on instantaneously. However, due to application startup and self-tests, these devices will attempt to join the network at random times. Empirical testing indicates that routing paths acquired during startup will tend to be very oblique, since the available neighbor lists are incomplete. This demands an adaptive routing protocol that allows for path optimization as the network stabilizes.
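(Informative) The event-based reporting pattern described above is commonly realized as change-of-value (COV) reporting with a deadband. The sketch below illustrates the idea; the 0.5-degree deadband and the publish step are illustrative assumptions.

   # Informative sketch: change-of-value (COV) reporting with a deadband,
   # the event-based pattern described above.  At quiescence, small
   # fluctuations generate no network traffic.

   class CovReporter:
       def __init__(self, deadband: float = 0.5):
           self.deadband = deadband
           self.last_reported = None

       def sample(self, value: float) -> bool:
           """Report only when the value moves beyond the deadband."""
           if (self.last_reported is None
                   or abs(value - self.last_reported) >= self.deadband):
               self.last_reported = value
               return True      # caller publishes the new value
           return False         # quiescent: no network traffic

   reporter = CovReporter()
   for reading in (21.0, 21.1, 21.2, 21.9, 22.0):
       if reporter.sample(reading):
           print("publish", reading)   # publishes 21.0 and 21.9 only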
7. Security Considerations

Security policies, especially wireless encryption and device authentication, need to be considered, especially with regard to the impact on the processing capabilities of, and the additional latency incurred on, the sensors, actuators and controllers.

FMS systems are typically highly configurable in the field, and hence the security policy is most often dictated by the type of building in which the FMS is being installed. Single-tenant, owner-occupied office buildings installing lighting or HVAC control are candidates for implementing low or even no security on the LLN. Antithetically, military or pharmaceutical facilities require strong security policies. As noted in the installation procedures above, security policies must be flexible enough to allow no security during the installation phase (prior to building occupancy), yet easily raise the security level network-wide during the commissioning phase of the system.

7.1. Security Requirements

7.1.1. Authentication

Authentication SHOULD be optional on the LLN. Authentication SHOULD be fully configurable on-site. Authentication policy and updates MUST be transmittable over-the-air. Authentication SHOULD occur upon joining or rejoining a network. However, once authenticated, devices SHOULD NOT need to reauthenticate themselves with any other devices in the LLN. Packets may need authentication at the source and destination nodes; however, packets routed through intermediate hops should not need to be reauthenticated at each hop.

7.1.2. Encryption

7.1.2.1. Encryption Levels

Encryption SHOULD be optional on the LLN. Encryption SHOULD be fully configurable on-site. Encryption policy and updates SHOULD be transmittable over-the-air and in-the-clear.

7.1.2.2. Security Policy Flexibility

In most facilities, authentication and encryption will be turned off during installation.

More complex encryption policies might be put in force at commissioning time. New encryption policies MUST be allowed to be presented to all devices in the LLN over the network, without needing to visit each device.

7.1.2.3. Encryption Types

Data encryption of packets MUST optionally be supported by use of either a network-wide key and/or an application key. The network key would apply to all devices in the LLN. The application key would apply to a subset of devices on the LLN.

The network key and application keys would be mutually exclusive. Forwarding devices in the mesh MUST be able to forward a packet encrypted with an application key without needing to have the application key.

7.1.2.4. Packet Encryption

The encryption policy MUST support encryption of the payload only or of the entire packet. Payload-only encryption would eliminate the decryption/re-encryption overhead at every hop.

7.1.3. Disparate Security Policies

Due to the limited resources of an LLN, the security policy defined within the LLN MUST be able to differ from that of the rest of the IP network within the facility, yet packets MUST still be able to route to or through the LLN from/to these networks.
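(Informative) The following minimal sketch illustrates payload-only encryption under an application key, as described in Sections 7.1.2.3 and 7.1.2.4: forwarders route on a cleartext header and never need the application key. The header layout is an illustrative assumption; the example uses the Python 'cryptography' package purely for concreteness.

   # Informative sketch: payload-only encryption with an application key.
   # Forwarding nodes read only the cleartext header; the payload is
   # opaque to them.

   from cryptography.fernet import Fernet

   app_key = Fernet.generate_key()     # shared only by the application subset
   app_cipher = Fernet(app_key)

   def build_packet(dst: str, payload: bytes) -> bytes:
       header = dst.encode() + b"|"    # routing information stays in the clear
       return header + app_cipher.encrypt(payload)

   def forward(packet: bytes) -> str:
       # A forwarder routes on the header alone, without the application key.
       dst, _, _ = packet.partition(b"|")
       return dst.decode()

   pkt = build_packet("2001:db8::42", b"setpoint=21.5")
   assert forward(pkt) == "2001:db8::42"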
8. IANA Considerations

This document includes no request to IANA.

9. Acknowledgments

In addition to the authors, J. P. Vasseur, David Culler, Ted Humpal and Zach Shelby are gratefully acknowledged for their contributions to this document.

This document was prepared using 2-Word-v2.0.template.dot.

10. References

10.1. Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2. Informative References

[I-D.ietf-roll-home-routing-reqs]  Brandt, A., Buron, J., and G. Porcu, "Home Automation Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-home-routing-reqs-06 (work in progress), November 2008.

[I-D.ietf-roll-indus-routing-reqs]  Pister, K., Thubert, P., Dwars, S., and T. Phinney, "Industrial Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-indus-routing-reqs-03 (work in progress), December 2008.

[I-D.ietf-roll-terminology]  Vasseur, J., "Terminology in Low power And Lossy Networks", draft-ietf-roll-terminology-00 (work in progress), October 2008.

[EIA-485]  "RS-485 EIA Standard: Standard for Electrical Characteristics of Generators and Receivers for use in Balanced Digital Multipoint Systems", April 1983.

[BACnet]  "BACnet: A Data Communication Protocol for Building Automation and Control Networks", ANSI/ASHRAE Standard 135-2004, 2004.

11. Appendix A: Additional Building Requirements

Appendix A contains additional informative building requirements that were deemed out of scope for this routing document yet provide ancillary informational substance to the reader. These requirements should be addressed by ROLL or other WGs before adoption by the building automation industry.

11.1. Additional Commercial Product Requirements

11.1.1. Wired and Wireless Implementations

Solutions must support both wired and wireless implementations.

11.1.2. World-wide Applicability

Wireless devices must be supportable in the 2.4 GHz ISM band. Wireless devices should be supportable in the 900 MHz and 868 MHz ISM bands as well.

11.1.3. Support of the BACnet Building Protocol

Devices implementing the ROLL features should support the BACnet protocol.

11.1.4. Support of the LON Building Protocol

Devices implementing the ROLL features should support the LON protocol.

11.1.5. Energy-Harvested Sensors

RFDs should target operation using viable energy-harvesting techniques such as ambient light, mechanical action, solar load, air pressure and differential temperature.

11.1.6. Communication Distance

A source device may be upwards of 1000 feet from its destination. Communication may need to be established between these devices without needing to install other intermediate 'communication only' devices such as repeaters.

11.1.7. Automatic Gain Control

For wireless implementations, the device radios should incorporate automatic transmit power regulation to maximize packet transfer and minimize network interference, regardless of network size or density.

11.1.8. Cost

The total installed infrastructure cost, including but not limited to the media and the required infrastructure devices (amortized across the number of devices), plus the labor to install and commission the network, must not exceed $1.00/foot for wired implementations.

Wireless implementations (total installed cost) must cost no more than 80% of wired implementations.

11.1.9. IPv4 Compatibility

The routing protocol must support cost-effective intercommunication among IPv4 and IPv6 devices.
11.2. Additional Installation and Commissioning Requirements

11.2.1. Device Setup Time

Network setup by the installer must take no longer than 20 seconds per device installed.

11.2.2. Unavailability of an IT Network

Product commissioning must be performable by an application engineer prior to the installation of the IT network.

11.3. Additional Network Requirements

11.3.1. TCP/UDP

Connection-based and connectionless services must be supported.

11.3.2. Data Rate Performance

An effective data rate of 20 kbit/s is the lowest acceptable operational data rate on the network.

11.3.3. High Speed Downloads

Devices receiving a download MAY cease normal operation, but upon completion of the download they must automatically resume normal operation.

11.3.4. Interference Mitigation

The network must automatically detect interference and seamlessly migrate the network hosts' channel to improve communication. Channel changes and the nodes' response to the channel change must occur within 60 seconds.

11.3.5. Real-time Performance Measures

A node transmitting a 'request with expected reply' to another node must send the message to the destination and receive the response in no more than 120 ms. This response time should be achievable with 5 or fewer hops in each direction (i.e. a per-hop budget on the order of 12 ms over a 10-hop round trip). This requirement assumes network quiescence and a negligible turnaround time at the destination node.

11.3.6. Packet Reliability

Reliability must meet the following minimum criteria:

   < 1% MAC layer errors on all messages, after no more than three retries;

   < 0.1% network layer errors on all messages, after no more than three additional retries;

   < 0.01% application layer errors on all messages.

Therefore, application layer messages will fail no more than once every 10,000 messages.

11.3.7. Merging Commissioned Islands

Subsystems are commissioned by various vendors at various times during building construction. These subnetworks must seamlessly merge into networks, and networks must seamlessly merge into internetworks, since the end user wants a holistic view of the system.

11.3.8. Adjustable System Table Sizes

Routing must support adjustable routing table entry sizes on a per-node basis to maximize the limited RAM in the devices.

11.4. Prioritized Routing

Network and application routing prioritization is required to assure that mission-critical applications (e.g. Fire Detection) cannot be deferred while less critical applications access the network.

11.4.1. Packet Prioritization

Routers must support quality-of-service prioritization to assure timely response for critical FMS packets.

11.5. Constrained Devices

The network may be composed of a heterogeneous mix of full-function, battery-powered and energy-harvested devices. The routing protocol must support these constrained devices.

11.5.1. Proxying for Constrained Devices

Routing must support inbound packet caches for low-power (battery-powered and energy-harvested) devices for when these devices are not accessible on the network.

These devices must have a designated powered proxying device to which packets will be temporarily routed and cached until the constrained device accesses the network.
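(Informative) A minimal sketch of such a proxy cache follows. The queue bound, the wake-up notification and the transport stub are illustrative assumptions.

   # Informative sketch: a powered proxy that caches inbound packets for
   # a sleeping constrained device and flushes them when it wakes.

   from collections import defaultdict, deque

   class SleepProxy:
       def __init__(self, max_cached: int = 16):
           self.caches = defaultdict(deque)
           self.max_cached = max_cached

       def deliver(self, dst: str, packet: bytes, awake: bool) -> None:
           if awake:
               self.send(dst, packet)          # normal forwarding
           else:
               cache = self.caches[dst]
               if len(cache) == self.max_cached:
                   cache.popleft()             # drop oldest on overflow
               cache.append(packet)            # hold for the next wake cycle

       def on_wake(self, dst: str) -> None:
           """Device announced it is awake; flush everything cached for it."""
           while self.caches[dst]:
               self.send(dst, self.caches[dst].popleft())

       def send(self, dst: str, packet: bytes) -> None:
           pass  # transport stub; a real proxy hands this to its stack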
11.6. Reliability

11.6.1. Device Integrity

Commercial building devices must all be scanned periodically to assure that each device is viable and can communicate data and alarm information as needed. Network routers should maintain previous packet flow information temporally to minimize overall network overhead.

11.7. Path Persistence

To eliminate high network traffic in power-fail or brown-out conditions, previously established routes SHOULD be remembered and invoked prior to establishing new routes for those devices reentering the network.

12. Appendix B: FMS Use-Cases

Appendix B contains FMS use-cases that describe the use of sensors and controllers for various applications within a commercial building and how they interplay with energy conservation and life-safety applications.

The Vooruit arts centre is a restored monument which dates from 1913. This complex monument consists of over 350 different rooms, including meeting rooms, large public halls and theaters serving as many as 2500 guests. A number of use cases regarding Vooruit are described in the following text. The situations and needs described in these use cases can also be found in all automated large buildings, such as airports and hospitals.

12.1. Locking and Unlocking the Building

The member of the cleaning staff arriving first in the morning unlocks the building (or a part of it) from the control room. This means that several doors are unlocked, the alarms are switched off, the heating turns on, some lights switch on, etc. Similarly, the last person leaving the building has to lock the building. This will lock all the outer doors, turn the alarms on, switch off heating and lights, etc.

The "building locked" or "building unlocked" event needs to be delivered to a subset of all the sensors and actuators. It can be beneficial if those field devices form a group (e.g. "all-sensors-actuators-interested-in-lock/unlock-events"). Alternatively, the area and zone controllers could form a group where the arrival of such an event results in each area and zone controller initiating unicast or multicast within the LLN.

This use case is also described in the home automation requirements [I-D.ietf-roll-home-routing-reqs], although the requirement about preventing the "popcorn effect" can be relaxed a bit in building automation. It would be nice if lights, roll-down shutters and other actuators in the same room, or in areas with transparent walls, executed the command around (not 'at') the same time (a tolerance of 200 ms is allowed).

12.2. Building Energy Conservation

A room that is not in use should not be heated, air conditioned or ventilated, and the lighting should be turned off or dimmed. In a building with many rooms it can happen quite frequently that someone forgets to switch off the HVAC and lighting, thereby wasting valuable energy. To prevent this occurrence, the facility manager might program the building according to the day's schedule. This way, lighting and HVAC are turned on prior to the use of a room and turned off afterwards. Using such a system, Vooruit has realized a saving of 35% on its gas and electricity bills.
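(Informative) A minimal sketch of the schedule-driven control described above follows. The schedule format, the 30-minute pre-conditioning lead time and the actuation stub are illustrative assumptions.

   # Informative sketch: occupancy-schedule-driven room control.

   from datetime import datetime, timedelta

   SCHEDULE = {"hall-1": [("09:00", "12:00"), ("19:00", "23:00")]}
   LEAD = timedelta(minutes=30)   # pre-condition rooms 30 minutes early

   def room_should_be_on(room: str, now: datetime) -> bool:
       for start, end in SCHEDULE.get(room, []):
           s = datetime.combine(now.date(),
                                datetime.strptime(start, "%H:%M").time())
           e = datetime.combine(now.date(),
                                datetime.strptime(end, "%H:%M").time())
           if s - LEAD <= now <= e:
               return True
       return False

   def set_room(room: str, on: bool) -> None:
       pass  # actuation stub: HVAC setpoint and lighting commands go here

   now = datetime.now()
   for room in SCHEDULE:
       set_room(room, room_should_be_on(room, now))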
12.3. Inventory and Remote Diagnosis of Safety Equipment

Each month Vooruit is obliged to make an inventory of its safety equipment. This task takes two working days. Each fire extinguisher (100), fire blanket (10), fire-resistant door (120) and evacuation plan (80) must be checked for presence and proper operation. Also, the battery and lamp of every safety lamp must be checked before each public event (safety laws). Automating this process using asset tracking and low-power wireless technologies would remove a heavy burden on working hours.

It is important that these messages be delivered very reliably and that the power consumption of the sensors/actuators attached to this safety equipment be kept at a very low level.

12.4. Life Cycle of Field Devices

Some field devices (e.g. smoke detectors) are replaced periodically. The ease with which devices are added to and deleted from the network is very important to support augmenting sensors/actuators during construction.

A secure mechanism is needed to remove the old device and install the new device. New devices need to be authenticated before they can participate in the routing process of the LLN. After the authentication, zero-configuration of the routing protocol is necessary.

12.5. Surveillance

Ingress and egress are real-time applications needing response times below 500 ms, for example for cardkey authorization. It must be possible to configure doors individually to restrict use on a per-person basis with respect to time-of-day and person entering. While much of the surveillance application involves sensing and actuation at the door and communication with the centralized security system, other aspects, including tamper, door-ajar and forced-entry notification, are to be delivered to one or more fixed or mobile user devices within 5 seconds.

12.6. Emergency

In case of an emergency it is very important that all the visitors be evacuated as quickly as possible. The fire and smoke detectors set off an alarm and alert the mobile personnel on their user devices (e.g. PDAs). All emergency exits are instantly unlocked and the emergency lighting guides the visitors to these exits. The necessary sprinklers are activated, and the electricity grid is monitored in case it becomes necessary to shut down some parts of the building. Emergency services are notified instantly.

A wireless system could bring in some extra safety features. Locating fire fighters and guiding them through the building could be a life-saving application.

These life-critical applications ought to take precedence over other network traffic. Commands entered during these emergencies have to be properly authenticated by device, user, and command request.

12.7. Public Address

It should be possible to send audio and text messages to the visitors in the building. These messages can be very diverse, e.g. ASCII text boards displaying the name of the event in a room, or audio announcements such as delays in the program, lost and found children, and evacuation orders.

The control network is expected to be able to readily sense the presence of an audience in an area and deliver applicable message content.

Authors' Addresses

Jerry Martocci
Johnson Controls Inc.
507 E. Michigan Street
Milwaukee, Wisconsin 53202
USA

Phone: 414.524.4010
Email: jerald.p.martocci@jci.com

Nicolas Riou
Schneider Electric
Technopole 38TEC T3
37 quai Paul Louis Merlin
38050 Grenoble Cedex 9
France

Phone: +33 4 76 57 66 15
Email: nicolas.riou@fr.schneider-electric.com

Pieter De Mil
Ghent University - IBCN
G. Crommenlaan 8 bus 201
Ghent 9050
Belgium

Phone: +32-9331-4981
Fax: +32-9331-4899
Email: pieter.demil@intec.ugent.be

Wouter Vermeylen
Arts Centre Vooruit
???
Ghent 9000
Belgium

Phone: ???
Fax: ???
Email: wouter@vooruit.be