idnits 2.17.1

draft-ietf-roll-building-routing-reqs-01.txt:

Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info):
----------------------------------------------------------------------------

** It looks like you're using RFC 3978 boilerplate. You should update this to the boilerplate described in the IETF Trust License Policy document (see https://trustee.ietf.org/license-info), which is required now.

-- Found old boilerplate from RFC 3978, Section 5.1 on line 21.
-- Found old boilerplate from RFC 3978, Section 5.5, updated by RFC 4748 on line 1077.
-- Found old boilerplate from RFC 3979, Section 5, paragraph 1 on line 1054.
-- Found old boilerplate from RFC 3979, Section 5, paragraph 2 on line 1061.
-- Found old boilerplate from RFC 3979, Section 5, paragraph 3 on line 1067.

Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
----------------------------------------------------------------------------

No issues found here.

Checking nits according to https://www.ietf.org/id-info/checklist :
----------------------------------------------------------------------------

No issues found here.

Miscellaneous warnings:
----------------------------------------------------------------------------

== The copyright year in the IETF Trust Copyright Line does not match the current year

== Line 934 has weird spacing: '...esponse in...'

== Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'MUST not' in this paragraph: Replacement devices must be plug-and-play with no additional setup compared to what is normally required for a new device. Devices referencing data in the replaced device MUST not need to be reconfigured to the new device.
== Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'MUST not' in this paragraph: Route discovery occurring during packet transmission MUST not exceed 120 msecs.

== Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'MUST not' in this paragraph: The total installed infrastructure cost including but not limited to the media, required infrastructure devices (amortized across the number of devices); labor to install and commission the network MUST not exceed $1.00/foot for wired implementations.

-- The document seems to lack a disclaimer for pre-RFC5378 work, but may have content which was first submitted before 10 November 2008. If you have contacted all the original authors and they are all willing to grant the BCP78 rights to the IETF Trust, then this is fine, and you can ignore this comment. If not, you may need to add the pre-RFC5378 disclaimer. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.)

-- The document date (October 29, 2008) is 5657 days in the past. Is this intentional?

Checking references for intended status: Informational
----------------------------------------------------------------------------

No issues found here.

Summary: 1 error (**), 0 flaws (~~), 5 warnings (==), 7 comments (--).

Run idnits with the --verbose option for more detailed information about the items above.

--------------------------------------------------------------------------------

 1 Networking Working Group                           J. Martocci, Ed.
 2 Internet-Draft                                Johnson Controls Inc.
 3 Intended status: Informational                        Pieter De Mil
 4 Expires: April 29, 2009                       Ghent University IBCN
 5                                                        W. Vermeylen
 6                                                 Arts Centre Vooruit
 7                                                        Nicolas Riou
 8                                                  Schneider Electric
 9                                                    October 29, 2008

 11     Building Automation Routing Requirements in Low Power and Lossy
 12                                Networks
 13              draft-ietf-roll-building-routing-reqs-01

 15 Status of this Memo

 17 By submitting this Internet-Draft, each author represents that 18 any applicable patent or other IPR claims of which he or she is 19 aware have been or will be disclosed, and any of which he or she 20 becomes aware will be disclosed, in accordance with Section 6 of 21 BCP 79.

 23 Internet-Drafts are working documents of the Internet Engineering 24 Task Force (IETF), its areas, and its working groups. Note that other 25 groups may also distribute working documents as Internet-Drafts.

 27 Internet-Drafts are draft documents valid for a maximum of six months 28 and may be updated, replaced, or obsoleted by other documents at any 29 time. It is inappropriate to use Internet-Drafts as reference 30 material or to cite them other than as "work in progress."

 32 The list of current Internet-Drafts can be accessed at 33 http://www.ietf.org/ietf/1id-abstracts.txt

 35 The list of Internet-Draft Shadow Directories can be accessed at 36 http://www.ietf.org/shadow.html

 38 This Internet-Draft will expire on April 29, 2009.

 40 Copyright Notice

 42 Copyright (C) The IETF Trust (2008).

 44 Abstract

 45 The Routing Over Low power and Lossy networks (ROLL) Working Group has 46 been chartered to work on routing solutions for Low Power and Lossy 47 networks (LLN) in various markets: Industrial, Commercial (Building), 48 Home and Urban. Pursuant to this effort, this document defines the 49 routing requirements for building automation.

 51 Requirements Language

 53 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 54 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 55 document are to be interpreted as described in RFC 2119.

 57 Table of Contents

 59 1. Terminology....................................................4
 60 2. Introduction...................................................4
 61    2.1. Facility Management System (FMS) Topology.................5
 62       2.1.1. Introduction.........................................5
 63       2.1.2. Sensors/Actuators....................................6
 64       2.1.3. Area Controllers.....................................6
 65       2.1.4. Zone Controllers.....................................6
 66    2.2. Installation Methods......................................7
 67       2.2.1. Wired Communication Media............................7
 68       2.2.2. Device Density.......................................7
 69       2.2.3. Installation Procedure...............................9
 70 3. Building Automation Applications...............................9
 71    3.1. Locking and Unlocking the Building.......................10
 72    3.2. Building Energy Conservation.............................10
 73    3.3. Inventory and Remote Diagnosis of Safety Equipment.......10
 74    3.4. Life Cycle of Field Devices..............................11
 75    3.5. Surveillance.............................................11
 76    3.6. Emergency................................................11
 77    3.7. Public Address...........................................12
 78 4. Building Automation Routing Requirements......................12
 79    4.1. Installation.............................................12
 80       4.1.1. Zero-Configuration installation.....................13
 81       4.1.2. Sleeping devices....................................13
 82       4.1.3. Local Testing.......................................13
 83       4.1.4. Device Replacement..................................13
 84    4.2. Scalability..............................................14
 85       4.2.1. Network Domain......................................14
 86       4.2.2. Peer-to-peer Communication..........................14
 87    4.3. Mobility.................................................14
 88       4.3.1. Mobile Device Association...........................15
 90    4.4. Resource Constrained Devices.............................15
 91       4.4.1. Limited Processing Power Sensors/Actuators..........15
 92       4.4.2. Limited Processing Power Controllers................15
 93    4.5. Addressing...............................................15
 94       4.5.1. Unicast/Multicast/Anycast...........................16
 95    4.6. Manageability............................................16
 96       4.6.1. Firmware Upgrades...................................16
 97       4.6.2. Diagnostics.........................................16
 98       4.6.3. Route Tracking......................................16
 99    4.7. Compatibility............................................16
 100      4.7.1. IPv4 Compatibility..................................17
 101      4.7.2. Maximum Packet Size.................................17
 102    4.8. Route Selection..........................................17
 103      4.8.1. Path Cost...........................................17
 104      4.8.2. Path Adaptation.....................................17
 105      4.8.3. Route Redundancy....................................17
 106      4.8.4. Route Discovery Time................................17
 107      4.8.5. Route Preference....................................18
 108      4.8.6. Path Symmetry.......................................18
 109      4.8.7. Path Persistence....................................18
 110 5. Traffic Pattern...............................................18
 111 6. Open issues...................................................19
 112 7. Security Considerations.......................................19
 113 8. IANA Considerations...........................................19
 114 9. Acknowledgments...............................................19
 115 10. References...................................................19
 116    10.1. Normative References....................................19
 117    10.2. Informative References..................................20
 118 11. Appendix A: Additional Building Requirements.................20
 119    11.1. Additional Commercial Product Requirements..............20
 120       11.1.1. Wired and Wireless Implementations.................20
 121       11.1.2. World-wide Applicability...........................20
 122       11.1.3. Support of the BACnet Building Protocol............20
 123       11.1.4. Support of the LON Building Protocol...............20
 124       11.1.5. Energy Harvested Sensors...........................21
 125       11.1.6. Communication Distance.............................21
 126       11.1.7. Automatic Gain Control.............................21
 127       11.1.8. Cost...............................................21
 128    11.2. Additional Installation and Commissioning Requirements..21
 129       11.2.1. Device Setup Time..................................21
 130       11.2.2. Unavailability of an IT network....................21
 131    11.3. Additional Network Requirements.........................21
 132       11.3.1. TCP/UDP............................................21
 133       11.3.2. Data Rate Performance..............................22
 134       11.3.3. High Speed Downloads...............................22
 135       11.3.4. Interference Mitigation............................22
 136       11.3.5. Real-time Performance Measures.....................22
 137       11.3.6. Packet Reliability.................................22
 138       11.3.7. Merging Commissioned Islands.......................22
 139       11.3.8. Adjustable System Table Sizes......................23
 140    11.4. Prioritized Routing.....................................23
 141       11.4.1. Packet Prioritization..............................23
 142    11.5. Constrained Devices.....................................23
 143       11.5.1. Proxying for Constrained Devices...................23
 144    11.6. Reliability.............................................23
 146 Disclaimer of Validity...........................................26

 148 1. Terminology

 150 For description of the terminology used in this specification, please 151 see the Terminology ID referenced in Section 10.1.

 153 2. Introduction

 155 Commercial buildings have been fitted with pneumatic and subsequently 156 electronic communication pathways connecting sensors to their 157 controllers for over one hundred years. Recent economic and 158 technical advances in wireless communication allow facilities to 159 increasingly utilize a wireless solution in lieu of a wired solution, 160 thereby reducing installation costs while maintaining highly reliable 161 communication. Wireless solutions will be adapted from their 162 existing wired counterparts in many of the building applications 163 including, but not limited to, Heating, Ventilation, and Air 164 Conditioning (HVAC), Lighting, Physical Security, Fire, and Elevator 165 systems. These devices will be developed to reduce installation 166 costs while increasing installation and retrofit flexibility. 167 Sensing devices may be battery or mains powered. Actuators and area 168 controllers will be mains powered.

 170 Facility Management Systems (FMS) are deployed in a large set of 171 vertical markets including universities; hospitals; government 172 facilities; Kindergarten through High School (K-12); pharmaceutical 173 manufacturing facilities; and single-tenant or multi-tenant office 174 buildings. These buildings range in size from 100K sqft structures (5 175 story office buildings), to 1M sqft skyscrapers (100 stories), 176 to complex government facilities such as the Pentagon. 177 The described topology is meant to be the model used in all 178 these types of environments, but clearly must be tailored to the 179 building class, building tenant and vertical market being served.

 181 The following sections describe the sensor, actuator, area controller 182 and zone controller layers of the topology.
(NOTE: The Building 183 Controller and Enterprise layers of the FMS are excluded from this 184 discussion since they typically require communication rates served by 185 WLAN communication technologies.)

 187 2.1. Facility Management System (FMS) Topology

 189 2.1.1. Introduction

 191 To understand the network systems requirements of a facility 192 management system in a commercial building, this document uses a 193 framework to describe the basic functions and composition of the 194 system. An FMS is a horizontally layered system of sensors, 195 actuators, controllers and user interface devices. Additionally, an 196 FMS may also be divided vertically across similar, but distinct, 197 building subsystems such as HVAC, Fire, Security, Lighting, Shutters 198 and Elevator control systems, as denoted in Figure 1.

 200 Much of the makeup of an FMS is optional and installed at the behest 201 of the customer. Sensors and actuators have no standalone 202 functionality. All other devices support partial or complete 203 standalone functionality. These devices can optionally be tethered 204 to form a more cohesive system. The customer requirements dictate 205 the level of integration within the facility. This architecture 206 provides excellent fault tolerance since each node is designed to 207 operate in an independent mode if the higher layers are unavailable.

 209               +------+ +-----+ +------+ +------+ +------+ +------+
 211 Bldg App'ns  |      | |     | |      | |      | |      | |      |
 213              |      | |     | |      | |      | |      | |      |
 215 Building Cntl|      | |     | |  S   | |  L   | |  S   | |  E   |
 217              |      | |     | |  E   | |  I   | |  H   | |  L   |
 219 Area Control |  H   | |  F  | |  C   | |  G   | |  U   | |  E   |
 220              |  V   | |  I  | |  U   | |  H   | |  T   | |  V   |
 222 Zone Control |  A   | |  R  | |  R   | |  T   | |  T   | |  A   |
 224              |  C   | |  E  | |  I   | |  I   | |  E   | |  T   |
 226 Actuators    |      | |     | |  T   | |  N   | |  R   | |  O   |
 228              |      | |     | |  Y   | |  G   | |  S   | |  R   |
 230 Sensors      |      | |     | |      | |      | |      | |      |
 232              +------+ +-----+ +------+ +------+ +------+ +------+

 234 Figure 1: Building Systems and Devices

 236 2.1.2. Sensors/Actuators

 238 As Figure 1 indicates, an FMS may be composed of many functional 239 stacks or silos that are interoperably woven together via Building 240 Applications. Each silo has an array of sensors that monitor the 241 environment and actuators that affect the environment as determined 242 by the upper layers of the FMS topology. The sensors typically are 243 the leaves of the network tree structure, providing environmental data 244 into the system. The actuators are the sensors' counterparts, 245 modifying the characteristics of the system based on the input sensor 246 data and the applications deployed.

 248 2.1.3. Area Controllers

 250 An area describes a small physical locale within a building, 251 typically a room. HVAC (temperature and humidity) and Lighting (room 252 lighting, shades, solar loads) vendors often deploy area 253 controllers. Area controllers are fed by sensor inputs that monitor the 254 environmental conditions within the room. Common sensors found in 255 many rooms that feed the area controllers include temperature, 256 occupancy, lighting load, solar load and relative humidity. Sensors 257 found in specialized rooms (such as chemistry labs) might include air 258 flow, pressure, CO2 and CO particle sensors. Room actuation includes 259 temperature setpoint, lights and blinds/curtains.

 261 2.1.4. Zone Controllers

 263 Zone Control supports a similar set of characteristics to the Area 264 Control, albeit over an extended space. A zone is normally a logical 265 grouping or functional division of a commercial building. A zone may 266 also coincidentally map to a physical locale such as a floor.

 268 Zone Control may have direct sensor inputs (smoke detectors for 269 fire), controller inputs (room controllers for air-handlers in HVAC) 270 or both (door controllers and tamper sensors for security).
Like 271 area/room controllers, zone controllers are standalone devices that 272 operate independently or may be attached to the larger network for 273 more synergistic control.

 275 2.2. Installation Methods

 277 2.2.1. Wired Communication Media

 279 Commercial controllers are traditionally deployed in a facility using 280 twisted pair serial media following the EIA-485 electrical standard, 281 operating nominally at 38400 to 76800 baud. This allows runs of up to 5000 282 ft without a repeater. With the maximum of three repeaters, a single 283 communication trunk can serpentine 15000 ft. EIA-485 is a multi-drop 284 medium allowing upwards of 255 devices to be connected to a single 285 trunk.

 287 Most sensors and virtually all actuators currently used in 288 commercial buildings are "dumb", non-communicating hardwired devices. 289 However, vendors are beginning to deploy sensor buses, which 290 are used for smart sensors and point multiplexing. The Fire 291 industry deploys addressable fire devices, which usually use some 292 form of proprietary communication wiring driven by fire codes.

 294 2.2.2. Device Density

 296 Device density differs depending on the application and as dictated 297 by local building code requirements. The following sections 298 detail typical installation densities for different applications.

 300 2.2.2.1. HVAC Device Density

 302 HVAC room applications typically have sensors/actuators and 303 controllers spaced about 50 ft apart. In most cases there is a 3:1 304 ratio of sensors/actuators to controllers. That is, for each room 305 there is an installed temperature sensor, flow sensor and damper 306 actuator for the associated room controller.

 308 HVAC equipment room applications are quite different. An air handler 309 system may have a single controller with upwards of 25 sensors and 310 actuators within 50 ft of the air handler.
A chiller or boiler is 311 also controlled with a single equipment controller instrumented with 312 25 sensors and actuators. Each of these devices is 313 individually addressed, since the devices are mandated or optional as 314 defined by the specified HVAC application. Air handlers typically 315 serve one or two floors of the building. Chillers and boilers may be 316 installed per floor, but often service a wing, a building or the 317 entire complex via a central plant.

 319 These numbers are typical. In special cases, such as clean rooms, 320 operating rooms, pharmaceuticals and labs, the ratio of sensors to 321 controllers can increase by a factor of three. Tenant installations 322 such as malls would opt for packaged units, where much of the sensing 323 and actuation is integrated into the unit. Here a single device 324 address would serve the entire unit.

 326 2.2.2.2. Fire Device Density

 328 Fire systems are much more uniformly installed, with smoke detectors 329 installed about every 50 feet. This is dictated by local building 330 codes. Fire pull boxes are installed uniformly about every 150 feet. 331 A fire controller will service a floor or wing. The fireman's fire 332 panel will service the entire building and typically is installed in 333 the atrium.

 335 2.2.2.3. Lighting Device Density

 337 Lighting is also very uniformly installed, with ballasts installed 338 approximately every 10 feet. A lighting panel typically serves 48 to 339 64 zones. Wired systems typically tether many lights together into a 340 single zone. Wireless systems configure each fixture independently 341 to increase flexibility and reduce installation costs.

 343 2.2.2.4. Physical Security Device Density

 345 Security systems are non-uniformly distributed, with heavy density near 346 doors and windows and lighter density in the building interior space. 347 The recent influx of interior and perimeter camera systems is 348 increasing the security footprint.
These cameras are atypical 349 endpoints requiring upwards of 1 megabit/second (Mbit/s) data rates 350 per camera, in contrast to the few kbit/s needed by most other FMS 351 sensing equipment. Previously, camera systems had been deployed on 352 proprietary wired high speed networks. More recent implementations 353 utilize wired or wireless IP cameras integrated into the enterprise 354 LAN.

 356 2.2.3. Installation Procedure

 358 Wired FMS installation is a multifaceted procedure depending on the 359 extent of the system and the software interoperability requirements. 360 However, at the sensor/actuator and controller level, the procedure 361 is typically a two or three step process.

 363 Most FMS equipment is 24 VAC equipment that can be installed by a 364 low-voltage electrician. He/she arrives on-site during the 365 construction of the building, prior to the drywall and ceiling 366 installation. This allows him/her to allocate wall space, easily 367 land the equipment and run the wired controller and sensor networks. 368 The Building Controllers and Enterprise network are not normally 369 installed until months later. The electrician completes his task by 370 running a wire verification procedure that shows proper continuity 371 between the devices and proper local operation of the devices.

 373 Later in the installation cycle, the higher order controllers are 374 installed, programmed and commissioned together with the previously 375 installed sensors, actuators and controllers. In most cases the IP 376 network is still not operable. The Building Controllers are 377 completely commissioned using a crossover cable or a temporary IP 378 switch together with static IP addresses.

 380 Once the IP network is operational, the FMS may optionally be added 381 to the enterprise network. The wireless installation process must 382 follow the same work flow.
The electrician will install the products 383 as before and run local functional tests between the wireless devices 384 to assure operation before leaving the job. The electrician does 385 not carry a laptop, so the commissioning must be built into the device 386 operation.

 388 3. Building Automation Applications

 390 Vooruit is an arts centre in a restored monument which dates from 391 1913. This complex monument consists of over 350 different rooms 392 including meeting rooms, large public halls and theaters serving as 393 many as 2500 guests. A number of use cases regarding Vooruit are 394 described in the following text. The situations and needs described 395 in these use cases can also be found in all automated large 396 buildings, such as airports and hospitals.

 398 3.1. Locking and Unlocking the Building

 400 The member of the cleaning staff who arrives first in the morning 401 unlocks the building (or a part of it) from the control room. This 402 means that several doors are unlocked; the alarms are switched off; 403 the heating turns on; some lights switch on, etc. Similarly, the 404 last person leaving the building has to lock the building. This will 405 lock all the outer doors, turn the alarms on, switch off heating and 406 lights, etc.

 408 The "building locked" or "building unlocked" event needs to be 409 delivered to a subset of all the sensors and actuators. It can be 410 beneficial if those field devices form a group (e.g. "all-sensors- 411 actuators-interested-in-lock/unlock-events"). Alternatively, the area 412 and zone controllers could form a group where the arrival of such an 413 event results in each area and zone controller initiating unicast or 414 multicast within the LLN.

 416 This use case is also described in the home automation routing 417 requirements draft [I-D.ietf-roll-home-routing-reqs], although the 418 requirement about preventing the "popcorn effect" can be relaxed 419 somewhat in building automation.
It would be nice if lights, roll-down shutters and other 420 actuators in the same room, or in areas with transparent walls, execute the 421 command around (not 'at') the same time (a tolerance of 200 ms is 422 allowed).

 424 3.2. Building Energy Conservation

 426 A room that is not in use should not be heated, air conditioned or 427 ventilated, and the lighting should be turned off. In a building with 428 many rooms it happens quite frequently that someone forgets to 429 switch off the HVAC and lighting. This is a real waste of valuable 430 energy. To prevent this from happening, the janitor can program the 431 building according to the day's schedule. This way lighting and HVAC 432 are turned on prior to the use of a room, and turned off afterwards. 433 Using such a system, Vooruit has realized a saving of 35% on its gas 434 and electricity bills.

 436 3.3. Inventory and Remote Diagnosis of Safety Equipment

 438 Each month Vooruit is obliged to make an inventory of its safety 439 equipment. This task takes two working days. Each fire extinguisher 440 (100), fire blanket (10), fire-resistant door (120) and evacuation 441 plan (80) must be checked for presence and proper operation. Also, 442 the battery and lamp of every safety lamp must be checked before each 443 public event (as required by safety laws). Automating this process using asset 444 tracking and low-power wireless technologies would greatly reduce the 445 working hours required.

 447 It is important that these messages be delivered very reliably and 448 that the power consumption of the sensors/actuators attached to this 449 safety equipment be kept at a very low level.

 451 3.4. Life Cycle of Field Devices

 453 Some field devices (e.g. smoke detectors) must be replaced 454 periodically. Devices must be easily added to and deleted from the 455 network to support augmenting sensors/actuators during construction.

 457 A secure mechanism is needed to remove the old device and install the 458 new device.
New devices need to be authenticated before they can 459 participate in the routing process of the LLN. After 460 authentication, zero-configuration of the routing protocol is 461 necessary.

 463 3.5. Surveillance

 465 Ingress and egress are real-time applications needing response times 466 below 500 msec. Each door must support local programming to restrict 467 use with respect to time of day and the person 468 entering. While much of the application is localized at the door, 469 tamper, door-ajar and forced-entry events must be routed to one or more fixed 470 or mobile user devices within 5 seconds.

 472 3.6. Emergency

 474 In case of an emergency it is very important that all the visitors be 475 evacuated as quickly as possible. The fire and smoke detectors have 476 to set off an alarm and alert the mobile personnel on their user 477 devices (e.g. PDAs). All emergency exits have to be instantly unlocked 478 and the emergency lighting has to guide the visitors to these exits. 479 The necessary sprinklers have to be activated, and the electricity 480 grid has to be monitored in case it becomes necessary to shut down some 481 parts of the building. Emergency services have to be notified 482 instantly.

 484 A wireless system could bring in some extra safety features. 485 Locating fire fighters and guiding them through the building could be 486 a life-saving application.

 488 These life-critical applications must take routing precedence over 489 other network traffic. Commands entered during these emergencies 490 must be properly authenticated by device, user and command request.

 492 3.7. Public Address

 494 It should be possible to send audio and text messages to the visitors 495 in the building. These messages can be very diverse, e.g. ASCII text 496 boards displaying the name of the event in a room, audio 497 announcements such as delays in the program, lost and found children, 498 evacuation orders, etc.
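Several of the use cases above (locking/unlocking in Section 3.1, public address here) share one pattern: a single event or announcement is fanned out to a group of interested field devices. The following is a minimal, purely illustrative sketch of that group fan-out; the class, group name and device identifiers are invented for this example and are not part of the requirements.

```python
# Illustrative sketch only: fanning a "building unlocked" event out to a
# group of field devices. In an LLN this fan-out would be a multicast;
# here it is modeled as a simple in-memory dispatcher. All names are
# hypothetical.

class GroupDispatcher:
    """Maps group names to member devices and fans events out to them."""

    def __init__(self):
        self.groups = {}    # group name -> set of device ids
        self.inboxes = {}   # device id -> list of received events

    def join(self, group, device):
        self.groups.setdefault(group, set()).add(device)
        self.inboxes.setdefault(device, [])

    def publish(self, group, event):
        members = self.groups.get(group, ())
        for device in members:
            self.inboxes[device].append(event)
        return len(members)   # how many devices the event reached

dispatcher = GroupDispatcher()
for dev in ("door-12", "alarm-3", "hvac-7"):
    dispatcher.join("lock-unlock-events", dev)

delivered = dispatcher.publish("lock-unlock-events", "building-unlocked")
print(delivered)                        # 3
print(dispatcher.inboxes["door-12"])    # ['building-unlocked']
```

A real deployment would replace the fan-out loop with an LLN multicast so that one transmission reaches all group members, which is exactly the trade-off discussed in Section 4.5.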
500 The control network must be able to readily sense the audience in an 501 area and deliver applicable message content. 503 4. Building Automation Routing Requirements 505 Following are the building automation routing requirements for a 506 network used to integrate building sensor actuator and control 507 products. These requirements have been limited to routing 508 requirements only. These requirements are written not presuming any 509 preordained network topology, physical media (wired) or radio 510 technology (wireless). See Appendix A for additional requirements 511 that have been deemed outside the scope of this document yet will 512 pertain to the successful deployment of building automation systems. 514 4.1. Installation 516 Building control systems typically are installed and tested by 517 electricians having little computer knowledge and no network 518 knowledge whatsoever. These systems are often installed during the 519 building construction phase before the drywall and ceilings are in 520 place. There is never an IP network in place during this 521 installation. 523 In retrofit applications, pulling wires from sensors to controllers 524 can be costly and in some applications (e.g. museums) not feasible. 526 Local (ad hoc) testing of sensors and room controllers must be 527 completed before the tradesperson can complete his/her work. System 528 level commissioning will later be deployed using a more computer 529 savvy person with access to a commissioning device (e.g. a laptop 530 computer). The completely installed and commissioned IP network may 531 or may not be in place at this time. Following are the installation 532 routing requirements. 534 4.1.1. Zero-Configuration installation 536 It MUST be possible to fully commission network devices without 537 requiring any additional commissioning device (e.g. laptop). The 538 device MAY support up to sixteen integrated switches to uniquely 539 identify the device on the network. 541 4.1.2. 
Sleeping devices 543 Sensing devices must be able to utilize battery power or Energy 544 Harvesting techniques for power. This presumes a need for devices 545 that most often sleep. Routing must support these catatonic devices 546 to assure that established routes do not utilize sleeping devices. 547 It must also define routing rules when these devices need to access 548 the network. Communication to these devices must be bidirectional. 549 Routing must support proxies that can cache the inbound data for the 550 sleeping device until the device awakens. Routing must understand 551 the selected proxy for the sleeping device. 553 Batteries must be operational for at least 5 years when the sensing 554 device is transmitting its data (e.g. 64 bytes) once per minute. 555 This requires that sleeping devices must have minimal network access 556 time when they awake and transmit onto the network. 558 4.1.3. Local Testing 560 The local sensors and requisite actuators and controllers must be 561 testable within the locale (e.g. room) to assure communication 562 connectivity and local operation without requiring other systemic 563 devices. Routing must allow for temporary ad hoc paths to be 564 established that are updated as the network physically and 565 functionally expands. 567 4.1.4. Device Replacement 569 Replacement devices must be plug-and-play with no additional setup 570 compared to what is normally required for a new device. Devices 571 referencing data in the replaced device MUST not need to be 572 reconfigured to the new device. 574 4.2. Scalability 576 Building control systems are designed for facilities from 50000 sq. 577 ft. to 1M+ sq. ft. The networks that support these systems must 578 cost-effectively scale accordingly. In larger facilities 579 installation may occur simultaneously on various wings or floors, yet 580 the end system must seamlessly merge. Following are the scalability 581 requirements. 583 4.2.1. 
Network Domain 585 The routing protocol MUST be able to support networks with at least 586 1000 routers and 1000 hosts. Subnetworks (e.g. rooms, primary 587 equipment) within the network must support upwards of 255 sensors 588 and/or actuators. 592 4.2.2. Peer-to-peer Communication 594 The data domain for commercial FMS systems may sprawl across a vast 595 portion of the physical domain. For example, a chiller may reside in 596 the facility's basement due to its size, yet the associated cooling 597 towers will reside on the roof. The cold-water supply and return 598 pipes wind through all the intervening floors. The feedback 599 control loops for these systems require data from across the 600 facility. 602 Network devices must be able to communicate in a peer-to-peer manner 603 with all other devices on the network. Thus the routing protocol MUST 604 provide routes to any other device without being subject to a 605 constrained path via a gating device. 607 4.3. Mobility 609 Most devices are affixed to walls or installed on ceilings within 610 buildings. Hence the mobility requirements for commercial buildings 611 are few. However, in wireless environments, location tracking of 612 occupants and assets is gaining favor. 614 4.3.1. Mobile Device Association 616 Mobile devices SHOULD be capable of unjoining (handing off) from an 617 old network and joining a new network within 15 seconds. 619 4.4. Resource-Constrained Devices 621 Sensing and actuator device processing power and memory may be 4 622 orders of magnitude less (i.e. 10,000x) than those of more traditional 623 client devices on an IP network. The routing mechanisms must 624 therefore be tailored to fit these resource-constrained devices. 626 4.4.1.
Limited Processing Power Sensors/Actuators 628 The software stack requirements for sensors and actuators MUST be 629 implementable in 8-bit devices with no more than 128KB of flash 630 memory (including at least 32KB for the application code) and no more 631 than 8KB of RAM (including at least 1KB of RAM available for the 632 application). 634 4.4.2. Limited Processing Power Controllers 636 The software stack requirements for room controllers SHOULD be 637 implementable in 8-bit devices with no more than 256KB of flash 638 memory (including at least 32KB for the application code) and no more 639 than 8KB of RAM (including at least 1KB of RAM available for the 640 application). 642 4.5. Addressing 644 Facility management systems require different communication schemas to 645 solicit or post network information. Broadcasts or anycasts need to be 646 used to resolve unresolved references within a device when the device 647 first joins the network. 649 As with any network communication, broadcasting should be minimized. 650 This is especially a problem for small embedded devices with limited 651 network bandwidth. In many cases a global broadcast could be 652 replaced with a multicast, since the application knows the application 653 domain. Broadcasts and multicasts are typically used for network 654 joins and application binding in embedded systems. 656 4.5.1. Unicast/Multicast/Anycast 658 Routing MUST support anycast, unicast, multicast and broadcast 659 services (or their IPv6 equivalents). 661 4.6. Manageability 663 In addition to the initial installation of the system (see Section 664 4.1), it is equally important for the ongoing maintenance of the 665 system to be simple and inexpensive. 667 4.6.1. Firmware Upgrades 669 To support high-speed code downloads, routing MUST support parallel 670 downloads to targeted devices yet guarantee packet delivery. 672 4.6.2. Diagnostics 674 To improve diagnostics, the network layer SHOULD be able to be placed 675 in and out of 'verbose' mode.
Verbose mode is a temporary debugging 676 mode that provides additional communication information, including at 677 least the total number of packets sent, packets received, number of 678 failed communication attempts, and neighbor table and routing table 679 entries. 681 4.6.3. Route Tracking 683 Route diagnostics SHOULD be supported, providing information such as 684 path quality, number of hops, and available alternate active paths with 685 associated costs. 687 4.7. Compatibility 689 The building automation industry adheres to application layer 690 protocol standards to achieve vendor interoperability. These 691 standards are BACnet and LON. It is estimated that fully 80% of the 692 customer bid requests received world-wide will require compliance with 693 one or both of these standards. ROLL routing will therefore need to 694 dovetail with these application protocols to assure acceptance in the 695 building automation industry. These protocols have been in place for 696 over 10 years. Many sites will require backwards compatibility with 697 the existing legacy devices. 699 4.7.1. IPv4 Compatibility 701 The routing protocol MUST define a communication scheme to assure 702 compatibility of IPv4 and IPv6 devices. 704 4.7.2. Maximum Packet Size 706 Routing MUST support packet sizes up to 1526 octets (to be backwards 707 compatible with 802.3 subnetworks). 709 4.8. Route Selection 711 Route selection determines the reliability and quality of the 712 communication paths among the devices. Optimizing the routes over 713 time resolves any nuances that develop at system startup, when nodes are 714 asynchronously adding themselves to the network. Path adaptation 715 will reduce latency if the path costs consider hop count as a cost 716 attribute. 718 4.8.1. Path Cost 720 The routing protocol MUST support a range of metrics and optimize 721 (constrained) paths according to these metrics. These metrics SHOULD 722 include signal strength, available bandwidth, hop count and 723 communication error rates.
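A small sketch may help illustrate how the metrics named above could be folded into a single comparable path cost. The weights and normalization constants below are purely illustrative assumptions, not part of this requirement or of any routing protocol; they only show that signal strength, available bandwidth, error rate and hop count can be combined so that routes become directly comparable.

```python
# Illustrative (hypothetical) composite path-cost sketch.
# Weights and normalization ranges are assumptions chosen for clarity.

def link_cost(rssi_dbm, bandwidth_kbps, error_rate):
    """Cost of one hop; lower is better.

    rssi_dbm: received signal strength (assumed range -90..-20 dBm)
    bandwidth_kbps: available bandwidth on the link
    error_rate: observed communication error rate (0.0..1.0)
    """
    signal_penalty = max(0.0, (-rssi_dbm - 20) / 70.0)  # 0 at -20 dBm, 1 at -90 dBm
    bandwidth_penalty = 20.0 / max(bandwidth_kbps, 1)   # relative to a 20 kbit/s floor
    error_penalty = 10.0 * error_rate                   # errors weighted heavily
    # The constant 1.0 per link makes hop count an implicit cost attribute.
    return 1.0 + signal_penalty + bandwidth_penalty + error_penalty

def path_cost(links):
    """Total cost of a route: sum of per-hop costs."""
    return sum(link_cost(*link) for link in links)

# Under these weights, a two-hop route over strong links can be
# preferred over a single-hop route across a weak, lossy link.
good_2hop = path_cost([(-40, 250, 0.001), (-45, 250, 0.001)])
lossy_1hop = path_cost([(-85, 20, 0.2)])
assert good_2hop < lossy_1hop
```

The point of the sketch is only that once each metric is normalized into a common scale, "optimize (constrained) paths according to these metrics" reduces to ordinary shortest-path computation over the composite cost.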
725 4.8.2. Path Adaptation 727 Communication paths MUST adapt over time toward optimality with 728 respect to the chosen metric(s) (e.g. signal quality). 730 4.8.3. Route Redundancy 732 To reduce real-time latency, the network layer SHOULD be configurable 733 to allow secondary and tertiary paths to be established and used upon 734 failure of the primary path. 736 4.8.4. Route Discovery Time 738 Route discovery occurring during packet transmission MUST NOT exceed 739 120 msec. 741 4.8.5. Route Preference 743 The route discovery mechanism SHOULD allow a source node (sensor) to 744 dictate a configured destination node (controller) as a preferred 745 routing path. 747 4.8.6. Path Symmetry 749 The network layer SHOULD support both asymmetric and symmetric routes 750 as requested by the application layer. When the application layer 751 selects asymmetry, the network layer MAY elect to find either 752 asymmetric or symmetric routes. When the application layer requests 753 symmetric routes, only symmetric routes MUST be utilized. 755 4.8.7. Path Persistence 757 To eliminate high network traffic in power-fail or brown-out 758 conditions, previously established routes SHOULD be remembered and 759 invoked prior to establishing new routes for those devices reentering 760 the network. 762 5. Traffic Pattern 764 The independent nature of the automation systems within a building 765 weighs heavily on the network traffic patterns. Much of the real-time 766 sensor data stays within the local environment. Alarming and other 767 event data will percolate to higher layers. 769 Systemic data may be either polled or event-based. Polled-data 770 systems generate a uniform packet load on the network. This 771 architecture has proven not to scale. Most vendors have developed 772 event-based systems that pass data on events. These systems are 773 highly scalable and generate little traffic on the network at quiescence.
774 Unfortunately, these systems will generate a heavy load on startup, 775 since all the initial data must migrate to the controller level. 776 They will also generate a temporary but heavy load during firmware 777 upgrades. This latter load can normally be mitigated by performing 778 these downloads during off-peak hours. 780 Devices will need to reference peers occasionally for sensor data or 781 to coordinate across systems. Normally, though, data will migrate 782 from the sensor level upwards through the local, area, then 783 supervisory levels. Bottlenecks will typically form at the funnel 784 point from the area controllers to the supervisory controllers. 786 6. Open Issues 788 Other items to be addressed in further revisions of this document 789 include: 791 All known open items have been completed. 793 7. Security Considerations 795 Security policies, especially wireless encryption and overall device 796 authentication, need to be considered. These issues are out of scope 797 for the routing requirements, but could have an impact on the 798 processing capabilities of the sensors and controllers. 800 As noted above, FMS systems are typically highly configurable in 801 the field, and hence the security policy is most often dictated by the 802 type of building in which the FMS is being installed. 804 8. IANA Considerations 806 This document includes no request to IANA. 808 9. Acknowledgments 810 J. P. Vasseur, Ted Humpal and Zach Shelby are gratefully acknowledged 811 for their contributions to this document. 813 This document was prepared using 2-Word-v2.0.template.dot. 815 10. References 817 10.1. Normative References 819 draft-ietf-roll-home-routing-reqs-03 821 draft-ietf-roll-terminology-00.txt 823 10.2.
Informative References 825 "RS-485 EIA Standard: Standard for Electrical Characteristics of 826 Generators and Receivers for use in Balanced Digital Multipoint 827 Systems", April 1983 829 "BACnet: A Data Communication Protocol for Building Automation and 830 Control Networks", ANSI/ASHRAE Standard 135-2004, 2004 832 "LON: OPEN DATA COMMUNICATION IN BUILDING AUTOMATION, CONTROLS AND 833 BUILDING MANAGEMENT - BUILDING NETWORK PROTOCOL - PART 1: PROTOCOL 834 STACK", 11/25/2005 836 11. Appendix A: Additional Building Requirements 838 Appendix A contains additional building requirements that were deemed 839 out of scope for the routing document yet provide ancillary 840 information to the reader. These requirements will need to 841 be addressed by ROLL or other WGs before adoption by the building 842 automation industry will be considered. 844 11.1. Additional Commercial Product Requirements 846 11.1.1. Wired and Wireless Implementations 848 Solutions MUST support both wired and wireless implementations. 850 11.1.2. World-wide Applicability 852 Wireless devices MUST be supportable in the 2.4 GHz ISM band. 853 Wireless devices SHOULD be supportable in the 900 MHz and 868 MHz ISM 854 bands as well. 856 11.1.3. Support of the BACnet Building Protocol 858 Devices implementing the ROLL features MUST be able to support the 859 BACnet protocol. 861 11.1.4. Support of the LON Building Protocol 863 Devices implementing the ROLL features MUST be able to support the 864 LON protocol. 866 11.1.5. Energy-Harvested Sensors 868 RFDs SHOULD be designed to operate using viable energy-harvesting 869 techniques such as ambient light, mechanical action, solar load, air 870 pressure and differential temperature. 872 11.1.6. Communication Distance 874 A source device may be upwards of 1000 feet from its destination.
875 Communication MUST be established between these devices without 876 needing to install other intermediate 'communication only' devices 877 such as repeaters. 879 11.1.7. Automatic Gain Control 881 For wireless implementations, the device radios SHOULD incorporate 882 automatic transmit power regulation to maximize packet transfer and 883 minimize network interference, regardless of network size or density. 885 11.1.8. Cost 887 The total installed infrastructure cost, including but not limited to 888 the media, required infrastructure devices (amortized across the 889 number of devices), and labor to install and commission the network, 890 MUST NOT exceed $1.00/foot for wired implementations. 892 Wireless implementations (total installed cost) must cost no more 893 than 80% of wired implementations. 895 11.2. Additional Installation and Commissioning Requirements 897 11.2.1. Device Setup Time 899 Network setup by the installer MUST take no longer than 20 seconds 900 per device installed. 902 11.2.2. Unavailability of an IT Network 904 It MUST be possible for an application engineer to commission the 905 product prior to the installation of the IT network. 907 11.3. Additional Network Requirements 909 11.3.1. TCP/UDP 911 Connection-based and connectionless services MUST be supported. 913 11.3.2. Data Rate Performance 915 An effective data rate of 20 kbit/s is the lowest acceptable 916 operational data rate on the network. 918 11.3.3. High Speed Downloads 920 Devices receiving a download MAY cease normal operation, but upon 921 completion of the download MUST automatically resume normal 922 operation. 924 11.3.4. Interference Mitigation 926 The network MUST automatically detect interference and migrate the 927 network to a better 802.15.4 channel to improve communication. 928 Channel changes and node responses to the channel change MUST occur 929 within 60 seconds. 931 11.3.5.
Real-time Performance Measures 933 A node transmitting a 'request with expected reply' to another node 934 MUST send the message to the destination and receive the response in 935 not more than 120 msec. This response time SHOULD be achievable with 936 5 or fewer hops in each direction. This requirement assumes network 937 quiescence and a negligible turnaround time at the destination node. 939 11.3.6. Packet Reliability 941 Reliability MUST meet the following minimum criteria: 943 < 1% MAC-layer errors on all messages, after no more than three 944 retries; 946 < 0.1% network-layer errors on all messages, 948 after no more than three additional retries; 950 < 0.01% application-layer errors on all messages. 952 Therefore, application layer messages will fail no more than once 953 every 10,000 messages. 955 11.3.7. Merging Commissioned Islands 957 Subsystems are commissioned by various vendors at various times 958 during building construction. These subnetworks MUST seamlessly 959 merge into networks, and networks MUST seamlessly merge into 960 internetworks, since the end user wants a holistic view of the system. 962 11.3.8. Adjustable System Table Sizes 964 Routing MUST support adjustable router table entry sizes on a per-node 965 basis to maximize limited RAM in the devices. 967 11.4. Prioritized Routing 969 Network and application routing prioritization is required to assure 970 that mission-critical applications (e.g. fire detection) cannot be 971 deferred while less critical applications access the network. 973 11.4.1. Packet Prioritization 975 Routers MUST support quality-of-service prioritization to assure 976 timely response for critical FMS packets. 978 11.5. Constrained Devices 980 The network may be composed of a heterogeneous mix of full, battery- 981 powered and energy-harvested devices. The routing protocol must support 982 these constrained devices. 984 11.5.1.
Proxying for Constrained Devices 986 Routing MUST support inbound packet caches for low-power (battery- 987 powered and energy-harvested) devices when these devices are not accessible 988 on the network. 990 These devices MUST have a designated powered proxying device to which 991 packets will be temporarily routed and cached until the constrained 992 device accesses the network. 994 11.6. Reliability 996 11.6.1. Device Integrity 998 Commercial building devices MUST all be periodically scanned to 999 assure that each device is viable and can communicate data and alarm 1000 information as needed. Network routers SHOULD temporarily maintain 1001 previous packet-flow information to minimize overall network 1002 overhead. 1004 Authors' Addresses 1006 Jerry Martocci 1007 Johnson Controls 1008 507 E. Michigan Street 1009 Milwaukee, Wisconsin, 53202 1010 USA 1012 Phone: 414.524.4010 1013 Email: jerald.p.martocci@jci.com 1015 Nicolas Riou 1016 Schneider Electric 1017 Technopole 38TEC T3 1018 37 quai Paul Louis Merlin 1019 38050 Grenoble Cedex 9 1020 France 1022 Phone: +33 4 76 57 66 15 1023 Email: nicolas.riou@fr.schneider-electric.com 1025 Pieter De Mil 1026 Ghent University - IBCN 1027 G. Crommenlaan 8 bus 201 1028 Ghent 9050 1029 Belgium 1031 Phone: +32-9331-4981 1032 Fax: +32-9331-4899 1033 Email: pieter.demil@intec.ugent.be 1035 Wouter Vermeylen 1036 Arts Centre Vooruit 1037 ??? 1038 Ghent 9000 1039 Belgium 1041 Phone: ??? 1042 Fax: ??? 1043 Email: wouter@vooruit.be 1045 Intellectual Property Statement 1047 The IETF takes no position regarding the validity or scope of any 1048 Intellectual Property Rights or other rights that might be claimed to 1049 pertain to the implementation or use of the technology described in 1050 this document or the extent to which any license under such rights 1051 might or might not be available; nor does it represent that it has 1052 made any independent effort to identify any such rights.
Information 1053 on the procedures with respect to rights in RFC documents can be 1054 found in BCP 78 and BCP 79. 1056 Copies of IPR disclosures made to the IETF Secretariat and any 1057 assurances of licenses to be made available, or the result of an 1058 attempt made to obtain a general license or permission for the use of 1059 such proprietary rights by implementers or users of this 1060 specification can be obtained from the IETF on-line IPR repository at 1061 http://www.ietf.org/ipr. 1063 The IETF invites any interested party to bring to its attention any 1064 copyrights, patents or patent applications, or other proprietary 1065 rights that may cover technology that may be required to implement 1066 this standard. Please address the information to the IETF at 1067 ietf-ipr@ietf.org. 1069 Disclaimer of Validity 1071 This document and the information contained herein are provided on an 1072 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1073 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 1074 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 1075 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1076 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1077 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1079 Copyright Statement 1081 Copyright (C) The IETF Trust (2008). 1083 This document is subject to the rights, licenses and restrictions 1084 contained in BCP 78, and except as set forth therein, the authors 1085 retain all their rights. 1087 Acknowledgment 1089 Funding for the RFC Editor function is currently provided by the 1090 Internet Society.