Networking Working Group                                J. Martocci, Ed.
Internet-Draft                                     Johnson Controls Inc.
Intended status: Informational                             Pieter De Mil
Expires: April 24, 2009                            Ghent University IBCN
                                                            W. Vermeylen
                                                     Arts Centre Vooruit
                                                        October 24, 2008

      Building Automation Routing Requirements in Low Power and Lossy
                                 Networks
                 draft-ietf-roll-building-routing-reqs-00

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on April 24, 2009.

Copyright Notice

   Copyright (C) The IETF Trust (2008).

Abstract

   The Routing Over Low power and Lossy networks (ROLL) Working Group
   has been chartered to work on routing solutions for Low Power and
   Lossy networks (LLNs) in various markets: Industrial, Commercial
   (Building), Home, and Urban.  Pursuant to this effort, this document
   defines the routing requirements for building automation.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

Table of Contents

   1. Terminology
   2. Introduction
      2.1. Facility Management System (FMS) Topology
           2.1.1. Introduction
           2.1.2. Sensors/Actuators
           2.1.3. Area Controllers
           2.1.4. Zone Controllers
      2.2. Installation Methods
           2.2.1. Wired Communication Media
           2.2.2. Device Density
           2.2.3. Installation Procedure
   3. Building Automation Applications
      3.1. Locking and Unlocking the Building
      3.2. Building Energy Conservation
      3.3. Inventory and Remote Diagnosis of Safety Equipment
      3.4. Life Cycle of Field Devices
      3.5. Surveillance
      3.6. Emergency
      3.7. Public Address
   4. Building Automation Routing Requirements
      4.1. Installation
           4.1.1. Zero-Configuration Installation
           4.1.2. Sleeping Devices
           4.1.3. Local Testing
           4.1.4. Device Replacement
      4.2. Scalability
           4.2.1. Network Domain
           4.2.2. Peer-to-peer Communication
      4.3. Mobility
           4.3.1. Mobile Device Association
      4.4. Resource Constrained Devices
           4.4.1. Limited Processing Power Sensors/Actuators
           4.4.2. Limited Processing Power Controllers
           4.4.3. Proxying for Constrained Devices
      4.5. Prioritized Routing
           4.5.1. Packet Prioritization
      4.6. Addressing
           4.6.1. Unicast/Multicast/Anycast
      4.7. Manageability
           4.7.1. Firmware Upgrades
           4.7.2. Diagnostics
           4.7.3. Route Tracking
      4.8. Compatibility
           4.8.1. IPv4 Compatibility
           4.8.2. Maximum Packet Size
      4.9. Route Selection
           4.9.1. Path Cost
           4.9.2. Path Adaptation
           4.9.3. Route Redundancy
           4.9.4. Route Discovery Time
           4.9.5. Route Preference
           4.9.6. Path Symmetry
           4.9.7. Path Persistence
      4.10. Reliability
           4.10.1. Device Integrity
   5. Traffic Pattern
   6. Open Issues
   7. Security Considerations
   8. IANA Considerations
   9. Acknowledgments
   10. References
       10.1. Normative References
       10.2. Informative References
   11. Appendix A: Additional Building Requirements
       11.1. Additional Commercial Product Requirements
             11.1.1. Wired and Wireless Implementations
             11.1.2. World-wide Applicability
             11.1.3. Support of the BACnet Building Protocol
             11.1.4. Support of the LON Building Protocol
             11.1.5. Energy Harvested Sensors
             11.1.6. Communication Distance
             11.1.7. Automatic Gain Control
             11.1.8. Cost
       11.2. Additional Installation and Commissioning Requirements
             11.2.1. Device Setup Time
             11.2.2. Unavailability of an IT network
       11.3. Additional Network Requirements
             11.3.1. TCP/UDP
             11.3.2. Data Rate Performance
             11.3.3. High Speed Downloads
             11.3.4. Interference Mitigation
             11.3.5. Real-time Performance Measures
             11.3.6. Packet Reliability
             11.3.7. Merging Commissioned Islands
             11.3.8. Adjustable System Table Sizes
   Disclaimer of Validity

1. Terminology

   For a description of the terminology used in this specification,
   please see the Terminology I-D referenced in Section 10.2.

2. Introduction

   Commercial buildings have been fitted with pneumatic and
   subsequently electronic communication pathways connecting sensors to
   their controllers for over one hundred years.  Recent economic and
   technical advances in wireless communication allow facilities to
   increasingly utilize a wireless solution in lieu of a wired
   solution, thereby reducing installation costs while maintaining
   highly reliable communication.  Wireless solutions will be adapted
   from their existing wired counterparts in many building
   applications including, but not limited to, Heating, Ventilation,
   and Air Conditioning (HVAC), Lighting, Physical Security, Fire, and
   Elevator systems.  These devices will be developed to reduce
   installation costs while increasing installation and retrofit
   flexibility.  Sensing devices may be battery or mains powered.
   Actuators and area controllers will be mains powered.

   Facility Management Systems (FMS) are deployed in a large set of
   vertical markets including universities; hospitals; government
   facilities; Kindergarten through High School (K-12); pharmaceutical
   manufacturing facilities; and single-tenant or multi-tenant office
   buildings.  These buildings range in size from 100K sq. ft.
   structures (5-story office buildings) and 1M sq. ft. skyscrapers
   (100 stories) to complex government facilities such as the Pentagon.
   The described topology is meant to be the model used in all of these
   environments, but it clearly must be tailored to the building class,
   building tenant, and vertical market being served.

   The following sections describe the sensor, actuator, area
   controller, and zone controller layers of the topology.  (NOTE: The
   Building Controller and Enterprise layers of the FMS are excluded
   from this discussion since they typically deal in communication
   rates requiring WLAN communication technologies.)

2.1. Facility Management System (FMS) Topology

2.1.1. Introduction

   To understand the network systems requirements of a facility
   management system in a commercial building, this document uses a
   framework to describe the basic functions and composition of the
   system.  An FMS is a horizontally layered system of sensors,
   actuators, controllers, and user interface devices.  Additionally,
   an FMS may also be divided vertically into similar, but distinct,
   building subsystems such as HVAC, Fire, Security, Lighting, Shutters,
   and Elevator control systems, as denoted in Figure 1.

   Much of the makeup of an FMS is optional and installed at the behest
   of the customer.  Sensors and actuators have no standalone
   functionality.  All other devices support partial or complete
   standalone functionality.  These devices can optionally be tethered
   to form a more cohesive system.  The customer requirements dictate
   the level of integration within the facility.  This architecture
   provides excellent fault tolerance since each node is designed to
   operate in an independent mode if the higher layers are unavailable.

                  +------+ +-----+ +------+ +------+ +------+ +------+
   Bldg App'ns    |      | |     | |      | |      | |      | |      |
                  |      | |     | |      | |      | |      | |      |
   Building Cntl  |      | |     | |  S   | |  L   | |  S   | |  E   |
                  |      | |     | |  E   | |  I   | |  H   | |  L   |
   Area Control   |  H   | |  F  | |  C   | |  G   | |  U   | |  E   |
                  |  V   | |  I  | |  U   | |  H   | |  T   | |  V   |
   Zone Control   |  A   | |  R  | |  R   | |  T   | |  T   | |  A   |
                  |  C   | |  E  | |  I   | |  I   | |  E   | |  T   |
   Actuators      |      | |     | |  T   | |  N   | |  R   | |  O   |
                  |      | |     | |  Y   | |  G   | |  S   | |  R   |
   Sensors        |      | |     | |      | |      | |      | |      |
                  +------+ +-----+ +------+ +------+ +------+ +------+

                  Figure 1: Building Systems and Devices

2.1.2. Sensors/Actuators

   As Figure 1 indicates, an FMS may be composed of many functional
   stacks or silos that are interoperably woven together via Building
   Applications.
   Each silo has an array of sensors that monitor the environment and
   actuators that affect the environment as determined by the upper
   layers of the FMS topology.  The sensors typically are the leaves of
   the network tree structure, providing environmental data into the
   system.  The actuators are the sensors' counterparts, modifying the
   characteristics of the system based on the input sensor data and the
   applications deployed.

2.1.3. Area Controllers

   An area describes a small physical locale within a building,
   typically a room.  As noted in Figure 1, the HVAC, Security, and
   Lighting functions within a building address area- or room-level
   applications.  Area controllers are fed by sensor inputs that
   monitor the environmental conditions within the room.  Common
   sensors found in many rooms that feed the area controllers include
   temperature, occupancy, lighting load, solar load, and relative
   humidity sensors.  Sensors found in specialized rooms (such as
   chemistry labs) might include air flow, pressure, CO2, and CO
   particle sensors.  Room actuation includes temperature setpoint,
   lights, and blinds/curtains.

2.1.4. Zone Controllers

   Zone control supports a similar set of characteristics as area
   control, albeit over an extended space.  A zone is normally a
   logical grouping or functional division of a commercial building.
   A zone may also coincidentally map to a physical locale such as a
   floor.

   Zone control may have direct sensor inputs (smoke detectors for
   fire), controller inputs (room controllers for air handlers in
   HVAC), or both (door controllers and tamper sensors for security).
   Like area/room controllers, zone controllers are standalone devices
   that operate independently or may be attached to the larger network
   for more synergistic control.

2.2. Installation Methods

2.2.1. Wired Communication Media

   Commercial controllers are traditionally deployed in a facility
   using twisted-pair serial media following the EIA-485 electrical
   standard, operating nominally at 38400 to 76800 baud.  This allows
   runs to 5000 ft without a repeater.  With the maximum of three
   repeaters, a single communication trunk can serpentine 15000 ft.
   EIA-485 is a multi-drop medium allowing upwards of 255 devices to be
   connected to a single trunk.

   Most sensors and virtually all actuators currently used in
   commercial buildings are "dumb", non-communicating hardwired
   devices.  However, sensor buses, which are used for smart sensors
   and point multiplexing, are beginning to be deployed by vendors.
   The Fire industry deploys addressable fire devices, which usually
   use some form of proprietary communication wiring driven by fire
   codes.

2.2.2. Device Density

   Device density differs depending on the application and as dictated
   by local building code requirements.  The following sections detail
   typical installation densities for different applications.

2.2.2.1. HVAC Device Density

   HVAC room applications typically have sensors/actuators and
   controllers spaced about 50 ft apart.  In most cases there is a 3:1
   ratio of sensors/actuators to controllers.  That is, for each room
   there is an installed temperature sensor, flow sensor, and damper
   actuator for the associated room controller.

   HVAC equipment room applications are quite different.  An air
   handler system may have a single controller with upwards of 25
   sensors and actuators within 50 ft of the air handler.  A chiller or
   boiler is also controlled with a single equipment controller
   instrumented with 25 sensors and actuators.  Each of these devices
   would be individually addressed since the devices are mandated or
   optional as defined by the specified HVAC application.
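As a rough illustration, the arithmetic below turns these densities into a node-count estimate for one floor.  The 50 ft device spacing and the 3:1 sensor/actuator-to-controller ratio come from the text above; the floor area is an illustrative assumption, not a figure from this document.

```python
# Rough estimate of HVAC room-level nodes on one floor.
# Assumption: a 20,000 sq. ft. floor (one floor of a 100K sq. ft.,
# 5-story building).  Spacing and ratio are taken from the text.

FLOOR_SQFT = 20_000
ROOM_PITCH_FT = 50                    # devices spaced about every 50 ft
SQFT_PER_ROOM = ROOM_PITCH_FT ** 2    # ~one controller per 50 ft x 50 ft cell

rooms = FLOOR_SQFT // SQFT_PER_ROOM   # room controllers on the floor
controllers = rooms
sensors_actuators = 3 * controllers   # temperature, flow, damper per room

print(controllers, sensors_actuators)  # -> 8 24
```

Under these assumptions a single floor carries on the order of tens of routing nodes for HVAC alone, before fire, lighting, and security devices are added.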
   Air handlers typically serve one or two floors of the building.
   Chillers and boilers may be installed per floor, but many times
   service a wing, a building, or the entire complex via a central
   plant.

   These numbers are typical.  In special cases, such as clean rooms,
   operating rooms, pharmaceutical facilities, and labs, the ratio of
   sensors to controllers can increase by a factor of three.  Tenant
   installations such as malls would opt for packaged units where much
   of the sensing and actuation is integrated into the unit.  Here, a
   single device address would serve the entire unit.

2.2.2.2. Fire Device Density

   Fire systems are much more uniformly installed, with smoke detectors
   installed about every 50 feet.  This is dictated by local building
   codes.  Fire pull boxes are installed uniformly about every 150
   feet.  A fire controller will service a floor or wing.  The
   fireman's fire panel will service the entire building and typically
   is installed in the atrium.

2.2.2.3. Lighting Device Density

   Lighting is also very uniformly installed, with ballasts installed
   approximately every 10 feet.  A lighting panel typically serves 48
   to 64 zones.  Wired systems typically tether many lights together
   into a single zone.  Wireless systems configure each fixture
   independently to increase flexibility and reduce installation costs.

2.2.2.4. Physical Security Device Density

   Security systems are non-uniformly oriented, with heavy density near
   doors and windows and lighter density in the building interior
   space.  The recent influx of interior and perimeter camera systems
   is increasing the security footprint.  These cameras are atypical
   endpoints requiring upwards of 1 Mbit/s data rates per camera, in
   contrast to the few kbit/s needed by most other FMS sensing
   equipment.  Previously, camera systems had been deployed on
   proprietary wired high-speed networks.
   More recent implementations utilize wired or wireless IP cameras
   integrated into the enterprise LAN.

2.2.3. Installation Procedure

   Wired FMS installation is a multifaceted procedure depending on the
   extent of the system and the software interoperability requirements.
   However, at the sensor/actuator and controller level, the procedure
   is typically a two- or three-step process.

   Most FMS equipment is 24 VAC equipment that can be installed by a
   low-voltage electrician.  He/she arrives on-site during the
   construction of the building, prior to the drywall and ceiling
   installation.  This allows him/her to allocate wall space, easily
   land the equipment, and run the wired controller and sensor
   networks.  The Building Controllers and Enterprise network are not
   normally installed until months later.  The electrician completes
   his task by running a wire verification procedure that shows proper
   continuity between the devices and proper local operation of the
   devices.

   Later in the installation cycle, the higher-order controllers are
   installed, programmed, and commissioned together with the
   previously installed sensors, actuators, and controllers.  In most
   cases the IP network is still not operable.  The Building
   Controllers are completely commissioned using a crossover cable or
   a temporary IP switch together with static IP addresses.

   Once the IP network is operational, the FMS may optionally be added
   to the enterprise network.  Wireless installation will necessarily
   need to keep the same workflow.  The electrician will install the
   products as before and run local functional tests between the
   wireless devices to assure operation before leaving the job.  The
   electrician does not carry a laptop, so the commissioning must be
   built into the device operation.

3. Building Automation Applications

   Vooruit is an arts centre housed in a restored monument dating from
   1913.
   This complex monument consists of over 350 different rooms,
   including meeting rooms, large public halls, and theaters serving as
   many as 2500 guests.  A number of use cases regarding Vooruit are
   described in the following text.  The situations and needs described
   in these use cases can also be found in all automated large
   buildings, such as airports and hospitals.

3.1. Locking and Unlocking the Building

   The member of the cleaning staff who arrives first in the morning
   unlocks the building (or a part of it) from the control room.  This
   means that several doors are unlocked, the alarms are switched off,
   the heating turns on, some lights switch on, etc.  Similarly, the
   last person leaving the building has to lock the building.  This
   will lock all the outer doors, turn the alarms on, switch off
   heating and lights, etc.

   The "building locked" or "building unlocked" event needs to be
   delivered to a subset of all the sensors and actuators.  It can be
   beneficial if those field devices form a group (e.g., "all-sensors-
   actuators-interested-in-lock/unlock-events").  Alternatively, the
   area and zone controllers could form a group where the arrival of
   such an event results in each area and zone controller initiating
   unicast or multicast within the LLN.

   This use case is also described in the home automation routing
   requirements draft [I-D.ietf-roll-home-routing-reqs], although the
   requirement about preventing the "popcorn effect" can be relaxed a
   little in building automation.  It would be nice if lights,
   roll-down shutters, and other actuators in the same room, or in
   areas with transparent walls, execute the command around (not 'at')
   the same time (a tolerance of 200 ms is allowed).

3.2. Building Energy Conservation

   A room that is not in use should not be heated, air conditioned, or
   ventilated, and the lighting should be turned off.
   In a building with a lot of rooms, it can happen quite frequently
   that someone forgets to switch off the HVAC and lighting.  This is a
   real waste of valuable energy.  To prevent this from happening, the
   janitor can program the building according to the day's schedule.
   This way, lighting and HVAC are turned on prior to the use of a room
   and turned off afterwards.  Using such a system, Vooruit has
   realized a saving of 35% on its gas and electricity bills.

3.3. Inventory and Remote Diagnosis of Safety Equipment

   Each month Vooruit is obliged to make an inventory of its safety
   equipment.  This task takes two working days.  Each fire
   extinguisher (100), fire blanket (10), fire-resistant door (120),
   and evacuation plan (80) must be checked for presence and proper
   operation.  Also, the battery and lamp of every safety lamp must be
   checked before each public event (safety laws).  Automating this
   process using asset tracking and low-power wireless technologies
   would heavily cut into these working hours.

   It is important that these messages are delivered very reliably and
   that the power consumption of the sensors/actuators attached to
   this safety equipment is kept at a very low level.

3.4. Life Cycle of Field Devices

   Some field devices (e.g., smoke detectors) must be replaced
   periodically.  Devices must be easily added to and deleted from the
   network to support augmenting sensors/actuators during construction.

   A secure mechanism is needed to remove the old device and install
   the new device.  New devices need to be authenticated before they
   can participate in the routing process of the LLN.  After the
   authentication, zero-configuration of the routing protocol is
   necessary.

3.5. Surveillance

   Ingress and egress are real-time applications needing response
   times below 500 msec.
   Each door must support local programming to restrict use on a
   per-person basis with respect to time of day and person entering.
   While much of the application is localized at the door, tamper,
   door-ajar, and forced-entry events must be routed to one or more
   fixed or mobile user devices within 5 seconds.

3.6. Emergency

   In case of an emergency, it is very important that all the visitors
   be evacuated as quickly as possible.  The fire and smoke detectors
   have to set off an alarm and alert the mobile personnel on their
   user devices (e.g., PDAs).  All emergency exits have to be
   instantly unlocked, and the emergency lighting has to guide the
   visitors to these exits.  The necessary sprinklers have to be
   activated, and the electricity grid has to be monitored in case it
   becomes necessary to shut down some parts of the building.
   Emergency services have to be notified instantly.

   A wireless system could bring in some extra safety features.
   Locating fire fighters and guiding them through the building could
   be a life-saving application.

   These life-critical applications must take routing precedence over
   other network traffic.  Commands entered during these emergencies
   must be properly authenticated by device, user, and command request.

3.7. Public Address

   It should be possible to send audio and text messages to the
   visitors in the building.  These messages can be very diverse,
   e.g., ASCII text boards displaying the name of the event in a room,
   and audio announcements such as delays in the program, lost and
   found children, and evacuation orders.

   The control network must be able to readily sense the audience in
   an area and deliver applicable message content.

4. Building Automation Routing Requirements

   Following are the building automation routing requirements for a
   network used to integrate building sensor, actuator, and control
   products.
   These requirements have been limited to routing requirements only.
   These requirements are written without presuming any preordained
   network topology, physical media (wired), or radio technology
   (wireless).  See Appendix A for additional requirements that have
   been deemed outside the scope of this document yet will pertain to
   the successful deployment of building automation systems.

4.1. Installation

   Building control systems typically are installed and tested by
   electricians having little computer knowledge and no network
   knowledge whatsoever.  These systems are often installed during the
   building construction phase, before the drywall and ceilings are in
   place.  There is never an IP network in place during this
   installation.

   In retrofit applications, pulling wires from sensors to controllers
   can be costly and in some applications (e.g., museums) not feasible.

   Local (ad hoc) testing of sensors and room controllers must be
   completed before the tradesperson can complete his/her work.
   System-level commissioning will later be performed by a more
   computer-savvy person with access to a commissioning device (e.g.,
   a laptop computer).  The completely installed and commissioned IP
   network may or may not be in place at this time.  Following are the
   installation routing requirements.

4.1.1. Zero-Configuration Installation

   It MUST be possible to fully commission network devices without
   requiring any additional commissioning device (e.g., laptop).  The
   device MAY support up to sixteen integrated switches to uniquely
   identify the device on the network.

4.1.2. Sleeping Devices

   Sensing devices must be able to utilize battery power or energy-
   harvesting techniques for power.  This presumes a need for devices
   that most often sleep.  Routing must support these catatonic
   devices to assure that established routes do not utilize sleeping
   devices.
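A minimal sketch of this constraint — established routes never use a sleeping device as a forwarder — is shown below.  The node attributes, cost metric, and function names are hypothetical illustrations, not part of any specified protocol.

```python
# Sketch: choose a next hop while excluding battery-powered nodes that
# are currently asleep.  All fields and the link-cost model are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: str
    link_cost: float      # lower is better (e.g., an ETX-like metric)
    mains_powered: bool   # sleeping devices are battery/harvesting powered
    awake: bool

def select_next_hop(neighbors):
    """Return the lowest-cost neighbor able to forward right now.

    Sleeping (catatonic) devices are never used as forwarders; they can
    still be route destinations, reached via a powered proxy.
    """
    eligible = [n for n in neighbors if n.mains_powered or n.awake]
    if not eligible:
        return None   # caller must buffer, or fall back to a proxy
    return min(eligible, key=lambda n: n.link_cost)

hops = [Neighbor("battery-1", 1.0, False, False),   # best cost, but asleep
        Neighbor("mains-7",  1.5, True,  True)]
print(select_next_hop(hops).node_id)  # -> mains-7
```

Note that the sleeping node is skipped even though its link cost is lower; the requirement makes forwarder eligibility, not cost, the first criterion.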
545 It must also define routing rules when these devices need to access 546 the network. Communication to these devices must be bidirectional. 547 Routing must support proxies that can cache the inbound data for the 548 sleeping device until the device awakens. Routing must understand 549 the selected proxy for the sleeping device. 551 Batteries must be operational for at least 5 years when the sensing 552 device is transmitting its data (e.g. 64 bytes) once per minute. 553 This requires that sleeping devices must have minimal network access 554 time when they awake and transmit onto the network. 556 4.1.3. Local Testing 558 The local sensors and requisite actuators and controllers must be 559 testable within the locale (e.g. room) to assure communication 560 connectivity and local operation without requiring other systemic 561 devices. Routing must allow for temporary ad hoc paths to be 562 established that are updated as the network physically and 563 functionally expands. 565 4.1.4. Device Replacement 567 Replacement devices must be plug-and-play with no additional setup 568 than what is normally required for a new device. No bound 569 information from other nodes MUST need be reconfigured. 571 4.2. Scalability 573 Building control systems are designed for facilities from 50000 sq. 574 ft. to 1M+ sq. ft. The networks that support these systems must 575 cost-effectively scale accordingly. In larger facilities 576 installation may occur simultaneously on various wings or floors, yet 577 the end system must seamlessly merge. Following are the scalability 578 requirements. 580 4.2.1. Network Domain 582 The routing protocol MUST be able to support networks with at least 583 1000 routers and 1000 hosts. Subnetworks (e.g. rooms, primary 584 equipment) within the network must support upwards to 255 sensors 585 and/or actuators. 587 . 589 4.2.2. 
4.2.2. Peer-to-peer Communication

The data domain for commercial FMS systems may sprawl across a vast portion of the physical domain. For example, a chiller may reside in the facility's basement due to its size, yet the associated cooling towers will reside on the roof. The cold-water supply and return pipes serpentine through all the intervening floors. The feedback control loops for these systems require data from across the facility.

Network devices must be able to communicate in a peer-to-peer manner with all other devices on the network. Thus the routing protocol MUST provide routes to any other device without being subject to a constrained path via a gating device.

4.3. Mobility

Most devices are affixed to walls or installed on ceilings within buildings. Hence the mobility requirements for commercial buildings are few. However, in wireless environments location tracking of occupants and assets is gaining favor.

4.3.1. Mobile Device Association

Mobile devices SHOULD be capable of unjoining (handing off) from an old network and joining onto a new network within 15 seconds.

4.4. Resource Constrained Devices

Sensing and actuator device processing power and memory may be 4 orders of magnitude less (i.e. 10,000x) than those of many more traditional client devices on an IP network. The routing mechanisms must therefore be tailored to fit these resource-constrained devices.

4.4.1. Limited Processing Power Sensors/Actuators

The software stack requirements for sensors and actuators MUST be implementable in 8-bit devices with no more than 128 KB of flash memory (including at least 32 KB for the application code) and no more than 8 KB of RAM (including at least 1 KB of RAM available for the application).
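To see how tight the 8 KB RAM bound above is for routing state, consider a back-of-the-envelope budget. The per-entry sizes and the stack/buffer reservation below are illustrative assumptions only, not figures from this document:

```python
# Illustrative RAM budget for an 8 KB sensor/actuator node (sizes assumed).
RAM_BYTES = 8 * 1024
APP_RESERVED = 1 * 1024       # reserved for the application, per this document
STACK_AND_BUFFERS = 4 * 1024  # assumed: protocol stack, packet buffers, misc

NEIGHBOR_ENTRY = 16           # assumed: address, link cost, flags
ROUTE_ENTRY = 8               # assumed: destination, next hop, metric

def max_route_entries(neighbors: int) -> int:
    """Route entries that fit after the app, stack and neighbor table."""
    free = RAM_BYTES - APP_RESERVED - STACK_AND_BUFFERS
    free -= neighbors * NEIGHBOR_ENTRY
    return max(free // ROUTE_ENTRY, 0)

# With 32 tracked neighbors, roughly this many route entries remain:
print(max_route_entries(32))
```

A budget of this shape is one motivation for the per-node adjustable table sizes requested in Appendix A: a few hundred compact entries is all such a device can hold.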
4.4.2. Limited Processing Power Controllers

The software stack requirements for room controllers SHOULD be implementable in 8-bit devices with no more than 256 KB of flash memory (including at least 32 KB for the application code) and no more than 8 KB of RAM (including at least 1 KB of RAM available for the application).

4.4.3. Proxying for Constrained Devices

Routing MUST support inbound packet caches for low-power (battery and energy-harvested) devices when these devices are not accessible on the network.

These devices MUST have a designated powered proxying device to which packets will be temporarily routed and cached until the constrained device accesses the network.

4.5. Prioritized Routing

Network and application routing prioritization is required to assure that mission-critical applications (e.g. fire detection) cannot be deferred while less critical applications access the network.

4.5.1. Packet Prioritization

Routers MUST support quality-of-service prioritization to assure timely response for critical FMS packets.

4.6. Addressing

Facility management systems require different communication schemas to solicit or post network information. Broadcasts or anycasts need to be used to resolve unresolved references within a device when the device first joins the network.

As with any network communication, broadcasting should be minimized. This is especially a problem for small embedded devices with limited network bandwidth. In many cases a global broadcast could be replaced with a multicast, since the application knows the application domain. Broadcasts and multicasts are typically used for network joins and application binding in embedded systems.

4.6.1. Unicast/Multicast/Anycast

Routing MUST support anycast, unicast, multicast and broadcast services (or their IPv6 equivalents).
4.7. Manageability

In addition to the initial installation of the system (see Section 4.1), the ongoing maintenance of the system is equally important to keep simple and inexpensive.

4.7.1. Firmware Upgrades

To support high-speed code downloads, routing MUST support parallel downloads to targeted devices yet guarantee packet delivery.

4.7.2. Diagnostics

To improve diagnostics, the network layer SHOULD be able to be placed into and out of 'verbose' mode. Verbose mode is a temporary debugging mode that provides additional communication information, including at least the total number of packets sent, packets received, number of failed communication attempts, and neighbor table and routing table entries.

4.7.3. Route Tracking

Route diagnostics SHOULD be supported, providing information such as path quality, number of hops, and available alternate active paths with associated costs.

4.8. Compatibility

The building automation industry adheres to application layer protocol standards to achieve vendor interoperability. These standards are BACnet and LON. It is estimated that fully 80% of the customer bid requests received worldwide will require compliance with one or both of these standards. ROLL routing will therefore need to dovetail with these application protocols to assure acceptance in the building automation industry. These protocols have been in place for over 10 years. Many sites will require backwards compatibility with the existing legacy devices.

4.8.1. IPv4 Compatibility

The routing protocol MUST define a communication scheme to assure compatibility of IPv4 and IPv6 devices.

4.8.2. Maximum Packet Size

Routing MUST support packet sizes up to 1526 octets (to be backwards compatible with 802.3 subnetworks).

4.9. Route Selection

Route selection determines the reliability and quality of the communication paths among the devices.
Optimizing the routes over time resolves any nuances that developed at system startup, when nodes were asynchronously adding themselves to the network. Path adaptation will reduce latency if the path costs consider hop count as a cost attribute.

4.9.1. Path Cost

The routing protocol MUST support a range of metrics and optimize (constrained) paths according to these metrics. These metrics SHOULD include signal strength, available bandwidth, hop count and communication error rates.

4.9.2. Path Adaptation

Communication paths MUST adapt over time toward optimality with respect to the chosen metric(s) (e.g. signal quality).

4.9.3. Route Redundancy

To reduce real-time latency, the network layer SHOULD be configurable to allow secondary and tertiary paths to be established and used upon failure of the primary path.

4.9.4. Route Discovery Time

Route discovery occurring during packet transmission MUST NOT exceed 120 msec.

4.9.5. Route Preference

The route discovery mechanism SHOULD allow a source node (sensor) to dictate a configured destination node (controller) as a preferred routing path.

4.9.6. Path Symmetry

The network layer SHOULD support both asymmetric and symmetric routes as requested by the application layer. When the application layer selects asymmetry, the network layer MAY elect to find either asymmetric or symmetric routes. When the application layer requests symmetric routes, the network layer MUST utilize only symmetric routes.

4.9.7. Path Persistence

Devices SHOULD optionally persist communication paths across reboots.

4.10. Reliability

4.10.1. Device Integrity

Commercial building devices MUST all be periodically scanned to assure that each device is viable and can communicate data and alarm information as needed. Network routers SHOULD retain recent packet flow information to minimize overall network overhead.
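The path-cost requirement of Section 4.9.1 can be illustrated by folding hop count and per-link error rate into a single additive cost. The formula and weights below are a hypothetical sketch, not a mandated metric:

```python
# Hypothetical composite path cost: each hop contributes a base cost plus a
# penalty for its expected transmissions, 1 / (1 - error_rate) (i.e. ETX).
def path_cost(link_error_rates, hop_weight=1.0, etx_weight=1.0):
    cost = 0.0
    for err in link_error_rates:
        etx = 1.0 / (1.0 - err)   # expected transmissions on this link
        cost += hop_weight + etx_weight * etx
    return cost

# A longer path over clean links can beat a shorter path over very lossy ones:
short_lossy = [0.6, 0.6]            # 2 hops, 60% error rate each
long_clean  = [0.01, 0.01, 0.01]    # 3 hops, 1% error rate each
print(path_cost(long_clean) < path_cost(short_lossy))
```

Because the cost is additive per hop, such a metric also satisfies the observation above that considering hop count reduces latency: every extra hop carries at least the base cost.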
5. Traffic Pattern

The independent nature of the automation systems within a building weighs heavily on the network traffic patterns. Much of the real-time sensor data stays within the local environment. Alarming and other event data will percolate to higher layers.

Systemic data may be either polled or event-based. Polled data systems will generate a uniform packet load on the network. This architecture has proven not to be scalable. Most vendors have developed event-based systems that pass data on events. These systems are highly scalable and generate little data on the network at quiescence. Unfortunately, these systems will generate a heavy load on startup, since all the initial data must migrate to the controller level. They will also generate a temporary but heavy load during firmware upgrades. This latter load can normally be mitigated by performing these downloads during off-peak hours.

Devices will need to reference peers occasionally for sensor data or to coordinate across systems. Normally, though, data will migrate from the sensor level upwards through the local, area, then supervisory levels. Bottlenecks will typically form at the funnel point from the area controllers to the supervisory controllers.

6. Open Issues

Other items to be addressed in further revisions of this document include:

   All known open items have been completed.

7. Security Considerations

Security policies, especially wireless encryption and overall device authentication, need to be considered. These issues are out of scope for the routing requirements, but could have an impact on the processing capabilities of the sensors and controllers.

As noted above, FMS systems are typically highly configurable in the field, and hence the security policy is most often dictated by the type of building in which the FMS is being installed.
8. IANA Considerations

This document includes no request to IANA.

9. Acknowledgments

J. P. Vasseur, Ted Humpal and Zach Shelby are gratefully acknowledged for their contributions to this document.

This document was prepared using 2-Word-v2.0.template.dot.

10. References

10.1. Normative References

10.2. Informative References

"RS-485 EIA Standard: Standard for Electrical Characteristics of Generators and Receivers for use in Balanced Digital Multipoint Systems", April 1983.

"BACnet: A Data Communication Protocol for Building Automation and Control Networks", ANSI/ASHRAE Standard 135-2004, 2004.

"LON: Open Data Communication in Building Automation, Controls and Building Management - Building Network Protocol - Part 1: Protocol Stack", November 25, 2005.

Vasseur, JP., "Terminology in Low power And Lossy Networks", draft-vasseur-roll-terminology-02 (work in progress).

[I-D.ietf-roll-home-routing-reqs]
Brandt, A., "Home Automation Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-home-routing-reqs-03 (work in progress), September 2008.

11. Appendix A: Additional Building Requirements

Appendix A contains additional building requirements that were deemed out of scope for the routing document yet provide ancillary informational substance to the reader. These requirements will need to be addressed by ROLL or other WGs before adoption by the building automation industry will be considered.

11.1. Additional Commercial Product Requirements

11.1.1. Wired and Wireless Implementations

Solutions MUST support both wired and wireless implementations.

11.1.2. World-wide Applicability

Wireless devices MUST be supportable in the 2.4 GHz ISM band. Wireless devices SHOULD be supportable in the 900 MHz and 868 MHz ISM bands as well.

11.1.3. Support of the BACnet Building Protocol

Devices implementing the ROLL features MUST be able to support the BACnet protocol.
11.1.4. Support of the LON Building Protocol

Devices implementing the ROLL features MUST be able to support the LON protocol.

11.1.5. Energy-Harvested Sensors

RFDs SHOULD be targeted for operation using viable energy harvesting techniques such as ambient light, mechanical action, solar load, air pressure and differential temperature.

11.1.6. Communication Distance

A source device may be up to 1000 feet from its destination. Communication MUST be established between these devices without needing to install other intermediate 'communication only' devices such as repeaters.

11.1.7. Automatic Gain Control

For wireless implementations, the device radios SHOULD incorporate automatic transmit power regulation to maximize packet transfer and minimize network interference, regardless of network size or density.

11.1.8. Cost

The total installed infrastructure cost, including but not limited to the media, the required infrastructure devices (amortized across the number of devices) and the labor to install and commission the network, MUST NOT exceed $1.00/foot for wired implementations.

Wireless implementations (total installed cost) must cost no more than 80% of wired implementations.

11.2. Additional Installation and Commissioning Requirements

11.2.1. Device Setup Time

Network setup by the installer MUST take no longer than 20 seconds per device installed.

11.2.2. Unavailability of an IT Network

It MUST be possible for an application engineer to commission products prior to the installation of the IT network.

11.3. Additional Network Requirements

11.3.1. TCP/UDP

Connection-based and connectionless services MUST be supported.

11.3.2. Data Rate Performance

An effective data rate of 20 kbps is the minimum acceptable operational data rate on the network.
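As a sanity check, the 20 kbps floor above can be related to per-packet serialization time, using payload sizes taken from this document (the 64-byte sensor report of Section 4.1.2 and the 1526-octet maximum of Section 4.8.2). Propagation delay and MAC overhead are ignored; this is an illustration, not a requirement:

```python
# Time on the wire for a packet at a given data rate (overheads ignored).
def serialization_ms(packet_octets: int, kbps: float) -> float:
    return packet_octets * 8 / kbps  # octets -> bits, bits / (kbit/s) = ms

# A small 64-octet sensor report at the 20 kbps floor:
print(serialization_ms(64, 20.0))     # -> 25.6 ms per hop
# The 1526-octet maximum packet at the same rate:
print(serialization_ms(1526, 20.0))   # -> 610.4 ms per hop
```

At the 20 kbps floor a maximum-size packet occupies a hop for over 600 ms, so the 120 msec response and route-discovery bounds of this document are only plausible with small payloads or higher effective rates.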
11.3.3. High Speed Downloads

Devices receiving a download MAY cease normal operation, but upon completion of the download MUST automatically resume normal operation.

11.3.4. Interference Mitigation

The network MUST automatically detect interference and migrate the network to a better 802.15.4 channel to improve communication. Channel changes and the nodes' response to the channel change MUST occur within 60 seconds.

11.3.5. Real-time Performance Measures

A node transmitting a 'request with expected reply' to another node MUST send the message to the destination and receive the response in not more than 120 msec. This response time SHOULD be achievable with 5 or fewer hops in each direction. This requirement assumes network quiescence and a negligible turnaround time at the destination node.

11.3.6. Packet Reliability

Reliability MUST meet the following minimum criteria:

   < 1% MAC layer errors on all messages, after no more than three
   retries;

   < 0.1% network layer errors on all messages, after no more than
   three additional retries;

   < 0.01% application layer errors on all messages.

Therefore application layer messages will fail no more than once every 10,000 messages.

11.3.7. Merging Commissioned Islands

Subsystems are commissioned by various vendors at various times during building construction. These subnetworks MUST seamlessly merge into networks, and networks MUST seamlessly merge into internetworks, since the end user wants a holistic view of the system.

11.3.8. Adjustable System Table Sizes

Routing MUST support adjustable router table entry sizes on a per-node basis to maximize the limited RAM in the devices.

Authors' Addresses

Jerry Martocci
Johnson Controls Inc.
507 E. Michigan Street
Milwaukee, Wisconsin, 53202
USA

Phone: 414.524.4010
Email: jerald.p.martocci@jci.com

Nicolas Riou
Schneider Electric
Technopole 38TEC T3
37 quai Paul Louis Merlin
38050 Grenoble Cedex 9
France

Phone: +33 4 76 57 66 15
Email: nicolas.riou@fr.schneider-electric.com

Pieter De Mil
Ghent University - IBCN
G. Crommenlaan 8 bus 201
Ghent 9050
Belgium

Phone: +32-9331-4981
Fax: +32-9331-4899
Email: pieter.demil@intec.ugent.be

Wouter Vermeylen
Arts Centre Vooruit
???
Ghent 9000
Belgium

Phone: ???
Fax: ???
Email: wouter@vooruit.be

Intellectual Property Statement

The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights. Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.

Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.

The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.
Disclaimer of Validity

This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Copyright Statement

Copyright (C) The IETF Trust (2008).

This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

Acknowledgment

Funding for the RFC Editor function is currently provided by the Internet Society.