1 Networking Working Group J. Martocci, Ed. 2 Internet-Draft Johnson Controls Inc. 3 Intended status: Informational Pieter De Mil 4 Expires: July 28, 2010 Ghent University IBCN 5 W. Vermeylen 6 Arts Centre Vooruit 7 Nicolas Riou 8 Schneider Electric 9 January 28, 2010 11 Building Automation Routing Requirements in Low Power and Lossy 12 Networks 13 draft-ietf-roll-building-routing-reqs-09 15 Status of this Memo 17 This Internet-Draft is submitted to IETF in full conformance with the 18 provisions of BCP 78 and BCP 79. 20 Internet-Drafts are working documents of the Internet Engineering 21 Task Force (IETF), its areas, and its working groups. Note that 22 other groups may also distribute working documents as Internet- 23 Drafts. 25 Internet-Drafts are draft documents valid for a maximum of six months 26 and may be updated, replaced, or obsoleted by other documents at any 27 time. It is inappropriate to use Internet-Drafts as reference 28 material or to cite them other than as "work in progress." 30 The list of current Internet-Drafts can be accessed at 31 http://www.ietf.org/ietf/1id-abstracts.txt.
33 The list of Internet-Draft Shadow Directories can be accessed at 34 http://www.ietf.org/shadow.html. 36 This Internet-Draft will expire on July 28, 2010. 38 Copyright Notice 40 Copyright (c) 2009 IETF Trust and the persons identified as the 41 document authors. All rights reserved. 43 This document is subject to BCP 78 and the IETF Trust's Legal 44 Provisions Relating to IETF Documents 45 (http://trustee.ietf.org/license-info) in effect on the date of 46 publication of this document. Please review these documents 47 carefully, as they describe your rights and restrictions with respect 48 to this document. Code Components extracted from this document must 49 include Simplified BSD License text as described in Section 4.e of 50 the Trust Legal Provisions and are provided without warranty as 51 described in the Simplified BSD License. 53 This document may contain material from IETF Documents or IETF 54 Contributions published or made publicly available before November 55 10, 2008. The person(s) controlling the copyright in some of this 56 material may not have granted the IETF Trust the right to allow 57 modifications of such material outside the IETF Standards Process. 58 Without obtaining an adequate license from the person(s) controlling 59 the copyright in such materials, this document may not be modified 60 outside the IETF Standards Process, and derivative works of it may 61 not be created outside the IETF Standards Process, except to format 62 it for publication as an RFC or to translate it into languages other 63 than English. 65 Abstract 67 The Routing Over Low power and Lossy network (ROLL) Working Group has 68 been chartered to work on routing solutions for Low Power and Lossy 69 networks (LLN) in various markets: Industrial, Commercial (Building), 70 Home and Urban networks. Pursuant to this effort, this document 71 defines the IPv6 routing requirements for building automation. 73 Requirements Language 75 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 76 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 77 document are to be interpreted as described in (RFC2119). 79 Table of Contents 81 1. Terminology....................................................4 82 2. Introduction...................................................4 83 3. Overview of Building Automation Networks.......................6 84 3.1. Introduction..............................................6 85 3.2. Building Systems Equipment................................7 86 3.2.1. Sensors/Actuators....................................7 87 3.2.2. Area Controllers.....................................7 88 3.2.3. Zone Controllers.....................................7 89 3.3. Equipment Installation Methods............................8 90 3.4. Device Density............................................8 91 3.4.1. HVAC Device Density..................................9 92 3.4.2. Fire Device Density..................................9 93 3.4.3. Lighting Device Density..............................9 94 3.4.4. Physical Security Device Density.....................9 95 4. Traffic Pattern...............................................10 96 5. Building Automation Routing Requirements......................11 97 5.1. Device and Network Commissioning.........................12 98 5.1.1. Zero-Configuration Installation.....................12 99 5.1.2. Local Testing.......................................12 100 5.1.3. Device Replacement..................................12 101 5.2. 
Scalability..............................................13 102 5.2.1. Network Domain......................................13 103 5.2.2. Peer-to-Peer Communication..........................13 104 5.3. Mobility.................................................13 105 5.3.1. Mobile Device Requirements..........................14 106 5.4. Resource Constrained Devices.............................15 107 5.4.1. Limited Memory Footprint on Host Devices............15 108 5.4.2. Limited Processing Power for Routers................15 109 5.4.3. Sleeping Devices....................................15 110 5.5. Addressing...............................................16 111 5.6. Manageability............................................16 112 5.6.1. Diagnostics.........................................17 113 5.6.2. Route Tracking......................................17 114 5.7. Route Selection..........................................17 115 5.7.1. Route Cost..........................................17 116 5.7.2. Route Adaptation....................................18 117 5.7.3. Route Redundancy....................................18 118 5.7.4. Route Discovery Time................................18 119 5.7.5. Route Preference....................................18 120 5.7.6. Real-time Performance Measures......................18 121 5.7.7. Prioritized Routing.................................18 122 5.8. Security Requirements....................................19 123 5.8.1. Building Security Use Case..........................19 124 5.8.2. Authentication......................................20 125 5.8.3. Encryption..........................................20 126 5.8.4. Disparate Security Policies.........................21 127 5.8.5. Routing Security Policies To Sleeping Devices.......21 128 6. Security Considerations.......................................21 129 7. IANA Considerations...........................................22 130 8. Acknowledgments...............................................22 131 9. Disclaimer for pre-RFC5378 work...............................22 132 10. References...................................................22 133 10.1. Normative References....................................22 134 10.2. Informative References..................................23 135 11. Appendix A: Additional Building Requirements.................23 136 11.1. Additional Commercial Product Requirements..............23 137 11.1.1. Wired and Wireless Implementations.................23 138 11.1.2. World-wide Applicability...........................23 139 11.2. Additional Installation and Commissioning Requirements..23 140 11.2.1. Unavailability of an IP network....................23 142 11.3. Additional Network Requirements.........................23 143 11.3.1. TCP/UDP............................................23 144 11.3.2. Interference Mitigation............................24 145 11.3.3. Packet Reliability.................................24 146 11.3.4. Merging Commissioned Islands.......................24 147 11.3.5. Adjustable Routing Table Sizes.....................24 148 11.3.6. Automatic Gain Control.............................24 149 11.3.7. Device and Network Integrity.......................24 150 11.4. Additional Performance Requirements.....................25 151 11.4.1. Data Rate Performance..............................25 152 11.4.2. Firmware Upgrades..................................25 153 11.4.3. Route Persistence..................................25 154 12. 
Authors' Addresses...........................................26

1. Terminology

For a description of the terminology used in this specification, please see [I-D.ietf-roll-terminology].

2. Introduction

The Routing Over Low power and Lossy network (ROLL) Working Group has been chartered to work on routing solutions for Low Power and Lossy networks (LLN) in various markets: Industrial, Commercial (Building), Home and Urban networks. Pursuant to this effort, this document defines the IPv6 routing requirements for building automation.

Commercial buildings have been fitted with pneumatic and subsequently electronic communication routes connecting sensors to their controllers for over one hundred years. Recent economic and technical advances in wireless communication allow facilities to increasingly utilize a wireless solution in lieu of a wired solution, thereby reducing installation costs while maintaining highly reliable communication.

The cost benefits and ease of installation of wireless sensors allow customers to further instrument their facilities with additional sensors, providing tighter control while yielding increased energy savings.

Wireless solutions will be adapted from their existing wired counterparts in many of the building applications including, but not limited to, Heating, Ventilation, and Air Conditioning (HVAC), Lighting, Physical Security, Fire, and Elevator/Lift systems. These devices will be developed to reduce installation costs while increasing installation and retrofit flexibility, as well as increasing the sensing fidelity to improve efficiency and building service quality.

Sensing devices may be battery-less, battery-powered, or mains-powered. Actuators and area controllers will be mains-powered. Due to building code and/or device density (e.g., equipment rooms), it is envisioned that a mix of wired and wireless sensors and actuators will be deployed within a building.

Facility Management Systems (FMS) are deployed in a large set of vertical markets including universities; hospitals; government facilities; Kindergarten through High School (K-12); pharmaceutical manufacturing facilities; and single-tenant or multi-tenant office buildings. These buildings range in size from 100K sq. ft. structures (5-story office buildings), to 1M sq. ft. skyscrapers (100 stories), to complex government facilities such as the Pentagon. The topology described here is meant to serve as the model for all these types of environments, but clearly must be tailored to the building class, building tenant and vertical market being served.

Section 3 describes the necessary background to understand the context of building automation, including the sensor, actuator, area controller and zone controller layers of the topology; typical device density; and installation practices.

Section 4 defines the traffic flow of the aforementioned sensors, actuators and controllers in commercial buildings.

Section 5 defines the full set of IPv6 routing requirements for commercial buildings.

Appendix A documents important commercial building requirements that are out of scope for routing yet will be essential to the final acceptance of the protocols used within the building.

Section 3 and Appendix A are mainly included for educational purposes.
The expressed aim of this document is to provide the set of IPv6 routing requirements for LLNs in buildings as described in Section 5.

3. Overview of Building Automation Networks

3.1. Introduction

To understand the network system requirements of a facility management system in a commercial building, this document uses a framework to describe the basic functions and composition of the system. An FMS is a hierarchical system of sensors, actuators, controllers and user interface devices that interoperate to provide a safe and comfortable environment while constraining energy costs.

An FMS is divided functionally across similar, yet distinct, building subsystems such as heating, ventilation and air conditioning (HVAC); Fire; Security; Lighting; Shutters; and Elevator/Lift control systems, as denoted in Figure 1.

Much of the makeup of an FMS is optional and installed at the behest of the customer. Sensors and actuators have no standalone functionality. All other devices support partial or complete standalone functionality. These devices can optionally be tethered to form a more cohesive system. The customer requirements dictate the level of integration within the facility. This architecture provides excellent fault tolerance since each node is designed to operate in an independent mode if the higher layers are unavailable.

                 +------+  +-----+  +------+  +------+  +------+  +------+
   Bldg App'ns   |      |  |     |  |      |  |      |  |      |  |      |
                 |      |  |     |  |      |  |      |  |      |  |      |
   Building Cntl |      |  |     |  |  S   |  |  L   |  |  S   |  |  E   |
                 |      |  |     |  |  E   |  |  I   |  |  H   |  |  L   |
   Area Control  |  H   |  |  F  |  |  C   |  |  G   |  |  U   |  |  E   |
                 |  V   |  |  I  |  |  U   |  |  H   |  |  T   |  |  V   |
   Zone Control  |  A   |  |  R  |  |  R   |  |  T   |  |  T   |  |  A   |
                 |  C   |  |  E  |  |  I   |  |  I   |  |  E   |  |  T   |
   Actuators     |      |  |     |  |  T   |  |  N   |  |  R   |  |  O   |
                 |      |  |     |  |  Y   |  |  G   |  |  S   |  |  R   |
   Sensors       |      |  |     |  |      |  |      |  |      |  |      |
                 +------+  +-----+  +------+  +------+  +------+  +------+

                 Figure 1: Building Systems and Devices

3.2. Building Systems Equipment

3.2.1. Sensors/Actuators

As Figure 1 indicates, an FMS may be composed of many functional stacks or silos that are interoperably woven together via Building Applications. Each silo has an array of sensors that monitor the environment and actuators that affect the environment as determined by the upper layers of the FMS topology. The sensors typically are at the edge of the network structure, providing environmental data to the system. The actuators are the sensors' counterparts, modifying the characteristics of the system based on the sensor data and the applications deployed.

3.2.2. Area Controllers

An area describes a small physical locale within a building, typically a room. HVAC (temperature and humidity) and Lighting (room lighting, shades, solar loads) vendors often deploy area controllers. Area controllers are fed by sensor inputs that monitor the environmental conditions within the room. Common sensors found in many rooms that feed the area controllers include temperature, occupancy, lighting load, solar load and relative humidity. Sensors found in specialized rooms (such as chemistry labs) might include air flow, pressure, CO2 and CO particle sensors. Room actuation includes temperature setpoint, lights and blinds/curtains.

3.2.3. Zone Controllers

Zone Control supports a similar set of characteristics to Area Control, albeit over an extended space.
A zone is normally a logical grouping or functional division of a commercial building. A zone may also coincidentally map to a physical locale such as a floor.

Zone Control may have direct sensor inputs (smoke detectors for fire), controller inputs (room controllers for air-handlers in HVAC) or both (door controllers and tamper sensors for security). Like area/room controllers, zone controllers are standalone devices that operate independently or may be attached to the larger network for more synergistic control.

3.3. Equipment Installation Methods

An FMS is installed very differently from most other IT networks. IT networks are typically installed as an overlay onto the existing environment and are installed from the inside out. That is, the network wiring infrastructure is installed; the switches, routers and servers are connected and made operational; and finally the endpoints (e.g., PCs, VoIP phones) are added.

FMS systems, on the other hand, are installed from the outside in. That is, the endpoints (thermostats, lights, smoke detectors) are installed in the spaces first; local control is established in each room and tested for proper operation. The individual rooms are later lashed together into a subsystem (e.g., Lighting). The individual subsystems (e.g., lighting, HVAC) then coalesce. Later, the entire system may be merged onto the enterprise network.

The rationale for this is partly that the different construction trades have access to a building under construction at different times. The sheer size of a building often dictates that even a single trade may have multiple independent teams working simultaneously. Furthermore, the HVAC, lighting and fire systems must be fully operational before the building can obtain its occupancy permit. Hence, the FMS must be in place and configured well before any of the IT servers (DHCP, AAA, DNS, etc.) are operational.

This implies that the FMS cannot rely on the availability of the IT network infrastructure or application servers. Rather, the FMS installation should be planned to dovetail with the IT system once it is available, allowing easy migration onto the IT network. Front-end planning of available switch ports, cable runs, AP placement, firewalls and security policies will facilitate this adoption.

3.4. Device Density

Device density differs depending on the application and on local building code requirements. The following sections detail typical installation densities for different applications.

3.4.1. HVAC Device Density

HVAC room applications typically have sensors/actuators and controllers spaced about 50 ft apart. In most cases there is a 3:1 ratio of sensors/actuators to controllers. That is, for each room there is an installed temperature sensor, flow sensor and damper actuator for the associated room controller.

HVAC equipment room applications are quite different. An air handler system may have a single controller with upwards of 25 sensors and actuators within 50 ft of the air handler. A chiller or boiler is also controlled with a single equipment controller instrumented with 25 sensors and actuators. Each of these devices is individually addressed, since the devices are mandated or optional as defined by the specified HVAC application. Air handlers typically serve one or two floors of the building. Chillers and boilers may be installed per floor, but often service a wing, building or the entire complex via a central plant.

These numbers are typical. In special cases, such as clean rooms, operating rooms, pharmaceutical facilities and labs, the ratio of sensors to controllers can increase by a factor of three. Tenant installations such as malls would opt for packaged units, where much of the sensing and actuation is integrated into the unit. Here a single device address would serve the entire unit.

3.4.2. Fire Device Density

Fire systems are much more uniformly installed, with smoke detectors installed about every 50 feet. This is dictated by local building codes. Fire pull boxes are installed uniformly about every 150 feet. A fire controller will service a floor or wing. The fireman's fire panel will service the entire building and typically is installed in the atrium.

3.4.3. Lighting Device Density

Lighting is also very uniformly installed, with ballasts installed approximately every 10 feet. A lighting panel typically serves 48 to 64 zones. Wired systems tether many lights together into a single zone. Wireless systems configure each fixture independently to increase flexibility and reduce installation costs.

3.4.4. Physical Security Device Density

Security systems are non-uniformly distributed, with heavy density near doors and windows and lighter density in the building interior space.

The recent influx of interior and perimeter camera systems is increasing the security footprint. These cameras are atypical endpoints requiring upwards of 1 megabit/second (Mbit/s) data rates per camera, in contrast to the few kbit/s needed by most other FMS sensing equipment. Previously, camera systems had been deployed on proprietary wired high-speed networks. More recent implementations utilize wired or wireless IP cameras integrated into the enterprise LAN.
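The spacing figures above can be combined into a rough per-floor device count. The short Python sketch below is illustrative only: the 200 ft x 100 ft floor and the square-grid layout are assumptions made for the example, while the spacing and ratio figures are taken from the HVAC, fire and lighting densities described above.

   import math

   # Hypothetical floor used only for illustration.
   FLOOR_LENGTH_FT = 200
   FLOOR_WIDTH_FT = 100

   def grid_count(spacing_ft):
       # Devices laid out on a square grid with the given spacing.
       return (math.ceil(FLOOR_LENGTH_FT / spacing_ft) *
               math.ceil(FLOOR_WIDTH_FT / spacing_ft))

   hvac_rooms      = grid_count(50)      # room clusters spaced ~50 ft apart
   hvac_devices    = hvac_rooms * 4      # 3 sensors/actuators per room controller
   smoke_detectors = grid_count(50)      # ~every 50 feet
   fire_pulls      = grid_count(150)     # ~every 150 feet
   ballasts        = grid_count(10)      # ~every 10 feet, addressed per fixture

   total = hvac_devices + smoke_detectors + fire_pulls + ballasts
   print(f"HVAC {hvac_devices}, fire {smoke_detectors + fire_pulls}, "
         f"lighting {ballasts}, total {total}")

Even with these coarse assumptions, a single floor reaches a few hundred devices, which is consistent with the subnetwork and network sizes cited later in Section 5.2.1.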
4. Traffic Pattern

The independent nature of the automation subsystems within a building weighs heavily on the network traffic patterns. Much of the real-time sensor environmental data and actuator control stays within the local LLN environment, while alarm and other event data percolate to higher layers.

Each sensor in the LLN unicasts P2P about 200 bytes of sensor data to its associated controller each minute and expects an application acknowledgment unicast returned from the destination. Each controller unicasts messages at a nominal rate of 6 kB/min to peer or supervisory controllers. 30% of each node's packets are destined for other nodes within the LLN. 70% of each node's packets are destined for an aggregation device (MP2P) and routed off the LLN. These messages also require a unicast acknowledgment from the destination. The above values assume direct node-to-node communication; meshing and error retransmissions are not considered.

Multicasts (P2MP) to all nodes in the LLN occur for node and object discovery when the network is first commissioned. This data is typically a one-time bind that is henceforth persisted. Lighting systems will also readily use multicasting during normal operations to turn banks of lights 'on' and 'off' simultaneously.
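For illustration only, the nominal unicast figures above can be turned into a back-of-the-envelope estimate of quiescent load on one LLN. In the Python sketch below, the node counts and the acknowledgment size are assumptions chosen for the example; the per-node rates and the 70% MP2P share come from the description above, and meshing and retransmissions are again ignored.

   SENSORS     = 200    # assumed number of sensors in the LLN
   CONTROLLERS = 20     # assumed number of controllers in the LLN
   ACK_BYTES   = 25     # assumed application-acknowledgment size

   sensor_bpm     = SENSORS * (200 + ACK_BYTES)   # 200-byte report + ack, once per minute
   controller_bpm = CONTROLLERS * 6000            # ~6 kB/min per controller

   total_bpm   = sensor_bpm + controller_bpm
   off_lln_bpm = 0.70 * total_bpm                 # MP2P traffic routed off the LLN

   print(f"total load  : {total_bpm / 60:.0f} bytes/s")
   print(f"off the LLN : {off_lln_bpm / 60:.0f} bytes/s toward the aggregation point")

Under these assumptions the quiescent load is a few kilobytes per second, most of it funneling toward the aggregation point, which is where the bottlenecks described later in this section tend to form.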
An FMS may be either polled or event-based. Polled data systems will generate a uniform and constant packet load on the network. Polled architectures, however, have proven not to be scalable. Today, most vendors have developed event-based systems that pass data upon an event. These systems are highly scalable and generate little data on the network at quiescence. Unfortunately, these systems will generate a heavy load on startup, since all initial sensor data must migrate to the controller level. They will also generate a temporary but heavy load during firmware upgrades. This latter load can normally be mitigated by performing these downloads during off-peak hours.

Devices will also need to reference peers periodically for sensor data or to coordinate operation across systems. Normally, though, data will migrate from the sensor level upwards through the local, area and then supervisory level. Traffic bottlenecks will typically form at the funnel point from the area controllers to the supervisory controllers.

Initial system startup after a controlled outage or unexpected power failure puts tremendous stress on the network and on the routing algorithms. An FMS comprises a myriad of control algorithms at the room, area, zone, and enterprise layers. When these control algorithms are at quiescence, the real-time data rate is small and the network will not saturate. An overall network traffic load of 6 kB/s is typical at quiescence. However, upon any power loss, the control loops and real-time data quickly atrophy. A short power disruption of only ten minutes may have a long-term deleterious impact on the building control systems, taking many hours to regain proper control. Control applications that cannot handle this level of disruption (e.g., hospital operating rooms) must be fitted with a secondary power source.

Power disruptions are unexpected and in most cases will immediately impact line-powered devices. Power disruptions, however, are transparent to battery-powered devices. These devices will continue to attempt to access the LLN during the outage. Battery-powered devices designed to buffer undelivered data will further stress network operation when power returns.

Upon restart, line-powered devices will naturally restart at slightly different times due to primary equipment delays or variance in the device self-tests. However, most line-powered devices will be ready to access the LLN within 10 seconds of power up. Empirical testing indicates that routes acquired during startup will tend to be very circuitous, since the available neighbor lists are incomplete. This demands an adaptive routing protocol to allow for route optimization as the network stabilizes.

5. Building Automation Routing Requirements

Following are the building automation routing requirements for networks used to integrate building sensor, actuator and control products. These requirements are written without presuming any preordained network topology, physical medium (wired) or radio technology (wireless).

5.1. Device and Network Commissioning

Building control systems typically are installed and tested by electricians who have little computer knowledge and no network communication knowledge whatsoever. These systems are often installed during the building construction phase, before the drywall and ceilings are in place.
For new construction projects, the building enterprise IP network is not in place during installation of the building control system. For retrofit applications, the installer will still operate independently from the IP network so as not to affect network operations during the installation phase.

In traditional wired systems, correct operation of a light switch/ballast pair was as simple as flipping on the light switch. In wireless applications, the tradesperson has to assure the same operation, while also being sure that the light switch is associated with the proper ballast.

System-level commissioning will later be performed by a more computer-savvy person with access to a commissioning device (e.g., a laptop computer). The completely installed and commissioned enterprise IP network may or may not be in place at this time. Following are the installation routing requirements.

5.1.1. Zero-Configuration Installation

It MUST be possible to fully commission network devices without requiring any additional commissioning device (e.g., laptop). From the ROLL perspective, zero-configuration means that a node can obtain an address and join the network on its own, without human intervention.

5.1.2. Local Testing

During installation, the room sensors, actuators and controllers SHOULD be able to route packets amongst themselves and to any other device within the LLN without requiring any additional routing infrastructure or routing configuration.

5.1.3. Device Replacement

To eliminate the need to reconfigure the application upon replacing a failed device in the LLN, the replacement device must be able to advertise the old IP address of the failed device in addition to its new IP address. The routing protocols MUST support hosts and routers that advertise multiple IPv6 addresses.

5.2. Scalability

Building control systems are designed for facilities from 50,000 sq. ft. to 1M+ sq. ft. The networks that support these systems must cost-effectively scale accordingly. In larger facilities, installation may occur simultaneously on various wings or floors, yet the end system must seamlessly merge. Following are the scalability requirements.

5.2.1. Network Domain

The routing protocol MUST be able to support networks with at least 2000 nodes, where 1000 nodes would act as routers and the other 1000 nodes would be hosts. Subnetworks (e.g., rooms, primary equipment) within the network must support upwards of 255 sensors and/or actuators.

5.2.2. Peer-to-Peer Communication

The data domain for a commercial FMS may sprawl across a vast portion of the physical domain. For example, a chiller may reside in the facility's basement due to its size, yet the associated cooling towers will reside on the roof. The cold-water supply and return pipes serpentine through all the intervening floors. The feedback control loops for these systems require data from across the facility.

A network device MUST be able to communicate in an end-to-end manner with any other device on the network. Thus, the routing protocol MUST provide routes between arbitrary hosts within the appropriate administrative domain.

5.3. Mobility

Most devices are affixed to walls or installed on ceilings within buildings. Hence, the mobility requirements for commercial buildings are few.
However, in wireless environments, location tracking of occupants and assets is gaining favor. Asset tracking applications, such as tracking capital equipment (e.g., wheelchairs) in medical facilities, require monitoring movement with a granularity of a minute; however, tracking babies in a pediatric ward would require latencies of less than a few seconds.

The following subsections document the mobility requirements in the routing layer for mobile devices. Note, however, that mobility can be implemented at various layers of the system, and the specific requirements depend on the chosen layer. For instance, some devices may not depend on a static IP address and are capable of re-establishing application-level communications when given a new IP address. Alternatively, Mobile IP may be used, or the set of routers in a building may give the impression of a building-wide network and allow devices to retain their addresses regardless of where they are, handling routing between the devices in the background.

5.3.1. Mobile Device Requirements

To minimize network dynamics, mobile devices, while in motion, should not be allowed to act as forwarding devices (routers) for other devices in the LLN. Network configuration should allow devices to be configured as routers or hosts.

5.3.1.1. Device Mobility within the LLN

An LLN typically spans a single floor in a commercial building. Mobile devices may move within this LLN. For example, a wheelchair may be moved from one room on the floor to another room on the same floor.

A mobile LLN device that moves within the confines of the same LLN SHOULD reestablish end-to-end communication to a fixed device also in the LLN within 5 seconds after it ceases movement. The LLN convergence time should be less than 10 seconds once the mobile device stops moving.

5.3.1.2. Device Mobility across LLNs

A mobile device may move across LLNs, such as a wheelchair being moved to a different floor.

A mobile device that moves outside its original LLN SHOULD reestablish end-to-end communication to a fixed device also in the new LLN within 10 seconds after the mobile device ceases movement. The network convergence time should be less than 20 seconds once the mobile device stops moving.

5.4. Resource Constrained Devices

Sensing and actuator device processing power and memory may be four orders of magnitude less (i.e., 10,000x) than those of many more traditional client devices on an IP network. The routing mechanisms must therefore be tailored to fit these resource-constrained devices.

5.4.1. Limited Memory Footprint on Host Devices

The software for non-routing devices (e.g., sleeping sensors and actuators) SHOULD be implementable on 8-bit devices with no more than 128KB of memory.

5.4.2. Limited Processing Power for Routers

The software for routing devices (e.g., room controllers) SHOULD be implementable on 8-bit devices with no more than 256KB of flash memory.

5.4.3. Sleeping Devices

Sensing devices will, in some cases, utilize battery power or energy-harvesting techniques for power and will operate mostly in a sleep mode to maintain power consumption within a modest budget. The routing protocol MUST take into account device characteristics such as power budget.
Typically, sensor battery life (2000 mAh) needs to extend for at least 5 years when the device is transmitting its data (200 octets) once per minute over a low-power transceiver (25 mA) and expecting an application acknowledgment. In this case, the transmitting device must leave its receiver in a high-powered state awaiting the return of the application ACK. To minimize this latency, a highly efficient routing protocol that minimizes hops, and hence end-to-end communication time, is required. The routing protocol MUST take into account node properties such as 'low-powered node' and produce efficient, low-latency routes that minimize radio 'on' time for these devices.

Sleeping devices MUST be able to receive inbound data. Messages sent to battery-powered nodes MUST be buffered and retried by the last-hop router for a period of at least 20 seconds when the destination node is currently in its sleep cycle.
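For illustration, the figures above imply a very small radio duty cycle. The following Python sketch is a rough budget only; it ignores sleep current and non-radio loads, and the 365-day year is an assumption made for the example.

   BATTERY_MAH   = 2000.0          # battery capacity from the paragraph above
   LIFETIME_H    = 5 * 365 * 24    # five-year battery life, in hours
   RADIO_MA      = 25.0            # transceiver draw while transmitting or listening
   REPORTS_PER_H = 60              # one 200-octet report per minute

   avg_budget_ma      = BATTERY_MAH / LIFETIME_H    # average draw the battery can sustain
   duty_cycle         = avg_budget_ma / RADIO_MA    # fraction of time the radio may be on
   radio_s_per_report = duty_cycle * 3600 / REPORTS_PER_H

   print(f"average budget : {avg_budget_ma * 1000:.0f} uA")
   print(f"radio duty     : {duty_cycle:.2%}")
   print(f"per report     : {radio_s_per_report * 1000:.0f} ms to transmit and await the ACK")

Roughly 100 ms of radio time per report has to cover both the transmission and the wait for the application acknowledgment, which is why minimal-hop, low-latency routes matter so much for these devices.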
5.5. Addressing

Facility Management Systems require different communication schemes to solicit or post network information. Multicasts or anycasts need to be used to resolve unresolved references within a device when the device first joins the network.

As with any network communication, multicasting should be minimized. This is especially a problem for small embedded devices with limited network bandwidth. Multicasts are typically used for network joins and application binding in embedded systems. Routing MUST support anycast, unicast, and multicast.

5.6. Manageability

As previously noted in Section 3.3, installation of LLN devices follows a bottom-up workflow. Edge devices are installed first and tested for communication and application integrity. These devices are then aggregated into islands, then LLNs, and later affixed onto the enterprise network.

The need for diagnostics most often occurs during the installation and commissioning phase, although at times diagnostic information may be requested during normal operation. Battery-powered wireless devices will typically have a self-diagnostic mode that can be initiated via a button press on the device. The device will display its link status and/or end-to-end connectivity when the button is depressed. Line-powered devices will continuously display communication status via a bank of LEDs, possibly denoting signal strength and end-to-end application connectivity.

The local diagnostics noted above are often suitable for defining room-level networks. However, as these devices aggregate, system-level diagnostics may need to be executed to ameliorate route vacillation, excessive hops, communication retries and/or network bottlenecks.

On operational networks, due to the mission-critical nature of the application, the LLN devices will be monitored over time by the higher layers to assure that communication integrity is maintained. Failure to maintain this communication will result in an alarm being forwarded to the enterprise network from the monitoring node for analysis and remediation.

In addition to the initial installation and commissioning of the system, it is equally important for the ongoing maintenance of the system to be simple and inexpensive. This implies a straightforward device swap when a failed device is replaced, as noted in Section 5.1.3.

5.6.1. Diagnostics

To improve diagnostics, the routing protocol SHOULD be able to be placed in and out of 'verbose' mode. Verbose mode is a temporary debugging mode that provides additional communication information, including at least the total number of routed packets sent and received, the number of routing failures (no route available), neighbor table members, and routing table entries. The data provided in verbose mode should be sufficient for a network connection graph to be constructed and maintained by the monitoring node.

Diagnostic data should be kept by the routers continuously and be available for solicitation at any time by any other node on the internetwork. Verbose mode will be activated/deactivated via a unicast, multicast or other means. Devices having available resources may elect to support verbose mode continually.

5.6.2. Route Tracking

Route diagnostics SHOULD be supported, providing information such as route quality, number of hops, and available alternate active routes with associated costs. Route quality is the relative measure of 'goodness' of the selected source-to-destination route as compared to alternate routes. This composite value may be measured as a function of hop count, signal strength, available power, existing active routes or any other criteria deemed by ROLL as the route cost differentiator.

5.7. Route Selection

Route selection determines the reliability and quality of the communication among the devices by optimizing routes over time and resolving any anomalies that develop at system startup, when nodes are asynchronously adding themselves to the network.

5.7.1. Route Cost

The routing protocol MUST support a metric of route quality and optimize selection according to such metrics within constraints established for links along the routes. These metrics SHOULD reflect characteristics such as signal strength, available bandwidth, hop count, energy availability and communication error rates.

5.7.2. Route Adaptation

Communication routes MUST be adaptive and converge over time toward optimality of the chosen metric (e.g., signal quality, hop count).

5.7.3. Route Redundancy

The routing layer SHOULD be configurable to allow secondary and tertiary routes to be established and used upon failure of the primary route.

5.7.4. Route Discovery Time

Mission-critical commercial applications (e.g., Fire, Security) require reliable communication and guaranteed end-to-end delivery of all messages in a timely fashion. Application-layer time-outs must be selected judiciously to cover anomalous conditions such as lost packets and/or route discoveries, yet not be set so large as to overdamp the network response. If route discovery occurs at packet transmission time (reactive routing), it SHOULD NOT add more than 120 ms of latency to the packet delivery time.

5.7.5. Route Preference

The routing protocol SHOULD allow for the support of manually configured static preferred routes.

5.7.6. Real-time Performance Measures

A node transmitting a 'request with expected reply' to another node must send the message to the destination and receive the response in not more than 120 ms. This response time should be achievable with 5 or fewer hops in each direction. This requirement assumes network quiescence and a negligible turnaround time at the destination node.
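As a simple illustration of what this requirement implies per hop (an interpretation, not an additional requirement), the round-trip budget can be divided across the worst-case hop count:

   ROUND_TRIP_MS     = 120   # request plus reply, from the requirement above
   MAX_HOPS_EACH_WAY = 5     # worst case assumed by the requirement

   per_hop_ms = ROUND_TRIP_MS / (2 * MAX_HOPS_EACH_WAY)
   print(f"about {per_hop_ms:.0f} ms per hop for queuing, transmission and forwarding")

With quiescent traffic and negligible turnaround at the destination, each hop is therefore left with roughly 12 ms for queuing, transmission and forwarding.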
5.7.7. Prioritized Routing

Network and application packet routing prioritization must be supported to assure that mission-critical applications (e.g., Fire Detection) are not deferred while less critical applications access the network. The routing protocol MUST be able to provide routes with different characteristics, also referred to as "QoS" routing.

5.8. Security Requirements

Due to the variety of buildings and tenants, an FMS must be completely configurable on-site.

Due to the quantity of the FMS devices (1000s) and their inaccessibility (oftentimes above the ceilings), security configuration over the network is preferred to local configuration.

Wireless encryption and device authentication security policies need to be considered in commercial buildings, while keeping in mind the impact on the limited processing capabilities of, and the additional latency incurred on, the sensors, actuators and controllers.

An FMS is typically highly configurable in the field, and hence the security policy is most often dictated by the type of building in which the FMS is being installed. Single-tenant, owner-occupied office buildings installing lighting or HVAC control are candidates for implementing a low level of security on the LLN. In contrast, military or pharmaceutical facilities require strong security policies.

5.8.1. Building Security Use Case

LLNs for commercial building applications would always implement and use encrypted packets. However, depending on the state of the LLN, the security keys may be:

1) a key obtained from a trust center already operable on the LLN;

2) a pre-shared static key as defined by the general contractor or its designee; or

3) a well-known default static key.

Unless a node entering the network has previously received its credentials from the trust center, the entering node will try to solicit the trust center for the network key. If the trust center is accessible, the trust center will MAC authenticate the entering node and return the security keys. If the trust center is not available, the entering node will check whether it has been given a network key by an out-of-band means and use it to access the network. If no network key has been configured in the device, it will revert to the default network key and enter the network. If neither of these keys is valid, the device will signal via a fault LED.

This approach would allow for independent, simplified commissioning, yet centralized authentication. The building owner or building type would then dictate when the trust center would be deployed. In many cases, the trust center need not be deployed until all the local room commissioning is complete. Yet, at the discretion of the owner, the trust center may be deployed from the onset, thereby trading installation and commissioning flexibility for tighter security.
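The join-time behavior described above can be restated as a short decision procedure. The Python sketch below is purely illustrative; the class, field and function names are hypothetical and do not correspond to any defined protocol interface.

   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class JoiningNode:
       provisioned_key: Optional[bytes] = None  # credentials already issued by the trust center
       preshared_key: Optional[bytes] = None    # static key configured out-of-band
       default_key: Optional[bytes] = None      # well-known default key

   def select_network_key(node, trust_center):
       # Follows the order described in the use case above.
       if node.provisioned_key:
           return node.provisioned_key
       if trust_center is not None and trust_center.reachable:
           # The trust center MAC-authenticates the node and returns the keys.
           return trust_center.authenticate(node)
       if node.preshared_key:
           return node.preshared_key
       if node.default_key:
           return node.default_key
       return None   # no usable key: the device signals a fault (e.g., an LED)

Whether a trust center is present at all is, as noted above, at the discretion of the owner; the same procedure degrades gracefully to the pre-shared or default key when it is not.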
5.8.2. Authentication

Authentication SHOULD be optional on the LLN. Authentication SHOULD be fully configurable on-site. Authentication policy and updates MUST be routable over-the-air. Authentication SHOULD occur upon joining or rejoining a network. However, once authenticated, devices SHOULD NOT need to reauthenticate with any other devices in the LLN. Packets may need authentication at the source and destination nodes; however, packets routed through intermediate hops should not need reauthentication at each hop.

These requirements mean that at least one LLN routing protocol solution specification MUST include support for authentication.

5.8.3. Encryption

5.8.3.1. Encryption Types

Data encryption of packets MUST be supported by all protocol solution specifications. Support can be provided by use of a network-wide key and/or an application key. The network key would apply to all devices in the LLN. The application key would apply to a subset of devices on the LLN.

The network key and application keys would be mutually exclusive. The routing protocol MUST allow routing a packet encrypted with an application key through forwarding devices without requiring each node in the route to have the application key.

5.8.3.2. Packet Encryption

The encryption policy MUST support encryption of either the payload only or the entire packet. Payload-only encryption would eliminate the decryption/re-encryption overhead at every hop, providing better real-time performance.

5.8.4. Disparate Security Policies

Due to the limited resources of an LLN, the security policy defined within the LLN MUST be able to differ from that of the rest of the IP network within the facility, yet packets MUST still be able to route to or through the LLN from/to these networks.

5.8.5. Routing Security Policies To Sleeping Devices

The routing protocol MUST gracefully handle routing of temporal security updates (e.g., dynamic keys) to sleeping devices during their 'awake' cycle, to assure that sleeping devices can readily and efficiently access the network.

6. Security Considerations

The requirements placed on the LLN routing protocol in order to provide the correct level of security support are presented in Section 5.8.

LLNs deployed in a building environment may be entirely isolated from other networks, attached to normal IP networks within the building yet physically disjoint from the wider Internet, or connected either directly or through other IP networks to the Internet. Additionally, even where no wired connectivity exists out of the building, the use of wireless infrastructure within the building means that physical connectivity to the LLN is possible for an attacker.

Therefore, it is important that any routing protocol solution designed to meet the requirements included in this document addresses the security feature requirements described in Section 5.8. The protocol specifications will require implementations to provide the level of support indicated in Section 5.8, and will encourage making that support flexibly configurable so that an operator can make a judgment about the level of security to deploy at any time.

As noted in Section 5.8, use/deployment of the different security features is intended to be optional. This means that, although the protocols developed must conform to the requirements specified, the operator is free to determine the level of risk and the trade-offs against performance. An implementation must not make those choices on behalf of the operator by avoiding implementing any mandatory-to-implement security features.
This informational requirements specification introduces no new security concerns.

7. IANA Considerations

This document includes no request to IANA.

8. Acknowledgments

In addition to the authors, JP. Vasseur, David Culler, Ted Humpal and Zach Shelby are gratefully acknowledged for their contributions to this document.

9. Disclaimer for pre-RFC5378 work

This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008. The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process. Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.

10. References

10.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2. Informative References

[I-D.ietf-roll-terminology] Vasseur, JP., "Terminology in Low power And Lossy Networks", draft-ietf-roll-terminology-00 (work in progress), October 2008.

11. Appendix A: Additional Building Requirements

Appendix A contains additional building requirements that were deemed out of scope for ROLL, yet provide ancillary substance for the reader.

11.1. Additional Commercial Product Requirements

11.1.1. Wired and Wireless Implementations

Vendors will likely not develop separate product lines for wired and wireless networks. Hence, the solutions set forth must support both wired and wireless implementations.

11.1.2. World-wide Applicability

Wireless devices must be supportable in unlicensed bands.

11.2. Additional Installation and Commissioning Requirements

11.2.1. Unavailability of an IP network

Product commissioning must be performed by an application engineer prior to the installation of the IP network (e.g., switches, routers, DHCP, DNS).

11.3. Additional Network Requirements

11.3.1. TCP/UDP

Connection-based and connectionless services must be supported.

11.3.2. Interference Mitigation

The network must automatically detect interference and seamlessly migrate the network hosts' channel to improve communication. Channel changes and the nodes' response to the channel change must occur within 60 seconds.

11.3.3. Packet Reliability

In building automation, the network is required to meet the following minimum criteria:

< 1% MAC layer errors on all messages;

after no more than three retries,

< 0.1% Network layer errors on all messages;

after no more than three additional retries,

< 0.01% Application layer errors on all messages.

Therefore, application layer messages will fail no more than once every 10,000 messages.
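Two small checks on these criteria are shown below. In the Python sketch, the raw per-attempt loss rate is an assumption used only to show the effect of the retry budget; the percentage targets are the ones listed above.

   raw_loss = 0.20                  # assumed single-attempt loss rate on a lossy link
   attempts = 1 + 3                 # original transmission plus three retries

   residual = raw_loss ** attempts  # probability that every attempt fails (independent losses)
   print(f"residual loss after {attempts} attempts: {residual:.2%}")

   for target in (0.01, 0.001, 0.0001):     # 1%, 0.1%, 0.01%
       print(f"{target:.2%} -> about one failure per {round(1 / target):,} messages")

The last line is also why the 0.01% application-layer target corresponds to roughly one failure per 10,000 messages.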
11.3.4. Merging Commissioned Islands

Subsystems are commissioned by various vendors at various times during building construction. These subnetworks must seamlessly merge into networks, and networks must seamlessly merge into internetworks, since the end user wants a holistic view of the system.

11.3.5. Adjustable Routing Table Sizes

The routing protocol must allow constrained nodes to hold an abbreviated set of routes. That is, the protocol should not mandate that the node routing tables be exhaustive.

11.3.6. Automatic Gain Control

For wireless implementations, the device radios should incorporate automatic transmit power regulation to maximize packet transfer and minimize network interference, regardless of network size or density.

11.3.7. Device and Network Integrity

Commercial building devices must all be periodically scanned to assure that each device is viable and can communicate data and alarm information as needed. Routers should maintain recent packet flow information over time to minimize overall network overhead.

11.4. Additional Performance Requirements

11.4.1. Data Rate Performance

An effective data rate of 20 kbit/s is the lowest acceptable operational data rate on the network.

11.4.2. Firmware Upgrades

To support high-speed code downloads, routing should support transports that provide parallel downloads to targeted devices yet guarantee packet delivery. In cases where the spatial position of the devices requires multiple hops, the algorithm should recurse through the network until all targeted devices have been serviced. Devices receiving a download may cease normal operation, but upon completion of the download they must automatically resume normal operation.

11.4.3. Route Persistence

To eliminate high network traffic in power-fail or brown-out conditions, previously established routes should be remembered and invoked prior to establishing new routes for those devices reentering the network.

12. Authors' Addresses

Jerry Martocci
Johnson Controls Inc.
507 E. Michigan Street
Milwaukee, Wisconsin, 53202
USA
Phone: +1 414 524 4010
Email: jerald.p.martocci@jci.com

Nicolas Riou
Schneider Electric
Technopole 38TEC T3
37 quai Paul Louis Merlin
38050 Grenoble Cedex 9
France
Phone: +33 4 76 57 66 15
Email: nicolas.riou@fr.schneider-electric.com

Pieter De Mil
Ghent University - IBCN
G. Crommenlaan 8 bus 201
Ghent 9050
Belgium
Phone: +32 9331 4981
Fax: +32 9331 4899
Email: pieter.demil@intec.ugent.be

Wouter Vermeylen
Arts Centre Vooruit
Ghent 9000
Belgium
Email: wouter@vooruit.be