Internet Engineering Task Force                            M. Ersue, Ed.
Internet-Draft                                            Nokia Networks
Intended status: Informational                              D. Romascanu
Expires: January 5, 2015                                           Avaya
                                                        J. Schoenwaelder
                                                               A. Sehgal
                                                Jacobs University Bremen
                                                            July 4, 2014

      Management of Networks with Constrained Devices: Use Cases
                  draft-ietf-opsawg-coman-use-cases-02

Abstract

   This document discusses use cases concerning the management of networks, where constrained devices are involved.  A problem statement, deployment options and the requirements on the networks with constrained devices can be found in the companion document on "Management of Networks with Constrained Devices: Problem Statement and Requirements".

Status of This Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 5, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .
. 2 58 2. Access Technologies . . . . . . . . . . . . . . . . . . . . . 4 59 2.1. Constrained Access Technologies . . . . . . . . . . . . . 4 60 2.2. Cellular Access Technologies . . . . . . . . . . . . . . 4 61 3. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 5 62 3.1. Environmental Monitoring . . . . . . . . . . . . . . . . 6 63 3.2. Infrastructure Monitoring . . . . . . . . . . . . . . . . 6 64 3.3. Industrial Applications . . . . . . . . . . . . . . . . . 7 65 3.4. Energy Management . . . . . . . . . . . . . . . . . . . . 9 66 3.5. Medical Applications . . . . . . . . . . . . . . . . . . 11 67 3.6. Building Automation . . . . . . . . . . . . . . . . . . . 12 68 3.7. Home Automation . . . . . . . . . . . . . . . . . . . . . 13 69 3.8. Transport Applications . . . . . . . . . . . . . . . . . 14 70 3.9. Community Network Applications . . . . . . . . . . . . . 16 71 3.10. Field Operations . . . . . . . . . . . . . . . . . . . . 18 72 4. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 19 73 5. Security Considerations . . . . . . . . . . . . . . . . . . . 19 74 6. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 19 75 7. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 20 76 8. Informative References . . . . . . . . . . . . . . . . . . . 20 77 Appendix A. Change Log . . . . . . . . . . . . . . . . . . . . . 21 78 A.1. draft-ietf-opsawg-coman-use-cases-01 - draft-ietf-opsawg- 79 coman-use-cases-02 . . . . . . . . . . . . . . . . . . . 21 80 A.2. draft-ietf-opsawg-coman-use-cases-00 - draft-ietf-opsawg- 81 coman-use-cases-01 . . . . . . . . . . . . . . . . . . . 22 82 A.3. draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg- 83 coman-use-cases-00 . . . . . . . . . . . . . . . . . . . 23 84 A.4. draft-ersue-constrained-mgmt-02-03 . . . . . . . . . . . 23 85 A.5. draft-ersue-constrained-mgmt-01-02 . . . . . . . . . . . 24 86 A.6. draft-ersue-constrained-mgmt-00-01 . . . . . . . . . . . 25 87 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 25 89 1. Introduction 91 Small devices with limited CPU, memory, and power resources, so 92 called constrained devices (aka. sensor, smart object, or smart 93 device) can be connected to a network. Such a network of constrained 94 devices itself may be constrained or challenged, e.g., with 95 unreliable or lossy channels, wireless technologies with limited 96 bandwidth and a dynamic topology, needing the service of a gateway or 97 proxy to connect to the Internet. In other scenarios, the 98 constrained devices can be connected to a non-constrained network 99 using off-the-shelf protocol stacks. Constrained devices might be in 100 charge of gathering information in diverse settings including natural 101 ecosystems, buildings, and factories and send the information to one 102 or more server stations. 104 Network management is characterized by monitoring network status, 105 detecting faults, and inferring their causes, setting network 106 parameters, and carrying out actions to remove faults, maintain 107 normal operation, and improve network efficiency and application 108 performance. The traditional network management application 109 periodically collects information from a set of elements that are 110 needed to manage, processes the data, and presents them to the 111 network management users. Constrained devices, however, often have 112 limited power, low transmission range, and might be unreliable. 
   They might also need to work in hostile environments with advanced security requirements or need to be used in harsh environments for a long time without supervision.  Due to such constraints, the management of a network with constrained devices offers different types of challenges compared to the management of a traditional IP network.

   This document aims to describe use cases for the management of a network in which constrained devices are involved.  The document lists and discusses diverse use cases for management from the network as well as from the application point of view.  The list of discussed use cases is not an exhaustive one, since other scenarios, currently unknown to the authors, are possible.  The application scenarios discussed aim to show where networks of constrained devices are expected to be deployed.  For each application scenario, we first briefly describe its characteristics, followed by a discussion of how network management can be provided, who is likely to be responsible for it, and on which time scale management operations are likely to be carried out.

   A problem statement, deployment and management topology options, as well as the requirements on the networks with constrained devices can be found in the companion document [COM-REQ].

   This document builds on the terminology defined in [RFC7228] and [COM-REQ].  [RFC7228] is a base document for the terminology concerning constrained devices and constrained networks.  Some use cases specific to IPv6 over Low-Power Wireless Personal Area Networks (6LoWPANs) can be found in [RFC6568].

2.  Access Technologies

   Besides the management requirements imposed by the different use cases, the access technologies used by constrained devices can impose restrictions and requirements upon the Network Management System (NMS) and protocol of choice.

   It is possible that some networks of constrained devices might utilize traditional non-constrained access technologies for network access, e.g., local area networks with plenty of capacity.  In such scenarios, it is the constrainedness of the device, rather than the access technology utilized, that presents special management restrictions and requirements.

   However, in other situations constrained or cellular access technologies might be used for network access, thereby causing management restrictions and requirements to arise as a result of the underlying access technologies.

2.1.  Constrained Access Technologies

   Due to resource restrictions, embedded devices deployed as sensors and actuators in the various use cases utilize low-power, low data-rate wireless access technologies such as IEEE 802.15.4, DECT ULE, or BT-LE for network connectivity.

   In such scenarios, it is important for the NMS to be aware of the restrictions imposed by these access technologies in order to efficiently manage these constrained devices.  Specifically, such low-power, low data-rate access technologies typically have small frame sizes.  It is therefore important for the NMS and the management protocol of choice to craft packets in a way that avoids fragmentation and reassembly, since these can use valuable memory on constrained devices.

   Devices using such access technologies might operate via a gateway that translates between these access technologies and more traditional Internet protocols.
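   Such a gateway typically holds management state on behalf of the devices behind it.  The sketch below, a minimal illustration with purely hypothetical data structures and no reference to any particular management protocol, shows a gateway that caches the last known state of each constrained device so that an NMS query can be answered without contacting the device or risking fragmentation on the constrained link.

      import time

      # Hypothetical per-device state kept by the gateway; the field
      # names are illustrative and not taken from any data model.
      class DeviceRecord:
          def __init__(self, device_id):
              self.device_id = device_id
              self.last_report = None   # time of the last device report
              self.attributes = {}      # e.g., {"battery": 87, "rssi": -71}

      class GatewayCache:
          """Caches device state so the NMS only talks to the gateway."""

          def __init__(self):
              self.devices = {}

          def report(self, device_id, attributes):
              # Called when a constrained device pushes a small report
              # over the constrained access technology.
              rec = self.devices.setdefault(device_id,
                                            DeviceRecord(device_id))
              rec.attributes.update(attributes)
              rec.last_report = time.time()

          def answer_nms_query(self, device_id):
              # Served from the cache over the unconstrained link; the
              # constrained device itself is not contacted.
              rec = self.devices.get(device_id)
              if rec is None:
                  return {"error": "unknown device"}
              return {"device": device_id,
                      "age_seconds": time.time() - rec.last_report,
                      "attributes": dict(rec.attributes)}

      gw = GatewayCache()
      gw.report("sensor-17", {"battery": 87, "rssi": -71})
      print(gw.answer_nms_query("sensor-17"))

   The point of the sketch is only that the per-device state lives on the gateway rather than on the NMS or on the devices themselves.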
   A hierarchical approach to device management in such a situation might be useful, wherein the gateway device is in charge of the devices connected to it, while the NMS conducts management operations only towards the gateway.

2.2.  Cellular Access Technologies

   Machine-to-machine (M2M) services are increasingly provided by mobile service providers as numerous devices, such as home appliances, utility meters, cars, video surveillance cameras, and health monitors, are connected with mobile broadband technologies.  Different applications, e.g., in a home appliance or in-car network, use Bluetooth, Wi-Fi, or ZigBee locally and connect to a cellular module acting as a gateway between the constrained environment and the mobile cellular network.

   Such a gateway might provide different options for the connectivity of mobile networks and constrained devices:

   o  a smart phone with 3G/4G and WLAN radio might use BT-LE to connect to the devices in a home area network,

   o  a femtocell might be combined with home gateway functionality, acting as a low-power cellular base station connecting smart devices to the application server of a mobile service provider,

   o  an embedded cellular module with LTE radio connecting the devices in the car network with the server running the telematics service,

   o  an M2M gateway connected to the mobile operator network supporting diverse IoT connectivity technologies including ZigBee and CoAP over 6LoWPAN over IEEE 802.15.4.

   Common to all scenarios above is that they are embedded in a service and connected to a network provided by a mobile service provider.  Usually there is a hierarchical deployment and management topology in place, where different parts of the network are managed by different management entities and the number of devices to manage is high (e.g., many thousands).  In general, the network comprises devices of many different types and sizes, matching different device classes.  As such, the managing entity needs to be prepared to manage devices with diverse capabilities using different communication or management protocols.  In case the devices are directly connected to a gateway, they are most likely managed by a management entity integrated with the gateway, which itself is part of the Network Management System (NMS) run by the mobile operator.  Smart phones or embedded modules connected to a gateway might themselves be in charge of managing the devices on their level.  The initial and subsequent configuration of such a device is mainly based on self-configuration and is triggered by the device itself.

   The gateway might be in charge of filtering and aggregating the data received from the device, as the information sent by the device might be mostly redundant.

3.  Use Cases

3.1.  Environmental Monitoring

   Environmental monitoring applications are characterized by the deployment of a number of sensors to monitor emissions, water quality, or even the movements and habits of wildlife.  Other applications in this category include earthquake or tsunami early-warning systems.  The sensors often span a large geographic area, they can be mobile, and they are often difficult to replace.  Furthermore, the sensors are usually not protected against tampering.
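   In practice, much of the day-to-day management effort in such a deployment reduces to noticing which sensors have silently stopped reporting.  The following minimal sketch, using hypothetical per-node report timestamps and an arbitrarily chosen 6-hour threshold, illustrates that kind of liveness check.

      from datetime import datetime, timedelta

      # Hypothetical last-report timestamps collected at the management
      # station for each sensor node.
      last_report = {
          "node-01": datetime(2014, 7, 4, 6, 30),
          "node-02": datetime(2014, 7, 3, 22, 10),
          "node-03": datetime(2014, 7, 4, 9, 5),
      }

      def silent_nodes(reports, now, max_silence=timedelta(hours=6)):
          """Return the nodes that have not reported within max_silence."""
          return sorted(node for node, seen in reports.items()
                        if now - seen > max_silence)

      now = datetime(2014, 7, 4, 12, 0)
      print(silent_nodes(last_report, now))   # -> ['node-02']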
   Management of environmental monitoring applications is largely concerned with monitoring whether the system is still functional and with the roll-out of new constrained devices in case the system loses too much of its structure.  The constrained devices themselves need to be able to establish connectivity (auto-configuration), and they need to be able to deal with events such as losing neighbors or being moved to other locations.

   Management responsibility typically rests with the organization running the environmental monitoring application.  Since these monitoring applications must be designed to tolerate a number of failures, the time scale for detecting and recording failures is, for some of these applications, likely measured in hours, and repairs might easily take days.  In fact, in some scenarios it might be more cost- and time-effective not to repair such devices at all.  However, for certain environmental monitoring applications, much tighter time scales may exist and might be enforced by regulations (e.g., monitoring of nuclear radiation).

3.2.  Infrastructure Monitoring

   Infrastructure monitoring is concerned with the monitoring of infrastructures such as bridges, railway tracks, or (offshore) windmills.  The primary goal is usually to detect any events or changes of the structural conditions that can impact the risk and safety of the infrastructure being monitored.  A secondary goal is to schedule repair and maintenance activities in a cost-effective manner.

   The infrastructure to monitor might be in a factory or spread over a wider area but difficult to access.  As such, the network in use might be based on a combination of fixed and wireless technologies, which use robust networking equipment and support reliable communication via application layer transactions.  It is likely that constrained devices in such a network are mainly C2 devices and have to be controlled centrally by an application running on a server.  In case such a distributed network is widely spread, the wireless devices might use diverse long-distance wireless technologies such as WiMAX or 3G/LTE, e.g., based on embedded hardware modules.  In cases where an in-building network is involved, the network can be based on Ethernet or wireless technologies suitable for in-building usage.

   The management of infrastructure monitoring applications is primarily concerned with monitoring the functioning of the system.  Infrastructure monitoring devices are typically rolled out and installed by dedicated experts, and changes are rare since the infrastructure itself changes rarely.  However, monitoring devices are often deployed in unsupervised environments, and hence special attention must be given to protecting the devices from being modified.

   Management responsibility typically rests with the organization owning the infrastructure or responsible for its operation.  The time scale for detecting and recording failures is likely measured in hours, and repairs might easily take days.  However, certain events (e.g., natural disasters) may require that status information be obtained much more quickly and that replacements of failed sensors can be rolled out quickly (or redundant sensors are activated quickly).  In case the devices are difficult to access, a self-healing feature on the device might become necessary.

3.3.
Industrial Applications 308 Industrial Applications and smart manufacturing refer to tasks such 309 as networked control and monitoring of manufacturing equipment, asset 310 and situation management, or manufacturing process control. For the 311 management of a factory it is becoming essential to implement smart 312 capabilities. From an engineering standpoint, industrial 313 applications are intelligent systems enabling rapid manufacturing of 314 new products, dynamic response to product demands, and real-time 315 optimization of manufacturing production and supply chain networks. 316 Potential industrial applications (e.g., for smart factories and 317 smart manufacturing) are: 319 o Digital control systems with embedded, automated process controls, 320 operator tools, as well as service information systems optimizing 321 plant operations and safety. 323 o Asset management using predictive maintenance tools, statistical 324 evaluation, and measurements maximizing plant reliability. 326 o Smart sensors detecting anomalies to avoid abnormal or 327 catastrophic events. 329 o Smart systems integrated within the industrial energy management 330 system and externally with the smart grid enabling real-time 331 energy optimization. 333 Management of Industrial Applications and smart manufacturing may in 334 some situations involve Building Automation tasks such as control of 335 energy, HVAC (heating, ventilation, and air conditioning), lighting, 336 or access control. Interacting with management systems from other 337 application areas might be important in some cases (e.g., 338 environmental monitoring for electric energy production, energy 339 management for dynamically scaling manufacturing, vehicular networks 340 for mobile asset tracking). 342 Sensor networks are an essential technology used for smart 343 manufacturing. Measurements, automated controls, plant optimization, 344 health and safety management, and other functions are provided by a 345 large number of networked sectors. Data interoperability and 346 seamless exchange of product, process, and project data are enabled 347 through interoperable data systems used by collaborating divisions or 348 business systems. Intelligent automation and learning systems are 349 vital to smart manufacturing but must be effectively integrated with 350 the decision environment. Wireless sensor networks (WSN) have been 351 developed for machinery Condition-based Maintenance (CBM) as they 352 offer significant cost savings and enable new functionalities. 353 Inaccessible locations, rotating machinery, hazardous areas, and 354 mobile assets can be reached with wireless sensors. WSNs can provide 355 today wireless link reliability, real-time capabilities, and quality- 356 of-service and enable industrial and related wireless sense and 357 control applications. 359 Management of industrial and factory applications is largely focused 360 on the monitoring whether the system is still functional, real-time 361 continuous performance monitoring, and optimization as necessary. 362 The factory network might be part of a campus network or connected to 363 the Internet. The constrained devices in such a network need to be 364 able to establish configuration themselves (auto-configuration) and 365 might need to deal with error conditions as much as possible locally. 366 Access control has to be provided with multi-level administrative 367 access and security. 
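   A minimal sketch of such multi-level administrative access control, with purely illustrative role and operation names, is given below; a real deployment would tie this to the authentication and authorization machinery of the management protocol in use.

      # Illustrative mapping of administrative roles to permitted
      # management operations; all names are hypothetical.
      ROLE_PERMISSIONS = {
          "operator":   {"read-status"},
          "maintainer": {"read-status", "set-parameter"},
          "admin":      {"read-status", "set-parameter", "firmware-update"},
      }

      def authorize(role, operation):
          """Return True if the given role may perform the operation."""
          return operation in ROLE_PERMISSIONS.get(role, set())

      assert authorize("maintainer", "set-parameter")
      assert not authorize("operator", "firmware-update")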
Support and diagnostics can be provided through 368 remote monitoring access centralized outside of the factory. 370 Management responsibility is typically owned by the organization 371 running the industrial application. Since the monitoring 372 applications must handle a potentially large number of failures, the 373 time scale for detecting and recording failures is for some of these 374 applications likely measured in minutes. However, for certain 375 industrial applications, much tighter time scales may exist, e.g. in 376 real-time, which might be enforced by the manufacturing process or 377 the use of critical material. 379 3.4. Energy Management 381 The EMAN working group developed an energy management framework 382 [I-D.ietf-eman-framework] for devices and device components within or 383 connected to communication networks. This document observes that one 384 of the challenges of energy management is that a power distribution 385 network is responsible for the supply of energy to various devices 386 and components, while a separate communication network is typically 387 used to monitor and control the power distribution network. Devices 388 in the context of energy management can be monitored for parameters 389 like Power, Energy, Demand and Power Quality. If a device contains 390 batteries, they can be also monitored and managed. 392 Energy devices differ in complexity and may include basic sensors or 393 switches, specialized electrical meters, or power distribution units 394 (PDU), and subsystems inside the network devices (routers, network 395 switches) or home or industrial appliances. The operators of an 396 Energy Management System are either the utility providers or 397 customers that aim to control and reduce the energy consumption and 398 the associated costs. The topology in use differs and the deployment 399 can cover areas from small surfaces (individual homes) to large 400 geographical areas. The EMAN requirements document [RFC6988] 401 discusses the requirements for energy management concerning 402 monitoring and control functions. 404 It is assumed that Energy Management will apply to a large range of 405 devices of all classes and networks topologies. Specific resource 406 monitoring like battery utilization and availability may be specific 407 to devices with lower physical resources (device classes C0 or C1). 409 Energy Management is especially relevant to the Smart Grid. A Smart 410 Grid is an electrical grid that uses data networks to gather and to 411 act on energy and power-related information in an automated fashion 412 with the goal to improve the efficiency, reliability, economics, and 413 sustainability of the production and distribution of electricity. 415 Smart Metering is a good example of Smart Grid based Energy 416 Management applications. Different types of possibly wireless small 417 meters produce all together a large amount of data, which is 418 collected by a central entity and processed by an application server, 419 which may be located within the customer's residence or off-site in a 420 data-center. The communication infrastructure can be provided by a 421 mobile network operator as the meters in urban areas will have most 422 likely a cellular or WiMAX radio. In case the application server is 423 located within the residence, such meters are more likely to use WiFi 424 protocols to interconnect with an existing network. 
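   As a rough illustration of the volume reduction a collecting entity can perform before readings reach the application server, the sketch below aggregates hypothetical per-meter readings into one summary record per meter; the field names are illustrative and not taken from any standardized energy data model.

      # Hypothetical 15-minute readings (kWh) reported by individual
      # meters to the collecting entity.
      readings = {
          "meter-0001": [0.12, 0.10, 0.15, 0.11],
          "meter-0002": [0.45, 0.50, 0.48, 0.52],
          "meter-0003": [0.00, 0.00, 0.02, 0.01],
      }

      def summarize(readings):
          """Collapse per-interval readings into one record per meter."""
          return {meter: {"intervals": len(values),
                          "total_kwh": round(sum(values), 3)}
                  for meter, values in readings.items()}

      for meter, summary in sorted(summarize(readings).items()):
          print(meter, summary)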
426 An Advanced Metering Infrastructure (AMI) network is another example 427 of the Smart Grid that enables an electric utility to retrieve 428 frequent electric usage data from each electric meter installed at a 429 customer's home or business. Unlike Smart Metering, in which case 430 the customer or their agents install appliance level meters, an AMI 431 infrastructure is typically managed by the utility providers and 432 could also include other distribution automation devices like 433 transformers and reclosers. Meters in AMI networks typically contain 434 constrained devices that connect to mesh networks with a low- 435 bandwidth radio. Usage data and outage notifications can be sent by 436 these meters to the utility's headend systems, via aggregation points 437 of higher-end router devices that bridge the constrained network to a 438 less constrained network via cellular, WiMAX, or Ethernet. Unlike 439 meters, these higher-end devices might be installed on utility poles 440 owned and operated by a separate entity. 442 It thereby becomes important for a management application to not only 443 be able to work with diverse types of devices, but also over multiple 444 links that might be operated and managed by separate entities, each 445 having divergent policies for their own devices and network segments. 446 During management operations, like firmware updates, it is important 447 that the management system performs robustly in order to avoid 448 accidental outages of critical power systems that could be part of 449 AMI networks. In fact, since AMI networks must also report on 450 outages, the management system might have to manage the energy 451 properties of battery operated AMI devices themselves as well. 453 A management system for home based Smart Metering solutions is likely 454 to have few devices laid out in a simple topology. However, AMI 455 networks installations could have thousands of nodes per router, 456 i.e., higher-end device, which organize themselves in an ad-hoc 457 manner. As such, a management system for AMI networks will need to 458 discover and operate over complex topologies as well. In some 459 situations, it is possible that the management system might also have 460 to setup and manage the topology of nodes, especially critical 461 routers. Encryption key management and sharing in both types of 462 network is also likely to be important for providing confidentiality 463 for all data traffic. In AMI networks the key may be obtained by a 464 meter only after an end-to-end authentication process based on 465 certificates. Smart Metering solution could adopt a similar approach 466 or the security may be implied due to the encrypted WiFi networks 467 they become part of. 469 The management of such a network requires end-to-end management of 470 and information exchange through different types of networks. 472 However, as of today there is no integrated energy management 473 approach and no common information model available. Specific energy 474 management applications or network islands use their own management 475 mechanisms. 477 3.5. Medical Applications 479 Constrained devices can be seen as an enabling technology for 480 advanced and possibly remote health monitoring and emergency 481 notification systems, ranging from blood pressure and heart rate 482 monitors to advanced devices capable to monitor implanted 483 technologies, such as pacemakers or advanced hearing aids. 
   Medical sensors may not only be attached to human bodies; they might also exist in the infrastructure used by humans, such as bathrooms or kitchens.  Medical applications will also be used to ensure treatments are being applied properly, and they might guide people who are losing orientation.  Fitness and wellness applications, such as connected scales or wearable heart monitors, encourage consumers to exercise and empower self-monitoring of key fitness indicators.  Different applications use Bluetooth, Wi-Fi, or ZigBee connections to reach the patient's smartphone or home cellular connection in order to access the Internet.

   Constrained devices that are part of medical applications are managed either by the users of those devices or by an organization providing medical (monitoring) services for physicians.  In the first case, management must be automatic and/or easy to install and set up by average people.  In the second case, it can be expected that devices be controlled by specially trained people.  In both cases, however, it is crucial to protect the privacy of the people to whom medical devices are attached.  Even though the data collected by a heartbeat monitor might be protected, the mere fact that someone carries such a device may need protection.  As such, certain medical appliances may not want to participate in discovery and self-configuration protocols in order to remain invisible.

   Many medical devices are likely to be used (and relied upon) to provide data to physicians in critical situations, since the biggest market is likely elderly and handicapped people.  Timely delivery of data can be quite important in certain applications like patient mobility monitoring in old-age homes.  Data must reach the physician and/or emergency services within specified limits of time in order to be useful.  As such, fault detection of the communication network or the constrained devices becomes a crucial function of the management system that must be carried out with high reliability and, depending on the medical appliance and its application, within seconds.

3.6.  Building Automation

   Building automation comprises the distributed systems designed and deployed to monitor and control the mechanical, electrical, and electronic systems inside buildings with various uses (e.g., public or private, industrial, institutional, or residential).  Advanced Building Automation Systems (BAS) may be deployed integrating the various functions of safety, environmental control, occupancy, and security.  Increasingly, the deployment of the various functional systems is connected to the same communication infrastructure (possibly Internet Protocol based), which may involve wired or wireless communications networks inside the building.

   Building automation requires the deployment of a large number (10 to 100,000) of sensors that monitor the status of devices and parameters inside the building, and of controllers with different specialized functionality for areas within the building or the building as a whole.  Distances between neighboring nodes vary between 1 and 20 meters.  Contrary to home automation, in building management the devices are expected to be managed assets that are known to a set of commissioning tools and a data store, such that every connected device has a known origin.
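   A minimal sketch of checking the devices actually observed on the building network against such a commissioning data store is given below; the identifiers and data structures are hypothetical.

      # Hypothetical commissioning inventory and the set of devices
      # currently observed on the building network.
      commissioned = {"lum-2-014", "lum-2-015", "hvac-2-003", "fire-2-001"}
      observed     = {"lum-2-014", "hvac-2-003", "fire-2-001", "cam-9-999"}

      missing    = commissioned - observed   # expected but not seen
      unexpected = observed - commissioned   # seen but never commissioned

      print("missing:", sorted(missing))        # -> ['lum-2-015']
      print("unexpected:", sorted(unexpected))  # -> ['cam-9-999']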
The management includes 541 verifying the presence of the expected devices and detecting the 542 presence of unwanted devices. 544 Examples of functions performed by such controllers are regulating 545 the quality, humidity, and temperature of the air inside the building 546 and lighting. Other systems may report the status of the machinery 547 inside the building like elevators, or inside the rooms like 548 projectors in meeting rooms. Security cameras and sensors may be 549 deployed and operated on separate dedicated infrastructures connected 550 to the common backbone. The deployment area of a BAS is typically 551 inside one building (or part of it) or several buildings 552 geographically grouped in a campus. A building network can be 553 composed of network segments, where a network segment covers a floor, 554 an area on the floor, or a given functionality (e.g., security 555 cameras). 557 Some of the sensors in Building Automation Systems (for example fire 558 alarms or security systems) register, record and transfer critical 559 alarm information and therefore must be resilient to events like loss 560 of power or security attacks. This leads to the need to certify 561 components and subsystems operating in such constrained conditions 562 based on specific requirements. Also in some environments, the 563 malfunctioning of a control system (like temperature control) needs 564 to be reported in the shortest possible time. Complex control 565 systems can misbehave, and their critical status reporting and safety 566 algorithms need to be basic and robust and perform even in critical 567 conditions. 569 Building Automation solutions are deployed in some cases in newly 570 designed buildings, in other cases it might be over existing 571 infrastructures. In the first case, there is a broader range of 572 possible solutions, which can be planned for the infrastructure of 573 the building. In the second case the solution needs to be deployed 574 over an existing infrastructure taking into account factors like 575 existing wiring, distance limitations, the propagation of radio 576 signals over walls and floors, thereby making deployment difficult. 577 As a result, some of the existing WLAN solutions (e.g., IEEE 802.11 578 or IEEE 802.15) may be deployed. In mission-critical or security 579 sensitive environments and in cases where link failures happen often, 580 topologies that allow for reconfiguration of the network and 581 connection continuity may be required. Some of the sensors deployed 582 in building automation may be very simple constrained devices for 583 which class 0 or class 1 may be assumed. 585 For lighting applications, groups of lights must be defined and 586 managed. Commands to a group of light must arrive within 200 ms at 587 all destinations. The installation and operation of a building 588 network has different requirements. During the installation, many 589 stand-alone networks of a few to 100 nodes co-exist without a 590 connection to the backbone. During this phase, the nodes are 591 identified with a network identifier related to their physical 592 location. Devices are accessed from an installation tool to connect 593 them to the network in a secure fashion. During installation, the 594 setting of parameters of common values to enable interoperability may 595 be required. During operation, the networks are connected to the 596 backbone while maintaining the network identifier to physical 597 location relation. 
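   One simple way to keep this relation usable by management tools is to encode the physical location in the device name itself; the sketch below assumes a purely hypothetical naming convention of the form <device>.<room>.<floor>.<building>.

      def location_from_name(fqdn):
          """Split a location-encoding device name into its components.

          Assumes the hypothetical convention
          <device>.<room>.<floor>.<building>.example.com.
          """
          device, room, floor, building = fqdn.rstrip(".").split(".")[:4]
          return {"device": device, "room": room,
                  "floor": floor, "building": building}

      print(location_from_name("lum-014.r214.floor2.bldg7.example.com"))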
Network parameters like address and name are 598 stored in DNS. The names can assist in determining the physical 599 location of the device. 601 3.7. Home Automation 603 Home automation includes the control of lighting, heating, 604 ventilation, air conditioning, appliances, entertainment and home 605 security devices to improve convenience, comfort, energy efficiency, 606 and security. It can be seen as a residential extension of building 607 automation. However, unlike a building automation system, the 608 infrastructure in a home is operated in a considerably more ad-hoc 609 manner. While in some installations it is likely that there is no 610 centralized management system, akin to a Building Automation System 611 (BAS), available, in other situations outsourced and cloud based 612 systems responsible for managing devices in the home might be used. 614 Home automation networks need a certain amount of configuration 615 (associating switches or sensors to actors) that is either provided 616 by electricians deploying home automation solutions, by third party 617 home automation service providers (e.g., small specialized companies 618 or home automation device manufacturers) or by residents by using the 619 application user interface provided by home automation devices to 620 configure (parts of) the home automation solution. Similarly, 621 failures may be reported via suitable interfaces to residents or they 622 might be recorded and made available to services providers in charge 623 of the maintenance of the home automation infrastructure. 625 The management responsibility lies either with the residents or it 626 may be outsourced to electricians and/or third parties providing 627 management of home automation solutions as a service. A varying 628 combination of electricians, service providers or the residents may 629 be responsible for different aspects of managing the infrastructure. 630 The time scale for failure detection and resolution is in many cases 631 likely counted in hours to days. 633 3.8. Transport Applications 635 Transport Application is a generic term for the integrated 636 application of communications, control, and information processing in 637 a transportation system. Transport telematics or vehicle telematics 638 are used as a term for the group of technologies that support 639 transportation systems. Transport applications running on such a 640 transportation system cover all modes of the transport and consider 641 all elements of the transportation system, i.e. the vehicle, the 642 infrastructure, and the driver or user, interacting together 643 dynamically. Examples for transport applications are inter and intra 644 vehicular communication, smart traffic control, smart parking, 645 electronic toll collection systems, logistic and fleet management, 646 vehicle control, and safety and road assistance. 648 As a distributed system, transport applications require an end-to-end 649 management of different types of networks. It is likely that 650 constrained devices in a network (e.g. a moving in-car network) have 651 to be controlled by an application running on an application server 652 in the network of a service provider. Such a highly distributed 653 network including cellular devices on vehicles is assumed to include 654 a wireless access network using diverse long distance wireless 655 technologies such as WiMAX, 3G/LTE or satellite communication, e.g. 656 based on an embedded hardware module. 
   As a result, the management of constrained devices in the transport system might need to be planned top-down and might need to use data models imposed by and defined on the application layer.  The assumed device classes in use are mainly C2 devices.  In cases where an in-vehicle network is involved, C1 devices with limited capabilities and a short-distance constrained radio network, e.g., IEEE 802.15.4, might additionally be used.

   All transport applications will require an IT infrastructure to run on top of; e.g., in public transport scenarios like trains, buses, or metros, the network infrastructure might be provided, maintained, and operated by third parties like mobile network or satellite network operators.  However, the management responsibility of the transport application typically rests with the organization running the transport application (in the public transport scenario, this would typically be the public transport operator).  Different aspects of the infrastructure might also be managed by different entities.  For example, the in-car devices are likely to be installed and managed by the manufacturer, while the public works department might be responsible for the on-road vehicular communication infrastructure used by these devices.  The back-end infrastructure is also likely to be maintained by third-party operators.  As such, the NMS must be able to deal with different network segments, each being operated and controlled by separate entities, and enable appropriate access control and security as well.

   Depending on the type of application domain (vehicular or stationary) and the service being provided, it would be important for the NMS to be able to function with different architectures, since different manufacturers might have their own proprietary systems relying on a specific Management Topology Option, as described in [COM-REQ].  Moreover, constituents of the network can be either private, belonging to individuals or private companies, or owned by public institutions, leading to different legal and organizational requirements.  Across the entire infrastructure, a variety of constrained devices are likely to be used and must be individually managed.  The NMS must either be able to work directly with different types of devices or have the ability to interoperate with multiple different systems.

   The challenges in the management of vehicles in a mobile transport application are manifold.  First, the up-to-date position of each node in the network should be reported to the corresponding management entities, since the nodes could be moving within or roaming between different networks.  Second, a variety of troubleshooting information, including sensitive location information, needs to be reported to the management system in order to provide accurate service to the customer.  Management systems dealing with mobile nodes could possibly exploit specific patterns in the mobility of the nodes.  These patterns emerge due to repetitive vehicular usage in scenarios like people commuting to work or logistics supply vehicles transporting shipments between warehouses.  The NMS must also be able to handle partitioned networks, which can arise when the dynamic nature of traffic results in large inter-vehicle gaps in sparsely populated scenarios.
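   One common way for an NMS to cope with nodes that are only intermittently reachable is to queue configuration updates and deliver them the next time a node reports in.  The sketch below illustrates this idea with hypothetical node identifiers and update payloads.

      from collections import defaultdict, deque

      class DeferredConfigQueue:
          """Holds configuration updates for currently unreachable nodes."""

          def __init__(self):
              self.pending = defaultdict(deque)

          def schedule(self, node_id, update):
              # Called by the NMS; the node may be partitioned away or
              # roaming in a remote network right now.
              self.pending[node_id].append(update)

          def on_node_contact(self, node_id):
              # Called when the node next reports in, from whatever
              # network it is currently attached to.
              return list(self.pending.pop(node_id, []))

      queue = DeferredConfigQueue()
      queue.schedule("vehicle-0042", {"report-interval": 300})
      queue.schedule("vehicle-0042", {"traffic-profile": "sparse"})
      print(queue.on_node_contact("vehicle-0042"))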
Since mobile nodes might roam in 712 remote networks, the NMS should be able to provide operating 713 configuration updates regardless of node location. 715 The constrained devices in a moving transport network might be 716 initially configured in a factory and a reconfiguration might be 717 needed only rarely. New devices might be integrated in an ad-hoc 718 manner based on self-management and -configuration capabilities. 719 Monitoring and data exchange might be necessary to do via a gateway 720 entity connected to the back-end transport infrastructure. The 721 devices and entities in the transport infrastructure need to be 722 monitored more frequently and can be able to communicate with a 723 higher data rate. The connectivity of such entities does not 724 necessarily need to be wireless. The time scale for detecting and 725 recording failures in a moving transport network is likely measured 726 in hours and repairs might easily take days. It is likely that a 727 self-healing feature would be used locally. On the other hand, 728 failures in fixed transport application infrastructure (e.g., 729 traffic-lights, digital signage displays) is likely to be measured in 730 minutes so as to avoid untoward traffic incidents. As such, the NMS 731 must be able to deal with differing timeliness requirements based on 732 the type of devices. 734 3.9. Community Network Applications 736 Community networks are comprised of constrained routers in a multi- 737 hop mesh topology, communicating over a lossy, and often wireless 738 channel. While the routers are mostly non-mobile, the topology may 739 be very dynamic because of fluctuations in link quality of the 740 (wireless) channel caused by, e.g., obstacles, or other nearby radio 741 transmissions. Depending on the routers that are used in the 742 community network, the resources of the routers (memory, CPU) may be 743 more or less constrained - available resources may range from only a 744 few kilobytes of RAM to several megabytes or more, and CPUs may be 745 small and embedded, or more powerful general-purpose processors. 746 Examples of such community networks are the FunkFeuer network 747 (Vienna, Austria), FreiFunk (Berlin, Germany), Seattle Wireless 748 (Seattle, USA), and AWMN (Athens, Greece). These community networks 749 are public and non-regulated, allowing their users to connect to each 750 other and - through an uplink to an ISP - to the Internet. No fee, 751 other than the initial purchase of a wireless router, is charged for 752 these services. Applications of these community networks can be 753 diverse, e.g., location based services, free Internet access, file 754 sharing between users, distributed chat services, social networking 755 etc, video sharing etc. 757 As an example of a community network, the FunkFeuer network comprises 758 several hundred routers, many of which have several radio interfaces 759 (with omnidirectional and some directed antennas). The routers of 760 the network are small-sized wireless routers, such as the Linksys 761 WRT54GL, available in 2011 for less than 50 Euros. These routers, 762 with 16 MB of RAM and 264 MHz of CPU power, are mounted on the 763 rooftops of the users. When new users want to connect to the 764 network, they acquire a wireless router, install the appropriate 765 firmware and routing protocol, and mount the router on the rooftop. 
766 IP addresses for the router are assigned manually from a list of 767 addresses (because of the lack of autoconfiguration standards for 768 mesh networks in the IETF). 770 While the routers are non-mobile, fluctuations in link quality 771 require an ad hoc routing protocol that allows for quick convergence 772 to reflect the effective topology of the network (such as NHDP 773 [RFC6130] and OLSRv2 [RFC7181] developed in the MANET WG). Usually, 774 no human interaction is required for these protocols, as all variable 775 parameters required by the routing protocol are either negotiated in 776 the control traffic exchange, or are only of local importance to each 777 router (i.e. do not influence interoperability). However, external 778 management and monitoring of an ad hoc routing protocol may be 779 desirable to optimize parameters of the routing protocol. Such an 780 optimization may lead to a more stable perceived topology and to a 781 lower control traffic overhead, and therefore to a higher delivery 782 success ratio of data packets, a lower end-to-end delay, and less 783 unnecessary bandwidth and energy usage. 785 Different use cases for the management of community networks are 786 possible: 788 o One single Network Management Station, e.g. a border gateway 789 providing connectivity to the Internet, requires managing or 790 monitoring routers in the community network, in order to 791 investigate problems (monitoring) or to improve performance by 792 changing parameters (managing). As the topology of the network is 793 dynamic, constant connectivity of each router towards the 794 management station cannot be guaranteed. Current network 795 management protocols, such as SNMP and Netconf, may be used (e.g., 796 using interfaces such as the NHDP-MIB [RFC6779]). However, when 797 routers in the community network are constrained, existing 798 protocols may require too many resources in terms of memory and 799 CPU; and more importantly, the bandwidth requirements may exceed 800 the available channel capacity in wireless mesh networks. 801 Moreover, management and monitoring may be unfeasible if the 802 connection between the network management station and the routers 803 is frequently interrupted. 805 o A distributed network monitoring, in which more than one 806 management station monitors or manages other routers. Because 807 connectivity to a server cannot be guaranteed at all times, a 808 distributed approach may provide a higher reliability, at the cost 809 of increased complexity. Currently, no IETF standard exists for 810 distributed monitoring and management. 812 o Monitoring and management of a whole network or a group of 813 routers. Monitoring the performance of a community network may 814 require more information than what can be acquired from a single 815 router using a network management protocol. Statistics, such as 816 topology changes over time, data throughput along certain routing 817 paths, congestion etc., are of interest for a group of routers (or 818 the routing domain) as a whole. As of 2012, no IETF standard 819 allows for monitoring or managing whole networks, instead of 820 single routers. 822 3.10. Field Operations 824 The challenges of configuration and monitoring of networks operated 825 in the field by rescue and security agencies can be different from 826 the other use cases since the requirements and operating conditions 827 of such networks are quite different. 
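   One recurring requirement, discussed further below, is that configuration changes pushed from a Network Operations Center must be authenticated before a field device applies them.  The sketch below shows the general idea using an HMAC over the configuration payload; the key handling and message format are hypothetical, and a real deployment would rely on the security mechanisms of its management protocol.

      import hashlib
      import hmac

      SHARED_KEY = b"example-key-provisioned-before-deployment"  # hypothetical

      def protect(config_bytes, key=SHARED_KEY):
          """Return the configuration together with its authentication tag."""
          tag = hmac.new(key, config_bytes, hashlib.sha256).hexdigest()
          return {"config": config_bytes, "tag": tag}

      def verify_and_apply(message, key=SHARED_KEY):
          """Apply the configuration only if the tag is authentic."""
          expected = hmac.new(key, message["config"],
                              hashlib.sha256).hexdigest()
          if not hmac.compare_digest(expected, message["tag"]):
              return False   # reject the unauthenticated change
          # ... apply message["config"] here ...
          return True

      msg = protect(b"radio-channel=5; tx-power=low")
      print(verify_and_apply(msg))   # -> True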
   With technology advancements, field networks operated nowadays are becoming large and can consist of a variety of different types of equipment running different protocols and tools, which increases the complexity of these mission-critical networks.  In many scenarios, configurations are most likely performed manually.  Furthermore, some legacy and even modern devices do not support IP networking.  The majority of the vendor-developed protocols and tools in use are proprietary, which makes integration more difficult.

   The main reason for this disjoint operation scenario is that most equipment is developed with specific task requirements in mind, rather than interoperability of the varied equipment types.  For example, the operating conditions experienced by high-altitude security equipment are significantly different from those experienced by equipment used in desert conditions.  Similarly, search and rescue equipment used for fire rescue has different requirements than flood relief equipment.  Furthermore, interoperation with telecommunication equipment was not an expected outcome, or in some scenarios may not even be desirable.

   Currently, field networks operate with a fixed Network Operations Center (NOC) that physically manages the configuration and evaluation of all field devices.  Once configured, the devices might be deployed in fixed or mobile scenarios.  Any configuration changes required would need to be appropriately encrypted and authenticated to prevent unauthorized access.

   Hierarchical management of devices is a common requirement in such scenarios, since local managers or operators may need to respond to changing conditions within their purview.  The level of configuration management available at each tier of the hierarchy must also be closely governed.

   Since many field operation devices are used in hostile environments, a high failure and disconnection rate should be tolerated by the NMS, which must also be able to deal with multiple gateways and disjoint management protocols.

   Multi-national field operations involving search, rescue, and security are becoming increasingly common, requiring the interoperation of a diverse set of equipment designed with different operating conditions in mind.  Furthermore, different intra- and inter-governmental agencies are likely to have a different set of standards, best practices, rules and regulations, and implementation approaches that may contradict or conflict with each other.  The NMS should be able to detect these and handle them in an acceptable manner, which may require human intervention.

4.  IANA Considerations

   This document does not introduce any new code-points or namespaces for registration with IANA.

   Note to RFC Editor: this section may be removed on publication as an RFC.

5.  Security Considerations

   This document discusses use cases for the management of networks with constrained devices.  The security considerations described throughout the companion document [COM-REQ] apply here as well.

6.  Contributors

   The following persons made significant contributions to and reviewed this document:

   o  Ulrich Herberg (Fujitsu Laboratories of America) contributed Section 3.9 on Community Network Applications.

   o  Peter van der Stok contributed to Section 3.6 on Building Automation.
902 o Zhen Cao contributed to Section 2.2 Cellular Access Technologies. 904 o Gilman Tolle contributed the Section 3.4 on Automated Metering 905 Infrastructure. 907 o James Nguyen and Ulrich Herberg contributed to Section 3.10 on 908 Military operations. 910 7. Acknowledgments 912 Following persons reviewed and provided valuable comments to 913 different versions of this document: 915 Dominique Barthel, Carsten Bormann, Zhen Cao, Benoit Claise, Bert 916 Greevenbosch, Ulrich Herberg, James Nguyen, Zach Shelby, and Peter 917 van der Stok. 919 The editors would like to thank the reviewers and the participants on 920 the Coman maillist for their valuable contributions and comments. 922 8. Informative References 924 [RFC6130] Clausen, T., Dearlove, C., and J. Dean, "Mobile Ad Hoc 925 Network (MANET) Neighborhood Discovery Protocol (NHDP)", 926 RFC 6130, April 2011. 928 [RFC6568] Kim, E., Kaspar, D., and JP. Vasseur, "Design and 929 Application Spaces for IPv6 over Low-Power Wireless 930 Personal Area Networks (6LoWPANs)", RFC 6568, April 2012. 932 [RFC6779] Herberg, U., Cole, R., and I. Chakeres, "Definition of 933 Managed Objects for the Neighborhood Discovery Protocol", 934 RFC 6779, October 2012. 936 [RFC6988] Quittek, J., Chandramouli, M., Winter, R., Dietz, T., and 937 B. Claise, "Requirements for Energy Management", RFC 6988, 938 September 2013. 940 [RFC7181] Clausen, T., Dearlove, C., Jacquet, P., and U. Herberg, 941 "The Optimized Link State Routing Protocol Version 2", RFC 942 7181, April 2014. 944 [RFC7228] Bormann, C., Ersue, M., and A. Keranen, "Terminology for 945 Constrained-Node Networks", RFC 7228, May 2014. 947 [I-D.ietf-eman-framework] 948 Claise, B., Schoening, B., and J. Quittek, "Energy 949 Management Framework", draft-ietf-eman-framework-19 (work 950 in progress), April 2014. 952 [COM-REQ] Ersue, M., Romascanu, D., and J. Schoenwaelder, 953 "Management of Networks with Constrained Devices: Problem 954 Statement and Requirements", draft-ietf-opsawg-coman- 955 probstate-reqs (work in progress), February 2014. 957 Appendix A. Change Log 959 A.1. draft-ietf-opsawg-coman-use-cases-01 - draft-ietf-opsawg-coman- 960 use-cases-02 962 o Renamed Mobile Access Technologies section to Cellular Access 963 Technologies 965 o Changed references to mobile access technologies to now read 966 cellular access technologies. 968 o Added text to the introduction to point out that the list of use 969 cases is not exhaustive since others unknown to the authors might 970 exist. 972 o Updated references to take into account RFCs that have been now 973 published. 975 o Updated Environmental Monitoring section to make it clear that in 976 some scenarios it may not be prudent to repair devices. 978 o Added clarification in Infrastructure Monitoring section that 979 reliable communication is achieved via application layer 980 transactions 982 o Removed reference to Energy Devices from Energy Management 983 section, instead labeling them as devices within the context of 984 energy management. 986 o Reduced descriptive content in Energy Management section. 988 o Rewrote text in Energy Management section to highlight management 989 characteristics of Smart Meter and AMI networks. 991 o Added text regarding timely delivery of information, and related 992 management system characteristic, to the Medical Applications 993 section 995 o Changed subnets to network segment in Building Automation section. 
997 o Changed structure to infrastructure in Building Automation 998 section, and added text to highlight associated deployment 999 difficulties. 1001 o Removed Trickle timer as example of common values to be set in 1002 Building Automation section. 1004 o Added text regarding the possible availability of outsourced and 1005 cloud based management systems for Home Automation. 1007 o Added text to Transport Applications section to highlight the 1008 requirement of IT infrastructure for such applications to function 1009 on top of. 1011 o Merged the Transport Applications and Vehicular Networks section 1012 together. Following changes to the Vehicular Networks section 1013 were merged back into Transport Applications 1015 * Replaced wireless last hops with wireless access to vehicles in 1016 Vehicular Networks. 1018 * Expanded proprietary systems to "systems relying on a specific 1019 Management Topology Option, as described in [COM-REQ]." within 1020 Vehicular Networks section. 1022 * Added text regarding mobility patterns to Vehicular Networks. 1024 o Changed the Military Operations use case to Field Operations and 1025 edited the text to be suitable to such scenarios. 1027 A.2. draft-ietf-opsawg-coman-use-cases-00 - draft-ietf-opsawg-coman- 1028 use-cases-01 1030 o Reordered some use cases to improve the flow. 1032 o Added "Vehicular Networks". 1034 o Shortened the Military Operations use case. 1036 o Started adding substance to the security considerations section. 1038 A.3. draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg-coman-use- 1039 cases-00 1041 o Reduced the terminology section for terminology addressed in the 1042 LWIG and Coman Requirements drafts. Referenced the other drafts. 1044 o Checked and aligned all terminology against the LWIG terminology 1045 draft. 1047 o Spent some effort to resolve the intersection between the 1048 Industrial Application, Home Automation and Building Automation 1049 use cases. 1051 o Moved section section 3. Use Cases from the companion document 1052 [COM-REQ] to this draft. 1054 o Reformulation of some text parts for more clarity. 1056 A.4. draft-ersue-constrained-mgmt-02-03 1058 o Extended the terminology section and removed some of the 1059 terminology addressed in the new LWIG terminology draft. 1060 Referenced the LWIG terminology draft. 1062 o Moved Section 1.3. on Constrained Device Classes to the new LWIG 1063 terminology draft. 1065 o Class of networks considering the different type of radio and 1066 communication technologies in use and dimensions extended. 1068 o Extended the Problem Statement in Section 2. following the 1069 requirements listed in Section 4. 1071 o Following requirements, which belong together and can be realized 1072 with similar or same kind of solutions, have been merged. 1074 * Distributed Management and Peer Configuration, 1076 * Device status monitoring and Neighbor-monitoring, 1078 * Passive Monitoring and Reactive Monitoring, 1080 * Event-driven self-management - Self-healing and Periodic self- 1081 management, 1083 * Authentication of management systems and Authentication of 1084 managed devices, 1086 * Access control on devices and Access control on management 1087 systems, 1089 * Management of Energy Resources and Data models for energy 1090 management, 1092 * Software distribution (group-based firmware update) and Group- 1093 based provisioning. 1095 o Deleted the empty section on the gaps in network management 1096 standards, as it will be written in a separate draft. 
1098 o Added links to mentioned external pages. 1100 o Added text on OMA M2M Device Classification in appendix. 1102 A.5. draft-ersue-constrained-mgmt-01-02 1104 o Extended the terminology section. 1106 o Added additional text for the use cases concerning deployment 1107 type, network topology in use, network size, network capabilities, 1108 radio technology, etc. 1110 o Added examples for device classes in a use case. 1112 o Added additional text provided by Cao Zhen (China Mobile) for 1113 Mobile Applications and by Peter van der Stok for Building 1114 Automation. 1116 o Added the new use cases 'Advanced Metering Infrastructure' and 1117 'MANET Concept of Operations in Military'. 1119 o Added the section 'Managing the Constrainedness of a Device or 1120 Network' discussing the needs of very constrained devices. 1122 o Added a note that the requirements in [COM-REQ] need to be seen as 1123 standalone requirements and the current document does not 1124 recommend any profile of requirements. 1126 o Added a section in [COM-REQ] for the detailed requirements on 1127 constrained management matched to management tasks like fault, 1128 monitoring, configuration management, Security and Access Control, 1129 Energy Management, etc. 1131 o Solved nits and added references. 1133 o Added Appendix A on the related development in other bodies. 1135 o Added Appendix B on the work in related research projects. 1137 A.6. draft-ersue-constrained-mgmt-00-01 1139 o Splitted the section on 'Networks of Constrained Devices' into the 1140 sections 'Network Topology Options' and 'Management Topology 1141 Options'. 1143 o Added the use case 'Community Network Applications' and 'Mobile 1144 Applications'. 1146 o Provided a Contributors section. 1148 o Extended the section on 'Medical Applications'. 1150 o Solved nits and added references. 1152 Authors' Addresses 1154 Mehmet Ersue (editor) 1155 Nokia Networks 1157 Email: mehmet.ersue@nsn.com 1159 Dan Romascanu 1160 Avaya 1162 Email: dromasca@avaya.com 1164 Juergen Schoenwaelder 1165 Jacobs University Bremen 1167 Email: j.schoenwaelder@jacobs-university.de 1169 Anuj Sehgal 1170 Jacobs University Bremen 1172 Email: s.anuj@jacobs-university.de