Internet Engineering Task Force                           M. Ersue, Ed.
Internet-Draft                                            Nokia Networks
Intended status: Informational                             D. Romascanu
Expires: September 2, 2015                                        Avaya
                                                       J. Schoenwaelder
                                                              A. Sehgal
                                               Jacobs University Bremen
                                                          March 1, 2015

      Management of Networks with Constrained Devices: Use Cases
                  draft-ietf-opsawg-coman-use-cases-05

Abstract

   This document discusses use cases concerning the management of
   networks in which constrained devices are involved.  A problem
   statement, deployment options, and the requirements on networks
   with constrained devices can be found in the companion document
   "Management of Networks with Constrained Devices: Problem Statement
   and Requirements".

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 2, 2015.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Access Technologies
     2.1.  Constrained Access Technologies
     2.2.  Cellular Access Technologies
   3.  Device Lifecycle
     3.1.  Manufacturing and Initial Testing
     3.2.  Installation and Configuration
     3.3.  Operation and Maintenance
     3.4.  Recommissioning and Decommissioning
   4.  Use Cases
     4.1.  Environmental Monitoring
     4.2.  Infrastructure Monitoring
     4.3.  Industrial Applications
     4.4.  Energy Management
     4.5.  Medical Applications
     4.6.  Building Automation
     4.7.  Home Automation
     4.8.  Transport Applications
     4.9.  Community Network Applications
     4.10. Field Operations
   5.  IANA Considerations
   6.  Security Considerations
   7.  Contributors
   8.  Acknowledgments
   9.  Informative References
   Appendix A.  Change Log
     A.1.  draft-ietf-opsawg-coman-use-cases-04 - draft-ietf-opsawg-
           coman-use-cases-05
     A.2.  draft-ietf-opsawg-coman-use-cases-03 - draft-ietf-opsawg-
           coman-use-cases-04
     A.3.  draft-ietf-opsawg-coman-use-cases-02 - draft-ietf-opsawg-
           coman-use-cases-03
     A.4.  draft-ietf-opsawg-coman-use-cases-01 - draft-ietf-opsawg-
           coman-use-cases-02
     A.5.  draft-ietf-opsawg-coman-use-cases-00 - draft-ietf-opsawg-
           coman-use-cases-01
     A.6.  draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg-
           coman-use-cases-00
     A.7.  draft-ersue-constrained-mgmt-02-03
     A.8.  draft-ersue-constrained-mgmt-01-02
     A.9.  draft-ersue-constrained-mgmt-00-01
   Authors' Addresses

1.  Introduction

   Small devices with limited CPU, memory, and power resources, so-
   called constrained devices (also known as sensors, smart objects,
   or smart devices), can be connected to a network.  Such a network
   of constrained devices may itself be constrained or challenged,
   e.g., with unreliable or lossy channels, or wireless technologies
   with limited bandwidth and a dynamic topology, and may need the
   services of a gateway or proxy to connect to the Internet.  In
   other scenarios, the constrained devices can be connected to a
   non-constrained network using off-the-shelf protocol stacks.
   Constrained devices might be in charge of gathering information in
   diverse settings, including natural ecosystems, buildings, and
   factories, and sending this information to one or more server
   stations.

   Network management is characterized by monitoring network status,
   detecting faults and inferring their causes, setting network
   parameters, and carrying out actions to remove faults, maintain
   normal operation, and improve network efficiency and application
   performance.  A traditional network management application
   periodically collects information from the set of elements it needs
   to manage, processes the data, and presents the results to the
   network management users.  Constrained devices, however, often have
   limited power and low transmission range, and they might be
   unreliable.  Such unreliability might arise from the device itself
   (e.g., an exhausted battery) or from the channel being constrained
   (i.e., low-capacity and high-latency).  Constrained devices might
   also need to work in hostile environments with advanced security
   requirements, or operate in harsh environments for a long time
   without supervision.  Due to such constraints, the management of a
   network with constrained devices poses different types of
   challenges compared to the management of a traditional IP network.

   This document aims to describe use cases for the management of
   networks in which constrained devices are involved.  It lists and
   discusses diverse use cases for management from both the network
   and the application point of view.  The list of discussed use cases
   is not exhaustive, since other scenarios, currently unknown to the
   authors, are possible.  The application scenarios discussed aim to
   show where networks of constrained devices are expected to be
   deployed.  For each application scenario, we first briefly describe
   its characteristics, followed by a discussion of how network
   management can be provided, who is likely to be responsible for it,
   and on which time scale management operations are likely to be
   carried out.

   A problem statement, deployment and management topology options, as
   well as the requirements on networks with constrained devices, can
   be found in the companion document [COM-REQ].

   This document builds on the terminology defined in [RFC7228] and
   [COM-REQ].  [RFC7228] is a base document for the terminology
   concerning constrained devices and constrained networks.  Some use
   cases specific to IPv6 over Low-Power Wireless Personal Area
   Networks (6LoWPANs) can be found in [RFC6568].

2.  Access Technologies

   Besides the management requirements imposed by the different use
   cases, the access technologies used by constrained devices can
   impose restrictions and requirements upon the Network Management
   System (NMS) and the management protocol of choice.

   It is possible that some networks of constrained devices utilize
   traditional non-constrained access technologies for network access,
   e.g., local area networks with plenty of capacity.  In such
   scenarios, it is the constrainedness of the devices, rather than
   the access technology, that imposes special management restrictions
   and requirements.

   In other situations, however, constrained or cellular access
   technologies might be used for network access, and the management
   restrictions and requirements then arise from the underlying access
   technologies.
   This section discusses the impact of cellular and constrained
   access technologies, since they impose special requirements on the
   management of constrained networks.  Fixed-line networks (e.g.,
   power line communications), on the other hand, are not discussed
   here, since they tend to be quite static and do not typically
   impose special requirements on the management of the network.

2.1.  Constrained Access Technologies

   Due to resource restrictions, embedded devices deployed as sensors
   and actuators in the various use cases utilize low-power, low
   data-rate wireless access technologies such as IEEE 802.15.4, DECT
   ULE, or Bluetooth Low-Energy (BT-LE) for network connectivity.

   In such scenarios, it is important for the NMS to be aware of the
   restrictions imposed by these access technologies in order to
   efficiently manage the constrained devices.  In particular, such
   low-power, low data-rate access technologies typically have small
   frame sizes.  It is therefore important for the NMS and the
   management protocol of choice to craft packets in a way that avoids
   fragmentation and reassembly, since these can consume valuable
   memory on constrained devices.
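   As a non-normative illustration, a management agent could enforce a
   payload budget so that each management message fits into a single
   link-layer frame.  The Python sketch below assumes the 127-octet
   IEEE 802.15.4 frame size and an estimated 46 octets of combined
   6LoWPAN, UDP, and management-protocol header overhead; the overhead
   figure and the function name are illustrative assumptions only.

      # Hypothetical payload budget: 127-octet IEEE 802.15.4 frame
      # minus an assumed ~46 octets of 6LoWPAN/UDP/management-protocol
      # header overhead (the overhead varies with header compression).
      FRAME_SIZE = 127
      HEADER_OVERHEAD = 46
      PAYLOAD_BUDGET = FRAME_SIZE - HEADER_OVERHEAD

      def chunk_management_payload(payload: bytes,
                                   budget: int = PAYLOAD_BUDGET):
          """Split an encoded management response into chunks that
          each fit into one frame, avoiding 6LoWPAN fragmentation."""
          return [payload[i:i + budget]
                  for i in range(0, len(payload), budget)]

      # A 200-byte report becomes three frame-sized messages instead
      # of one fragmented datagram.
      for seq, chunk in enumerate(chunk_management_payload(b"x" * 200)):
          print("message", seq, "carries", len(chunk), "bytes")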
   Devices using such access technologies might operate via a gateway
   that translates between these access technologies and more
   traditional Internet protocols.  A hierarchical approach to device
   management can be useful in such a situation, wherein the gateway
   device is in charge of the devices connected to it, while the NMS
   conducts management operations only towards the gateway.

2.2.  Cellular Access Technologies

   Machine-to-machine (M2M) services are increasingly provided by
   mobile service providers as numerous devices, home appliances,
   utility meters, cars, video surveillance cameras, and health
   monitors are connected with mobile broadband technologies.
   Different applications, e.g., in a home appliance or in-car
   network, use Bluetooth, Wi-Fi, or ZigBee locally and connect to a
   cellular module acting as a gateway between the constrained
   environment and the mobile cellular network.

   Such a gateway might provide different options for the connectivity
   of mobile networks and constrained devices:

   o  a smart phone with 3G/4G and WLAN radio might use BT-LE to
      connect to the devices in a home area network,

   o  a femtocell might be combined with home gateway functionality,
      acting as a low-power cellular base station that connects smart
      devices to the application server of a mobile service provider,

   o  an embedded cellular module with LTE radio might connect the
      devices in a car network with the server running the telematics
      service,

   o  an M2M gateway connected to the mobile operator network might
      support diverse IoT connectivity technologies, including ZigBee
      and CoAP over 6LoWPAN over IEEE 802.15.4.

   Common to all scenarios above is that they are embedded in a
   service and connected to a network provided by a mobile service
   provider.  Usually, a hierarchical deployment and management
   topology is in place, where different parts of the network are
   managed by different management entities and the number of devices
   to manage is high (e.g., many thousands).  In general, the network
   comprises devices of many types and sizes, matching different
   device classes.  As such, the managing entity needs to be prepared
   to manage devices with diverse capabilities using different
   communication or management protocols.  In case the devices are
   directly connected to a gateway, they are most likely managed by a
   management entity integrated with the gateway, which is itself part
   of the NMS run by the mobile operator.  Smart phones or embedded
   modules connected to a gateway might themselves be in charge of
   managing the devices on their level.  The initial and subsequent
   configuration of such a device is mainly based on self-
   configuration and is triggered by the device itself.

   The gateway might be in charge of filtering and aggregating the
   data received from a device, since the information sent by a device
   might be mostly redundant.
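   A minimal sketch of this filtering role, in Python, is shown below.
   The Gateway class, its method names, and the 0.5 change threshold
   are all hypothetical; the point is only that the gateway suppresses
   mostly-redundant readings locally and forwards significant changes
   to the NMS.

      class Gateway:
          """Sketch of a gateway that manages its attached constrained
          devices and exposes an aggregated view to the NMS."""

          def __init__(self):
              self.last_reported = {}  # device_id -> last value sent

          def on_device_report(self, device_id, value, threshold=0.5):
              """Forward a reading to the NMS only if it differs
              enough from the previously reported one."""
              previous = self.last_reported.get(device_id)
              if previous is None or abs(value - previous) >= threshold:
                  self.last_reported[device_id] = value
                  return ("forward", device_id, value)  # to the NMS
              return ("suppress", device_id, value)     # stays local

      gw = Gateway()
      print(gw.on_device_report("meter-1", 21.0))  # forwarded (first)
      print(gw.on_device_report("meter-1", 21.2))  # suppressed
      print(gw.on_device_report("meter-1", 22.0))  # forwarded (changed)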
3.  Device Lifecycle

   Since constrained devices deployed in a network might go through
   multiple phases in their lifetime, different managers of networks
   and/or devices may exist during different parts of a device's
   lifetime.  An in-depth discussion of possible device lifecycles can
   be found in [IOT-SEC].

3.1.  Manufacturing and Initial Testing

   Typically, the lifecycle of a device begins at the manufacturing
   stage.  During this phase, the manufacturer of the device is
   responsible for its management and configuration.  It is also
   possible that a certain use case utilizes multiple types of
   constrained devices (e.g., temperature sensors, lighting
   controllers, etc.), which could be manufactured by different
   entities.  As such, during the manufacturing stage, different
   managers can exist for different devices.  Similarly, during the
   initial testing phase, where device quality assurance tasks might
   be performed, the manufacturer remains responsible for the
   management of the devices and of any networks that might comprise
   them.

3.2.  Installation and Configuration

   The responsibility for managing the devices must be transferred to
   the installer during the installation phase.  Procedures must exist
   for transferring management responsibility between the manufacturer
   and the installer.  The installer may be the customer or an
   intermediary contracted to set up the devices and their networks.
   It is important that the NMS utilized allows devices originating
   from different vendors to be managed, ensuring interoperability
   between them as well as the configuration of trust relationships
   between them.

   It is possible that the installation and configuration
   responsibilities lie with different entities.  For example, the
   installer of a device might only be responsible for cabling a
   network, physically installing the devices, and ensuring initial
   network connectivity between them (e.g., configuring IP addresses).
   Following such an installation, the customer or a sub-contractor
   might actually configure the operation of the device.  As such,
   during installation and configuration, multiple parties might be
   responsible for managing a device, and appropriate methods must be
   available to ensure that this management responsibility is
   transferred suitably.

3.3.  Operation and Maintenance

   At the outset of the operation phase, the operational
   responsibility for a device and network should be passed on to the
   customer.  The customer might, however, contract the maintenance of
   the devices and network to a sub-contractor.  In this case, the NMS
   and management protocol should allow different levels of access to
   the devices to be configured.  Since different maintenance vendors
   might be used for devices that perform different functions (e.g.,
   HVAC, lighting, etc.), it should also be possible to restrict
   management access to devices based on the currently responsible
   manager.

3.4.  Recommissioning and Decommissioning

   The owner of a device might choose to replace, repurpose, or even
   decommission it.  In each of these cases, either the customer or
   the contracted maintenance agency must ensure that appropriate
   steps are taken to meet the end goal.

   In case a device needs to be replaced, the manager of the network
   (the customer or the responsible contractor) must detach the device
   from the network, remove all appropriate configuration, and discard
   the device.  A new device must then be configured to replace it.
   The NMS should allow for transferring the configuration from, and
   replacing, an existing device.  The management responsibility of
   the operation/maintenance manager ends once the device is removed
   from the network.  During the installation of the replacement
   device, the same responsibilities apply as during the Installation
   and Configuration phases.

   The device being replaced may not yet have reached its end-of-life,
   and as such, instead of being discarded it may be installed in a
   new location.  In this case, the management responsibilities rest
   once again with the entities responsible for the Installation and
   Configuration phases at the new location.

   If a device is repurposed, then it is possible that the management
   responsibility for this device changes as well.  For example, a
   device might be moved from one building to another.  In this case,
   the managers responsible for devices and networks in each building
   could be different.  As such, the NMS must not only allow for
   changing the configuration but also for transferring management
   responsibilities.

   In case a device is decommissioned, the management responsibility
   typically ends at that point.
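   The hand-overs described in this section can be modeled as a simple
   state machine.  The Python sketch below encodes one possible
   mapping of lifecycle phases to responsible parties; the phase
   names, the transition table, and the transfer() helper are
   illustrative assumptions, not part of any standard.

      # Illustrative lifecycle phases (Sections 3.1-3.4) and the party
      # responsible in each; the transition table is an assumption.
      RESPONSIBLE = {
          "manufacturing":   "manufacturer",
          "initial_testing": "manufacturer",
          "installation":    "installer",
          "configuration":   "customer_or_subcontractor",
          "operation":       "customer",
          "maintenance":     "maintenance_contractor",
          "decommissioned":  None,
      }

      ALLOWED = {
          "manufacturing":   ["initial_testing"],
          "initial_testing": ["installation"],
          "installation":    ["configuration"],
          "configuration":   ["operation"],
          # "installation" from "operation" models repurposing or
          # reinstallation at a new location.
          "operation":       ["maintenance", "decommissioned",
                              "installation"],
          "maintenance":     ["operation", "decommissioned"],
      }

      def transfer(device, new_phase):
          """Move a device to a new lifecycle phase, recording the
          hand-over of management responsibility."""
          if new_phase not in ALLOWED.get(device["phase"], []):
              raise ValueError("illegal transition %s -> %s"
                               % (device["phase"], new_phase))
          device["phase"] = new_phase
          device["manager"] = RESPONSIBLE[new_phase]
          return device

      dev = {"id": "sensor-42", "phase": "manufacturing",
             "manager": "manufacturer"}
      transfer(dev, "initial_testing")
      transfer(dev, "installation")  # responsibility -> installer
      print(dev)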
4.  Use Cases

4.1.  Environmental Monitoring

   Environmental monitoring applications are characterized by the
   deployment of a number of sensors to monitor emissions, water
   quality, or even the movements and habits of wildlife.  Other
   applications in this category include earthquake or tsunami early-
   warning systems.  The sensors often span a large geographic area,
   they can be mobile, and they are often difficult to replace.
   Furthermore, the sensors are usually not protected against
   tampering.

   Management of environmental monitoring applications is largely
   concerned with monitoring whether the system is still functional
   and with the roll-out of new constrained devices in case the system
   loses too much of its structure.  The constrained devices
   themselves need to be able to establish connectivity (auto-
   configuration), and they need to be able to deal with events such
   as losing neighbors or being moved to other locations.

   Management responsibility typically rests with the organization
   running the environmental monitoring application.  Since these
   monitoring applications must be designed to tolerate a number of
   failures, the time scale for detecting and recording failures is,
   for some of these applications, likely measured in hours, and
   repairs might easily take days.  In fact, in some scenarios it
   might be more cost- and time-effective not to repair such devices
   at all.  However, for certain environmental monitoring
   applications, much tighter time scales may exist and might be
   enforced by regulations (e.g., monitoring of nuclear radiation).

   Since many applications of environmental monitoring sensors are
   likely to be in areas that are important to safety (flood
   monitoring, nuclear radiation monitoring, etc.), it is important
   for management protocols and network management systems to ensure
   appropriate security protections.  These protections include not
   only access control and the integrity and availability of data,
   but also appropriate mechanisms that can deal with situations that
   might be categorized as emergencies, or with cases where tampering
   with sensors or data is detected.

4.2.  Infrastructure Monitoring

   Infrastructure monitoring is concerned with the monitoring of
   infrastructures such as bridges, railway tracks, or (offshore)
   windmills.  The primary goal is usually to detect any events or
   changes of the structural conditions that can impact the risk and
   safety of the infrastructure being monitored.  A secondary goal is
   to schedule repair and maintenance activities in a cost-effective
   manner.

   The infrastructure to monitor might be in a factory or spread over
   a wider area that is difficult to access.  As such, the network in
   use might be based on a combination of fixed and wireless
   technologies that use robust networking equipment and support
   reliable communication via application-layer transactions.  It is
   likely that constrained devices in such a network are mainly C2
   devices [RFC7228] and have to be controlled centrally by an
   application running on a server.  In case such a distributed
   network is widely spread, the wireless devices might use diverse
   long-distance wireless technologies such as WiMAX or 3G/LTE.  In
   cases where an in-building network is involved, the network can be
   based on Ethernet or wireless technologies suitable for in-building
   usage.

   The management of infrastructure monitoring applications is
   primarily concerned with monitoring the functioning of the system.
   Infrastructure monitoring devices are typically rolled out and
   installed by dedicated experts, and changes are rare since the
   infrastructure itself changes rarely.  However, monitoring devices
   are often deployed in unsupervised environments, and hence special
   attention must be given to protecting the devices from being
   modified.

   Management responsibility typically rests with the organization
   owning the infrastructure or responsible for its operation.  The
   time scale for detecting and recording failures is likely measured
   in hours, and repairs might easily take days.  However, certain
   events (e.g., natural disasters) may require that status
   information be obtained much more quickly and that replacements for
   failed sensors be rolled out quickly (or that redundant sensors be
   activated quickly).  In case the devices are difficult to access, a
   self-healing feature on the device might become necessary.
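   On an hours time scale, failure detection can be as simple as
   tracking the last report received from each sensor.  The following
   Python sketch illustrates this; the 6-hour reporting interval and
   the two-intervals-missed policy are assumptions chosen for the
   example, not values from this document.

      import time

      # Assumed policy: a sensor that has not reported for two
      # reporting intervals is flagged; the interval is illustrative.
      REPORT_INTERVAL = 6 * 3600
      last_seen = {}  # sensor_id -> timestamp of last report

      def on_report(sensor_id, now=None):
          last_seen[sensor_id] = now if now is not None else time.time()

      def failed_sensors(now=None):
          """Return sensors whose last report is older than two
          intervals; on an hours time scale this check can run from a
          simple periodic job."""
          now = now if now is not None else time.time()
          return [sid for sid, ts in last_seen.items()
                  if now - ts > 2 * REPORT_INTERVAL]

      on_report("river-gauge-7", now=0)
      print(failed_sensors(now=13 * 3600))  # flagged after 13 hours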
   Since infrastructure monitoring is closely related to ensuring
   safety, management protocols and systems must provide appropriate
   security protections to ensure the confidentiality, integrity, and
   availability of data.

4.3.  Industrial Applications

   Industrial applications and smart manufacturing refer to tasks such
   as networked control and monitoring of manufacturing equipment,
   asset and situation management, or manufacturing process control.
   For the management of a factory, it is becoming essential to
   implement smart capabilities.  From an engineering standpoint,
   industrial applications are intelligent systems enabling rapid
   manufacturing of new products, dynamic response to product demands,
   and real-time optimization of manufacturing production and supply-
   chain networks.  Potential industrial applications (e.g., for smart
   factories and smart manufacturing) are:

   o  Digital control systems with embedded, automated process
      controls, operator tools, and service information systems
      optimizing plant operations and safety.

   o  Asset management using predictive maintenance tools, statistical
      evaluation, and measurements maximizing plant reliability.

   o  Smart sensors detecting anomalies in order to avoid abnormal or
      catastrophic events.

   o  Smart systems integrated within the industrial energy management
      system and externally with the smart grid, enabling real-time
      energy optimization.

   Management of industrial applications and smart manufacturing may
   in some situations involve building automation tasks such as
   control of energy, HVAC (heating, ventilation, and air
   conditioning), lighting, or access control.  Interacting with
   management systems from other application areas might be important
   in some cases (e.g., environmental monitoring for electric energy
   production, energy management for dynamically scaling
   manufacturing, or vehicular networks for mobile asset tracking).
   Management of constrained devices and networks may not only refer
   to the management of their network connectivity.  Since the
   capabilities of constrained devices are limited, it is quite
   possible that a management system would even be required to
   configure, monitor, and operate the primary functions for which a
   constrained device is utilized, besides managing its network
   connectivity.

   Sensor networks are an essential technology for smart
   manufacturing.  Measurements, automated controls, plant
   optimization, health and safety management, and other functions are
   provided by a large number of networked sensors.  Data
   interoperability and seamless exchange of product, process, and
   project data are enabled through interoperable data systems used by
   collaborating divisions or business systems.  Intelligent
   automation and learning systems are vital to smart manufacturing
   but must be effectively integrated with the decision environment.
   The NMS utilized must ensure timely delivery of sensor data to the
   control unit so that it may take appropriate decisions.  Similarly,
   the relaying of commands must be monitored and managed to ensure
   optimal functioning.  Wireless sensor networks (WSNs) have been
   developed for machinery Condition-based Maintenance (CBM), as they
   offer significant cost savings and enable new functionalities.
   Inaccessible locations, rotating machinery, hazardous areas, and
   mobile assets can be reached with wireless sensors.
   Today, WSNs can provide wireless link reliability, real-time
   capabilities, and quality of service, and they enable industrial
   and related wireless sense-and-control applications.

   Management of industrial and factory applications is largely
   focused on monitoring whether the system is still functional, on
   real-time continuous performance monitoring, and on optimization as
   necessary.  The factory network might be part of a campus network
   or connected to the Internet.  The constrained devices in such a
   network need to be able to establish configuration themselves
   (auto-configuration) and might need to deal with error conditions
   as much as possible locally.  Access control has to be provided
   with multi-level administrative access and security.  Support and
   diagnostics can be provided through remote monitoring access
   centralized outside of the factory.

   Factory automation tasks require that continuous monitoring be used
   to optimize production.  Groups of manufacturing and monitoring
   devices could be defined to establish relationships between them.
   To ensure timely optimization of processes, commands from the NMS
   must arrive at all destinations within an appropriate duration.
   This duration could change based on the manufacturing task being
   performed.  Installation and operation of factory networks have
   different requirements.  During the installation phase, many
   networks, usually distributed along different parts of the factory
   or assembly line, co-exist without a connection to a common
   backbone.  A specialized installation tool is typically used to
   configure the functions of different types of devices, in different
   factory locations, in a secure manner.  At the end of the
   installation phase, interoperability between these stand-alone
   networks and devices must be enabled.  During the operation phase,
   these stand-alone networks are connected to a common backbone so
   that they may retrieve control information from, and send commands
   to, appropriate devices.

   Management responsibility typically rests with the organization
   running the industrial application.  Since the monitoring
   applications must handle a potentially large number of failures,
   the time scale for detecting and recording failures is, for some of
   these applications, likely measured in minutes.  However, for
   certain industrial applications much tighter time scales may exist,
   e.g., real-time constraints, which might be enforced by the
   manufacturing process or the use of critical materials.  Management
   protocols and NMSs must ensure appropriate access control, since
   different users of industrial control systems will have varying
   levels of permissions.  For example, while supervisors might be
   allowed to change production parameters, they should not be allowed
   to modify the functional configuration of devices the way a
   technician is.  It is also important to ensure the integrity and
   availability of data, since malfunctions can potentially become
   safety issues.  This also implies that management systems must be
   able to react to situations that may pose dangers to worker safety.
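   One way to realize such role-dependent restrictions is a simple
   permission matrix, sketched below in Python.  The role names and
   permission strings are invented for this example and do not come
   from any industrial-control standard.

      # Illustrative permission matrix for an industrial NMS.
      PERMISSIONS = {
          "operator":   {"read_status"},
          "supervisor": {"read_status", "set_production_parameter"},
          "technician": {"read_status", "set_production_parameter",
                         "modify_device_configuration"},
      }

      def authorize(role, operation):
          """Reject management operations a role is not entitled to."""
          if operation not in PERMISSIONS.get(role, set()):
              raise PermissionError("%s may not perform %s"
                                    % (role, operation))
          return True

      authorize("supervisor", "set_production_parameter")  # allowed
      try:
          authorize("supervisor", "modify_device_configuration")
      except PermissionError as err:
          print(err)  # denied: reserved for technicians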
4.4.  Energy Management

   The EMAN working group developed an energy management framework
   [RFC7326] for devices and device components within, or connected
   to, communication networks.  That framework observes that one of
   the challenges of energy management is that a power distribution
   network is responsible for the supply of energy to various devices
   and components, while a separate communication network is typically
   used to monitor and control the power distribution network.
   Devices in the context of energy management can be monitored for
   parameters like power, energy, demand, and power quality.  If a
   device contains batteries, they can also be monitored and managed.

   Energy devices differ in complexity and may include basic sensors
   or switches, specialized electrical meters, power distribution
   units (PDUs), subsystems inside network devices (routers, network
   switches), or home and industrial appliances.  The operators of an
   energy management system are either utility providers or customers
   that aim to control and reduce energy consumption and the
   associated costs.  The topology in use differs, and deployments can
   cover areas from small surfaces (individual homes) to large
   geographical areas.  The EMAN requirements document [RFC6988]
   discusses the requirements for energy management concerning
   monitoring and control functions.

   It is assumed that energy management will apply to a large range of
   devices of all classes and network topologies.  Specific resource
   monitoring, like battery utilization and availability, may be
   specific to devices with lower physical resources (device classes
   C0 or C1 [RFC7228]).

   Energy management is especially relevant to the Smart Grid.  A
   Smart Grid is an electrical grid that uses data networks to gather
   and act on energy and power-related information in an automated
   fashion, with the goal of improving the efficiency, reliability,
   economics, and sustainability of the production and distribution of
   electricity.

   Smart Metering is a good example of a Smart Grid based energy
   management application.  Different types of, possibly wireless,
   small meters together produce a large amount of data, which is
   collected by a central entity and processed by an application
   server, which may be located within the customer's residence or
   off-site in a data center.  The communication infrastructure can be
   provided by a mobile network operator, as the meters in urban areas
   will most likely have a cellular or WiMAX radio.  In case the
   application server is located within the residence, such meters are
   more likely to use Wi-Fi protocols to interconnect with an existing
   network.

   An Advanced Metering Infrastructure (AMI) network is another
   example of the Smart Grid, one that enables an electric utility to
   retrieve frequent electric usage data from each electric meter
   installed at a customer's home or business.  Unlike Smart Metering,
   where the customer or their agents install appliance-level meters,
   an AMI infrastructure is typically managed by the utility providers
   and could also include other distribution automation devices like
   transformers and reclosers.  Meters in AMI networks typically
   contain constrained devices that connect to mesh networks with a
   low-bandwidth radio.  Usage data and outage notifications can be
   sent by these meters to the utility's headend systems via
   aggregation points of higher-end router devices that bridge the
   constrained network to a less constrained network via cellular,
   WiMAX, or Ethernet.
   Unlike meters, these higher-end devices might be installed on
   utility poles owned and operated by a separate entity.

   It thereby becomes important for a management application not only
   to be able to work with diverse types of devices, but also to work
   over multiple links that might be operated and managed by separate
   entities, each having divergent policies for their own devices and
   network segments.  During management operations, such as firmware
   updates, it is important that the management system performs
   robustly in order to avoid accidental outages of critical power
   systems that could be part of AMI networks.  In fact, since AMI
   networks must also report on outages, the management system might
   have to manage the energy properties of battery-operated AMI
   devices themselves as well.

   A management system for home-based Smart Metering solutions is
   likely to have devices laid out in a simple topology.  However, AMI
   network installations could have thousands of nodes per router,
   i.e., per higher-end device, which organize themselves in an ad-hoc
   manner.  As such, a management system for AMI networks will need to
   discover and operate over complex topologies as well.  In some
   situations, the management system might also have to set up and
   manage the topology of nodes, especially of critical routers.
   Encryption key management and sharing in both types of networks is
   also likely to be important for providing confidentiality for all
   data traffic.  In AMI networks, the key may be obtained by a meter
   only after an end-to-end authentication process based on
   certificates.  Smart Metering solutions could adopt a similar
   approach, or security may be implied by the encrypted Wi-Fi
   networks they become part of.

   The management of such a network requires end-to-end management of,
   and information exchange through, different types of networks.
   However, as of today there is no integrated energy management
   approach and no common information model available.  Specific
   energy management applications or network islands use their own
   management mechanisms.
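   The robustness requirement for firmware updates mentioned above
   suggests a staged rollout that stops before a defective image can
   reach the whole network.  The Python sketch below shows the idea;
   the stage fractions and the install()/healthy() callbacks are
   placeholders for transport- and vendor-specific operations, and the
   whole procedure is an assumption for illustration, not a documented
   AMI mechanism.

      def staged_rollout(meters, install, healthy,
                         stages=(0.01, 0.10, 1.0)):
          """Roll a firmware image out in growing stages and stop as
          soon as any updated meter fails its health check, so a bad
          image never reaches the whole AMI network."""
          done = set()
          for fraction in stages:
              target = meters[:max(1, int(len(meters) * fraction))]
              for m in target:
                  if m not in done:
                      install(m)      # vendor-specific update
                      done.add(m)
              if not all(healthy(m) for m in done):
                  return ("aborted", sorted(done))  # roll back these
          return ("complete", sorted(done))

      meters = ["meter-%d" % i for i in range(1000)]
      status, updated = staged_rollout(meters,
                                       install=lambda m: None,
                                       healthy=lambda m: True)
      print(status, len(updated), "meters updated")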
4.5.  Medical Applications

   Constrained devices can be seen as an enabling technology for
   advanced and possibly remote health monitoring and emergency
   notification systems, ranging from blood pressure and heart rate
   monitors to advanced devices capable of monitoring implanted
   technologies, such as pacemakers or advanced hearing aids.  Medical
   sensors may not only be attached to human bodies; they might also
   exist in the infrastructure used by humans, such as bathrooms or
   kitchens.  Medical applications will also be used to ensure that
   treatments are being applied properly, and they might guide people
   who are losing orientation.  Fitness and wellness applications,
   such as connected scales or wearable heart monitors, encourage
   consumers to exercise and empower self-monitoring of key fitness
   indicators.  Different applications use Bluetooth, Wi-Fi, or ZigBee
   connections to access the patient's smartphone or home cellular
   connection in order to access the Internet.

   Constrained devices that are part of medical applications are
   managed either by the users of those devices or by an organization
   providing medical (monitoring) services for physicians.  In the
   first case, management must be automatic and/or easy for average
   people to install and set up.  In the second case, it can be
   expected that the devices are controlled by specially trained
   people.  In both cases, however, it is crucial to protect the
   safety and privacy of the people to whom medical devices are
   attached.  Security precautions protecting access (authentication,
   encryption, integrity protection, etc.) to such devices may be
   critical to safeguarding the individual.  The level of access
   granted to different users may also need to be regulated.  For
   example, an authorized surgeon or doctor must be allowed to
   configure all necessary options on the devices; however, a nurse or
   technician may only be allowed to retrieve data that can assist in
   diagnosis.  Even though the data collected by a heart beat monitor
   might be protected, the pure fact that someone carries such a
   device may need protection.  As such, certain medical appliances
   may not want to participate in discovery and self-configuration
   protocols in order to remain invisible.

   Many medical devices are likely to be used (and relied upon) to
   provide data to physicians in critical situations, since the
   biggest market is likely elderly and handicapped people.  Timely
   delivery of data can be quite important in certain applications,
   like patient mobility monitoring in old-age homes.  Data must reach
   the physician and/or emergency services within specified time
   limits in order to be useful.  As such, fault detection of the
   communication network or of the constrained devices becomes a
   crucial function of the management system, one that must be carried
   out with high reliability and, depending on the medical appliance
   and its application, within seconds.
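   A fault-detection check for such deadlines can be sketched as
   follows in Python.  The 5-second deadline is an arbitrary
   placeholder; as noted above, the real limit depends on the medical
   appliance and its application.

      import time

      DEADLINE = 5.0  # seconds; assumed limit for this sketch only

      def check_delivery(sent_at, received_at=None, now=None):
          """Classify a vital-sign report as delivered in time,
          delivered late, or missing entirely (a fault the NMS must
          escalate immediately)."""
          now = now if now is not None else time.time()
          if received_at is None:
              return "fault" if now - sent_at > DEADLINE else "pending"
          return "ok" if received_at - sent_at <= DEADLINE else "late"

      print(check_delivery(sent_at=0.0, received_at=3.2))  # 'ok'
      print(check_delivery(sent_at=0.0, received_at=8.0))  # 'late'
      print(check_delivery(sent_at=0.0, now=10.0))         # 'fault'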
4.6.  Building Automation

   Building automation comprises the distributed systems designed and
   deployed to monitor and control the mechanical, electrical, and
   electronic systems inside buildings with various destinations
   (e.g., public and private, industrial, institutional, or
   residential).  Advanced Building Automation Systems (BAS) may be
   deployed that concentrate the various functions of safety,
   environmental control, occupancy, and security.  Increasingly, the
   various functional systems are connected to the same communication
   infrastructure (possibly Internet Protocol based), which may
   involve wired or wireless communication networks inside the
   building.

   Building automation requires the deployment of a large number (10
   to 100,000) of sensors that monitor the status of devices and
   parameters inside the building, and of controllers with different
   specialized functionality for areas within the building or the
   totality of the building.  Inter-node distances between neighboring
   nodes vary between 1 and 20 meters.  The NMS must, as a result, be
   able to manage and monitor a large number of devices, which may be
   organized in multi-hop mesh networks.  The distances between the
   nodes, and the use of constrained protocols, mean that networks of
   nodes might be segmented.  The management of such network segments
   and of the nodes in these segments should be possible.  Contrary to
   home automation, in building management the devices are expected to
   be managed assets, known to a set of commissioning tools and a data
   storage, such that every connected device has a known origin.  This
   requires the management system to be able to discover devices on
   the network and to verify that the discovered devices match the
   expected list of devices.  Management here includes verifying the
   presence of the expected devices and detecting the presence of
   unwanted devices.

   Examples of functions performed by controllers in building
   automation are regulating the quality, humidity, and temperature of
   the air inside the building, as well as lighting.  Other systems
   may report the status of machinery inside the building, like
   elevators, or inside rooms, like projectors in meeting rooms.
   Security cameras and sensors may be deployed and operated on
   separate dedicated infrastructures connected to a common backbone.
   The deployment area of a BAS is typically inside one building (or
   part of it) or several buildings geographically grouped in a
   campus.  A building network can be composed of network segments,
   where a network segment covers a floor, an area on a floor, or a
   given functionality (e.g., security cameras).  It is possible that
   the management tasks for some types of devices are separated from
   others (e.g., security cameras might operate on, and be managed
   via, a separate network from the HVAC in a building).

   Some of the sensors in Building Automation Systems (for example,
   fire alarms or security systems) register, record, and transfer
   critical alarm information and therefore must be resilient to
   events like loss of power or security attacks.  A management system
   must be able to deal with unintentional segmentation of networks
   due to power loss or channel unavailability.  It must also be able
   to detect security events.  Due to the specific operating
   conditions required of certain devices, there might be a need to
   certify components and subsystems operating in such constrained
   conditions based on specific requirements.  Also, in some
   environments the malfunctioning of a control system (like
   temperature control) needs to be reported in the shortest possible
   time.  Complex control systems can misbehave, and their critical
   status reporting and safety algorithms need to be basic and robust
   and must perform even in critical conditions.  Providing this
   monitoring, configuration, and notification service is an important
   task of the management system used in building automation.

   Building automation solutions are in some cases deployed in newly
   designed buildings; in other cases they might be deployed over
   existing infrastructures.  In the first case, there is a broader
   range of possible solutions that can be planned for the
   infrastructure of the building.  In the second case, the solution
   needs to be deployed over an existing infrastructure, taking into
   account factors like existing wiring, distance limitations, and the
   propagation of radio signals over walls and floors, thereby making
   deployment difficult.  As a result, some of the existing wireless
   solutions (e.g., IEEE 802.11 or IEEE 802.15) may be deployed.  In
   mission-critical or security-sensitive environments, and in cases
   where link failures happen often, topologies that allow for
   reconfiguration of the network and connection continuity may be
   required.  Some of the sensors deployed in building automation may
   be very simple constrained devices for which class C0 or C1
   [RFC7228] may be assumed.

   For lighting applications, groups of lights must be defined and
   managed.  Commands to a group of lights must arrive at all
   destinations within 200 ms.
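   A management system might verify this bound by timing per-member
   acknowledgements of a group command, as in the following Python
   sketch.  The light names and the acknowledgement handling are
   illustrative assumptions; a real deployment would typically
   multicast the command and retry or flag unacknowledged members.

      GROUP_DEADLINE = 0.200  # seconds, from the 200 ms bound above

      def check_group_ack(send_time, acks):
          """Given acknowledgement times from every light in a group,
          report which members missed the 200 ms delivery bound."""
          late = {light: t - send_time for light, t in acks.items()
                  if t - send_time > GROUP_DEADLINE}
          return ("ok", {}) if not late else ("late", late)

      acks = {"light-1": 0.050, "light-2": 0.120, "light-3": 0.310}
      print(check_group_ack(0.0, acks))  # light-3 missed the deadline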
   The installation and operation of a building network have different
   requirements.  During installation, many stand-alone networks of a
   few to 100 nodes co-exist without a connection to the backbone.
   During this phase, the nodes are identified with a network
   identifier related to their physical location.  Devices are
   accessed from an installation tool to connect them to the network
   in a secure fashion.  During installation, the setting of
   parameters to common values may be required to enable
   interoperability.  During operation, the networks are connected to
   the backbone while maintaining the relation between network
   identifier and physical location.  Network parameters like address
   and name are stored in the DNS.  The names can assist in
   determining the physical location of a device.

   It is also important for a building automation NMS to take safety
   and security into account.  Ensuring the privacy and
   confidentiality of data, such that unauthorized parties do not get
   access to it, is likely to be important, since users' individual
   behaviors could potentially be inferred from their settings.
   Appropriate security considerations for authorization and access
   control to the NMS are also important, since different users are
   likely to have varied levels of operational permissions in the
   system.  For example, while end users should be able to control
   lighting systems, HVAC systems, etc., only qualified technicians
   should be able to configure parameters that change the fundamental
   operation of a device.  It is also important for devices and the
   NMS to be able to detect and report any tampering, since tampering
   could lead to user safety concerns, e.g., if sensors controlling
   air quality are tampered with such that the level of carbon
   monoxide becomes life threatening.  This implies that an NMS should
   also be able to deal with, and appropriately prioritize, situations
   that might potentially lead to safety concerns.

4.7.  Home Automation

   Home automation includes the control of lighting, heating,
   ventilation, air conditioning, appliances, entertainment, and home
   security devices to improve convenience, comfort, energy
   efficiency, and safety.  It can be seen as a residential extension
   of building automation.  However, unlike a building automation
   system, the infrastructure in a home is operated in a considerably
   more ad-hoc manner.  While in some installations it is likely that
   no centralized management system akin to a Building Automation
   System (BAS) is available, in other situations outsourced and
   cloud-based systems responsible for managing devices in the home
   might be used.

   Home automation networks need a certain amount of configuration
   (associating switches or sensors to actuators) that is provided
   either by electricians deploying home automation solutions, by
   third-party home automation service providers (e.g., small
   specialized companies or home automation device manufacturers), or
   by residents using the application user interface provided by home
   automation devices to configure (parts of) the home automation
   solution.  Similarly, failures may be reported via suitable
   interfaces to residents, or they might be recorded and made
   available to service providers in charge of the maintenance of the
   home automation infrastructure.
   The management responsibility lies either with the residents, or it
   may be outsourced to electricians and/or third parties providing
   management of home automation solutions as a service.  A varying
   combination of electricians, service providers, or residents may be
   responsible for different aspects of managing the infrastructure.
   The time scale for failure detection and resolution is, in many
   cases, likely counted in hours to days.

4.8.  Transport Applications

   "Transport applications" is a generic term for the integrated
   application of communications, control, and information processing
   in a transportation system.  "Transport telematics" and "vehicle
   telematics" are terms used for the group of technologies that
   support transportation systems.  Transport applications running on
   such a transportation system cover all modes of transport and
   consider all elements of the transportation system, i.e., the
   vehicle, the infrastructure, and the driver or user, interacting
   together dynamically.  Examples of transport applications are
   inter- and intra-vehicular communication, smart traffic control,
   smart parking, electronic toll collection systems, logistics and
   fleet management, vehicle control, and safety and road assistance.

   As a distributed system, transport applications require end-to-end
   management of different types of networks.  It is likely that
   constrained devices in a network (e.g., a moving in-car network)
   have to be controlled by an application running on an application
   server in the network of a service provider.  Such a highly
   distributed network, including cellular devices on vehicles, is
   assumed to include a wireless access network using diverse long-
   distance wireless technologies such as WiMAX, 3G/LTE, or satellite
   communication, e.g., based on an embedded hardware module.  As a
   result, the management of constrained devices in the transport
   system might need to be planned top-down and might need to use data
   models mandated by, and defined on, the application layer.  The
   assumed device classes in use are mainly C2 devices [RFC7228].  In
   cases where an in-vehicle network is involved, C1 devices [RFC7228]
   with limited capabilities and a short-distance constrained radio
   network, e.g., IEEE 802.15.4, might additionally be used.

   All transport applications require an IT infrastructure to run on
   top of.  In public transport scenarios involving trains, buses, or
   metros, the network infrastructure might be provided, maintained,
   and operated by third parties like mobile network or satellite
   network operators.  However, the management responsibility for the
   transport application typically rests with the organization running
   the transport application (in the public transport scenario, this
   would typically be the public transport operator).  Different
   aspects of the infrastructure might also be managed by different
   entities.  For example, the in-car devices are likely to be
   installed and managed by the car manufacturer, while the public
   works department might be responsible for the on-road vehicular
   communication infrastructure used by these devices.  The back-end
   infrastructure is also likely to be maintained by third-party
   operators.  As such, the NMS must be able to deal with different
   network segments, each being operated and controlled by a separate
   entity, and must enable appropriate access control and security as
   well.
   Depending on the type of application domain (vehicular or
   stationary) and the service being provided, it is important for the
   NMS to be able to function with different architectures, since
   different manufacturers might have their own proprietary systems
   relying on a specific management topology option, as described in
   [COM-REQ].  Moreover, the constituents of the network can be either
   private, belonging to individuals or private companies, or owned by
   public institutions, leading to different legal and organizational
   requirements.  Across the entire infrastructure, a variety of
   constrained devices are likely to be used and must be individually
   managed.  The NMS must be able either to work directly with
   different types of devices or to interoperate with multiple
   different systems.

   The challenges in the management of vehicles in a mobile transport
   application are manifold.  The up-to-date position of each node in
   the network should be reported to the corresponding management
   entities, since the nodes could be moving within, or roaming
   between, different networks.  Secondly, a variety of
   troubleshooting information, including sensitive location
   information, needs to be reported to the management system in order
   to provide accurate service to the customer.  Management systems
   dealing with mobile nodes could possibly exploit specific patterns
   in the mobility of the nodes.  These patterns emerge due to
   repetitive vehicle usage in scenarios like people commuting to
   work, logistics supply vehicles transporting shipments between
   warehouses, etc.  The NMS must also be able to handle partitioned
   networks, which arise due to the dynamic nature of traffic,
   resulting in large inter-vehicle gaps in sparsely populated
   scenarios.  Since mobile nodes might roam in remote networks, the
   NMS should be able to provide operating configuration updates
   regardless of node location.

   The constrained devices in a moving transport network might be
   initially configured in a factory, and a reconfiguration might be
   needed only rarely.  New devices might be integrated in an ad-hoc
   manner based on self-management and self-configuration
   capabilities.  Monitoring and data exchange might need to be done
   via a gateway entity connected to the back-end transport
   infrastructure.  The devices and entities in the fixed transport
   infrastructure need to be monitored more frequently and may be able
   to communicate with a higher data rate.  The connectivity of such
   entities does not necessarily need to be wireless.  The time scale
   for detecting and recording failures in a moving transport network
   is likely measured in hours, and repairs might easily take days.
   It is likely that a self-healing feature would be used locally.  On
   the other hand, the time scale for detecting failures in the fixed
   transport application infrastructure (e.g., traffic lights, digital
   signage displays) is likely measured in minutes, so as to avoid
   untoward traffic incidents.  As such, the NMS must be able to deal
   with differing timeliness requirements based on the type of
   devices.
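   One common way to deliver configuration updates to intermittently
   connected nodes is an NMS-side store-and-forward queue, sketched
   below in Python.  The class and parameter names are hypothetical,
   and deliver() stands in for the actual management-protocol send
   operation.

      from collections import defaultdict

      class UpdateQueue:
          """Sketch of an NMS-side queue that holds configuration
          updates for vehicles that are currently unreachable
          (partitioned or roaming) and flushes them when the node
          reappears."""

          def __init__(self):
              self.pending = defaultdict(list)

          def push(self, node_id, update):
              self.pending[node_id].append(update)

          def on_node_reachable(self, node_id, deliver):
              """deliver() stands in for the protocol-specific send."""
              for update in self.pending.pop(node_id, []):
                  deliver(node_id, update)

      q = UpdateQueue()
      q.push("bus-17", {"param": "report-interval", "value": 60})
      q.push("bus-17", {"param": "trust-anchor", "value": "cert-2"})
      q.on_node_reachable("bus-17", deliver=lambda n, u: print(n, u))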
   Since transport applications of constrained devices and networks
   deal with automotive vehicles, malfunctions and misuse can
   potentially lead to safety concerns as well.  As such, besides
   access control, privacy of user data, and timeliness, management
   systems should also be able to detect situations that are
   potentially hazardous to safety.  Some of these situations could be
   mitigated automatically, e.g., traffic lights with incorrect
   timing, but others might require human intervention, e.g., failed
   traffic lights.  The management system should take appropriate
   action in these situations.  Maintaining data confidentiality and
   integrity is also an important security aspect of a management
   system, since tampering (or malfunction) can also lead to
   potentially dangerous situations.

4.9.  Community Network Applications

   Community networks are comprised of constrained routers in a multi-
   hop mesh topology, communicating over lossy, and often wireless,
   channels.  While the routers are mostly non-mobile, the topology
   may be very dynamic because of fluctuations in the link quality of
   the (wireless) channel, caused by, e.g., obstacles or other nearby
   radio transmissions.  Depending on the routers used in the
   community network, the resources of the routers (memory, CPU) may
   be more or less constrained; available resources may range from
   only a few kilobytes of RAM to several megabytes or more, and CPUs
   may be small and embedded, or more powerful general-purpose
   processors.  Examples of such community networks are the FunkFeuer
   network (Vienna, Austria), FreiFunk (Berlin, Germany), Seattle
   Wireless (Seattle, USA), and AWMN (Athens, Greece).  These
   community networks are public and non-regulated, allowing their
   users to connect to each other and, through an uplink to an ISP, to
   the Internet.  No fee, other than the initial purchase of a
   wireless router, is charged for these services.  Applications of
   these community networks can be diverse, e.g., location-based
   services, free Internet access, file sharing between users,
   distributed chat services, social networking, video sharing, etc.

   As an example of a community network, the FunkFeuer network
   comprises several hundred routers, many of which have several radio
   interfaces (with omnidirectional and some directional antennas).
   The routers of the network are small-sized wireless routers, such
   as the Linksys WRT54GL, available in 2011 for less than 50 Euros.
   These routers, with 16 MB of RAM and a 264 MHz CPU, are mounted on
   the rooftops of the users.  When new users want to connect to the
   network, they acquire a wireless router, install the appropriate
   firmware and routing protocol, and mount the router on the rooftop.
   IP addresses for the router are assigned manually from a list of
   addresses (because of the lack of auto-configuration standards for
   mesh networks in the IETF).

   While the routers are non-mobile, fluctuations in link quality
   require an ad hoc routing protocol that allows for quick
   convergence to reflect the effective topology of the network (such
   as NHDP [RFC6130] and OLSRv2 [RFC7181], developed in the MANET WG).
   Usually, no human interaction is required for these protocols, as
   all variable parameters required by the routing protocol are either
   negotiated in the control traffic exchange or are only of local
   importance to each router (i.e., they do not influence
   interoperability).
However, external management and monitoring of an ad hoc routing protocol may be desirable in order to optimize its parameters. Such an optimization may lead to a more stable perceived topology and a lower control traffic overhead, and therefore to a higher delivery success ratio of data packets, a lower end-to-end delay, and less unnecessary bandwidth and energy usage.

Different use cases for the management of community networks are possible:

o  A single network management station, e.g., a border gateway providing connectivity to the Internet, needs to manage or monitor routers in the community network, in order to investigate problems (monitoring) or to improve performance by changing parameters (managing). As the topology of the network is dynamic, constant connectivity of each router towards the management station cannot be guaranteed. Current network management protocols, such as SNMP and NETCONF, may be used (e.g., with management interfaces such as the NHDP-MIB [RFC6779]; a minimal polling sketch follows this list). However, when routers in the community network are constrained, existing protocols may require too many resources in terms of memory and CPU; more importantly, the bandwidth requirements may exceed the available channel capacity in wireless mesh networks. Moreover, management and monitoring may be unfeasible if the connection between the network management station and the routers is frequently interrupted.

o  Distributed network monitoring, in which more than one management station monitors or manages other routers. Because connectivity to a server cannot be guaranteed at all times, a distributed approach may provide higher reliability, at the cost of increased complexity. Currently, no IETF standard exists for distributed monitoring and management.

o  Monitoring and management of a whole network or a group of routers. Monitoring the performance of a community network may require more information than can be acquired from a single router using a network management protocol. Statistics, such as topology changes over time, data throughput along certain routing paths, congestion, etc., are of interest for a group of routers (or the routing domain) as a whole. As of 2014, no IETF standard allows for monitoring or managing whole networks rather than single routers.
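To make the first use case above concrete, the following minimal sketch (in Python, using the pysnmp library) shows a management station walking an NHDP-MIB table on a single mesh router while tolerating the polling failures expected over a lossy channel. The object name 'nhdpInterfaceTable' is illustrative only; consult [RFC6779] for the actual managed objects, and note that the NHDP-MIB module is assumed to be available to the resolver:

   import sys
   from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                             ContextData, ObjectType, ObjectIdentity,
                             nextCmd)

   def poll_nhdp(router, community='public'):
       """Walk one NHDP-MIB table on a mesh router; None if unreachable."""
       rows = []
       for err_ind, err_stat, _idx, var_binds in nextCmd(
               SnmpEngine(),
               CommunityData(community),
               # Short timeout, one retry: polls over a lossy mesh fail often.
               UdpTransportTarget((router, 161), timeout=2, retries=1),
               ContextData(),
               ObjectType(ObjectIdentity('NHDP-MIB', 'nhdpInterfaceTable')),
               lexicographicMode=False):
           if err_ind or err_stat:
               return None                  # treat any error as unreachable
           rows.extend(var_binds)
       return rows

   if __name__ == '__main__':
       rows = poll_nhdp(sys.argv[1])
       if rows is None:
           print('router unreachable (expected occasionally on a lossy mesh)')
       else:
           for vb in rows:
               print(vb.prettyPrint())

Each such poll costs bandwidth on the shared wireless channel, which is exactly why the text above warns that SNMP or NETCONF polling may exceed the available channel capacity on constrained routers.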
4.10. Field Operations

The challenges of configuring and monitoring networks operated in the field by rescue and security agencies differ from those of the other use cases, since the requirements and operating conditions of such networks are quite different.

With advances in technology, field networks have become large and can consist of a variety of equipment types running different protocols and tools, which increases the complexity of these mission-critical networks. In many scenarios, configuration is most likely performed manually. Furthermore, some legacy and even some modern devices do not support IP networking. Most of the protocols and tools in use were developed by vendors and are proprietary, which makes integration more difficult.

The main reason for this disjoint operation scenario is that most equipment is developed with specific task requirements in mind rather than interoperability of the varied equipment types. For example, the operating conditions experienced by high-altitude security equipment are significantly different from those experienced by equipment used in desert conditions. Similarly, search-and-rescue equipment used in fire rescue has different requirements than flood relief equipment. Furthermore, interoperation with telecommunication equipment was often not an expected outcome, and in some scenarios it may not even be desirable.

Currently, field networks operate with a fixed Network Operations Center (NOC) that physically manages the configuration and evaluation of all field devices. Once configured, the devices might be deployed in fixed or mobile scenarios. Any configuration changes required would need to be appropriately encrypted and authenticated to prevent unauthorized access (a minimal authentication sketch appears at the end of this section).

Hierarchical management of devices is a common requirement in such scenarios, since local managers or operators may need to respond to changing conditions within their purview. The level of configuration management available at each level of the hierarchy must also be closely governed.

Since many field operation devices are used in hostile environments, a high failure and disconnection rate should be tolerated by the NMS, which must also be able to deal with multiple gateways and disjoint management protocols.

Multi-national field operations involving search, rescue, and security are becoming increasingly common, requiring interoperation of a diverse set of equipment designed with different operating conditions in mind. Furthermore, different intra- and inter-governmental agencies are likely to have different sets of standards, best practices, rules and regulations, and implementation approaches that may contradict or conflict with each other. The NMS should be able to detect such conflicts and handle them in an acceptable manner, which may require human intervention.
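As a sketch of the authenticated-update requirement noted earlier in this section, the following Python fragment (standard library only; the pre-shared key and its out-of-band distribution are assumptions, and a deployed system would likely use certificate-based signatures plus encryption and replay protection) shows how a field device could verify that a configuration update originates from the NOC before applying it:

   import hmac, hashlib, json

   def seal_update(psk, config):
       """NOC side: serialize a config update, append an HMAC-SHA256 tag."""
       blob = json.dumps(config, sort_keys=True).encode()
       tag = hmac.new(psk, blob, hashlib.sha256).hexdigest()
       return blob, tag

   def verify_update(psk, blob, tag):
       """Device side: apply the update only if the tag verifies."""
       expected = hmac.new(psk, blob, hashlib.sha256).hexdigest()
       if not hmac.compare_digest(expected, tag):
           return None                      # reject unauthenticated update
       return json.loads(blob)

   # Example round trip; 'report-interval' is a hypothetical parameter.
   psk = b'psk-distributed-out-of-band'
   blob, tag = seal_update(psk, {'report-interval': 60})
   assert verify_update(psk, blob, tag) == {'report-interval': 60}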
5. IANA Considerations

This document does not introduce any new code points or namespaces for registration with IANA.

Note to RFC Editor: this section may be removed on publication as an RFC.

6. Security Considerations

This document discusses use cases for the management of networks with constrained devices. The security considerations described throughout the companion document [COM-REQ] apply here as well.

7. Contributors

The following persons made significant contributions to this document and reviewed it:

o  Ulrich Herberg contributed Section 4.9 on Community Network Applications.

o  Peter van der Stok contributed to Section 4.6 on Building Automation.

o  Zhen Cao contributed to Section 2.2 on Cellular Access Technologies.

o  Gilman Tolle contributed Section 4.4 on Automated Metering Infrastructure.

o  James Nguyen and Ulrich Herberg contributed to Section 4.10 on Field Operations.

8. Acknowledgments

The following persons reviewed and provided valuable comments on different versions of this document:

Dominique Barthel, Carsten Bormann, Zhen Cao, Benoit Claise, Bert Greevenbosch, Ulrich Herberg, Ted Lemon, Kathleen Moriarty, James Nguyen, Zach Shelby, Peter van der Stok, and Martin Thomson.

The editors would like to thank the reviewers and the participants on the Coman mailing list for their valuable contributions and comments.

9. Informative References

[RFC6130] Clausen, T., Dearlove, C., and J. Dean, "Mobile Ad Hoc Network (MANET) Neighborhood Discovery Protocol (NHDP)", RFC 6130, April 2011.

[RFC6568] Kim, E., Kaspar, D., and JP. Vasseur, "Design and Application Spaces for IPv6 over Low-Power Wireless Personal Area Networks (6LoWPANs)", RFC 6568, April 2012.

[RFC6779] Herberg, U., Cole, R., and I. Chakeres, "Definition of Managed Objects for the Neighborhood Discovery Protocol", RFC 6779, October 2012.

[RFC6988] Quittek, J., Chandramouli, M., Winter, R., Dietz, T., and B. Claise, "Requirements for Energy Management", RFC 6988, September 2013.

[RFC7181] Clausen, T., Dearlove, C., Jacquet, P., and U. Herberg, "The Optimized Link State Routing Protocol Version 2", RFC 7181, April 2014.

[RFC7228] Bormann, C., Ersue, M., and A. Keranen, "Terminology for Constrained-Node Networks", RFC 7228, May 2014.

[RFC7326] Parello, J., Claise, B., Schoening, B., and J. Quittek, "Energy Management Framework", RFC 7326, September 2014.

[COM-REQ] Ersue, M., Romascanu, D., and J. Schoenwaelder, "Management of Networks with Constrained Devices: Problem Statement and Requirements", draft-ietf-opsawg-coman-probstate-reqs (work in progress), February 2014.

[IOT-SEC] Garcia-Morchon, O., Kumar, S., Keoh, S., Hummen, R., and R. Struik, "Security Considerations in the IP-based Internet of Things", draft-garcia-core-security-06 (work in progress), September 2013.

Appendix A. Change Log

A.1. draft-ietf-opsawg-coman-use-cases-04 - draft-ietf-opsawg-coman-use-cases-05

o  Added text regarding security and safety considerations to the Environmental Monitoring, Infrastructure Monitoring, Industrial Applications, Medical Applications, Building Automation, and Transport Applications sections.

o  Adapted text as per comments received from Kathleen Moriarty during the IESG review.

o  Added security-related text to use cases to address concerns raised by Ted Lemon during the IESG review.

A.2. draft-ietf-opsawg-coman-use-cases-03 - draft-ietf-opsawg-coman-use-cases-04

o  Resolved Gen-ART review comments received from Martin Thomson.

o  Deleted company names from the list of contributors.

o  Added Martin Thomson to the Acknowledgments section.

A.3. draft-ietf-opsawg-coman-use-cases-02 - draft-ietf-opsawg-coman-use-cases-03

o  Updated references to take into account RFCs that have now been published.

o  Added text to the access technologies section explaining why fixed-line technologies (e.g., powerline communications) have not been discussed.

o  Created a new section, Device Lifecycle, discussing the impact of different device lifecycle stages on the management of constrained networks.

o  Homogenized usage of device classes to the form C0, C1, and C2.

o  Ensured consistency in the usage of Wi-Fi, ZigBee, and other terminology.

o  Added text clarifying the management aspects of the Building Automation and Industrial Automation use cases.

o  Clarified the meaning of unreliability in the context of constrained devices and networks.
o  Added information regarding the configuration and operation of the factory automation use case, based on the type of information provided in the building automation use case.

o  Fixed editorial issues discovered by reviewers.

A.4. draft-ietf-opsawg-coman-use-cases-01 - draft-ietf-opsawg-coman-use-cases-02

o  Renamed the Mobile Access Technologies section to Cellular Access Technologies.

o  Changed references to mobile access technologies to now read cellular access technologies.

o  Added text to the introduction to point out that the list of use cases is not exhaustive, since others unknown to the authors might exist.

o  Updated references to take into account RFCs that have now been published.

o  Updated the Environmental Monitoring section to make it clear that in some scenarios it may not be prudent to repair devices.

o  Added a clarification in the Infrastructure Monitoring section that reliable communication is achieved via application-layer transactions.

o  Removed the reference to Energy Devices from the Energy Management section, instead labeling them as devices within the context of energy management.

o  Reduced descriptive content in the Energy Management section.

o  Rewrote text in the Energy Management section to highlight the management characteristics of Smart Meter and AMI networks.

o  Added text regarding timely delivery of information, and the related management system characteristics, to the Medical Applications section.

o  Changed subnets to network segments in the Building Automation section.

o  Changed structure to infrastructure in the Building Automation section, and added text to highlight associated deployment difficulties.

o  Removed the Trickle timer as an example of common values to be set in the Building Automation section.

o  Added text regarding the possible availability of outsourced and cloud-based management systems for Home Automation.

o  Added text to the Transport Applications section to highlight the requirement of an IT infrastructure for such applications to function on top of.

o  Merged the Transport Applications and Vehicular Networks sections. The following changes to the Vehicular Networks section were merged back into Transport Applications:

   *  Replaced wireless last hops with wireless access to vehicles in Vehicular Networks.

   *  Expanded proprietary systems to "systems relying on a specific Management Topology Option, as described in [COM-REQ]." within the Vehicular Networks section.

   *  Added text regarding mobility patterns to Vehicular Networks.

o  Changed the Military Operations use case to Field Operations and edited the text to be suitable for such scenarios.

A.5. draft-ietf-opsawg-coman-use-cases-00 - draft-ietf-opsawg-coman-use-cases-01

o  Reordered some use cases to improve the flow.

o  Added "Vehicular Networks".

o  Shortened the Military Operations use case.

o  Started adding substance to the Security Considerations section.

A.6. draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg-coman-use-cases-00

o  Reduced the terminology section, as its terminology is addressed in the LWIG and Coman Requirements drafts; referenced those drafts.

o  Checked and aligned all terminology against the LWIG terminology draft.
o  Spent some effort to resolve the overlap between the Industrial Applications, Home Automation, and Building Automation use cases.

o  Moved Section 3 (Use Cases) from the companion document [COM-REQ] to this draft.

o  Reformulated some text for clarity.

A.7. draft-ersue-constrained-mgmt-02 - draft-ersue-constrained-mgmt-03

o  Extended the terminology section and removed some of the terminology addressed in the new LWIG terminology draft; referenced the LWIG terminology draft.

o  Moved Section 1.3 on Constrained Device Classes to the new LWIG terminology draft.

o  Extended the classification of networks, considering the different types of radio and communication technologies in use as well as network dimensions.

o  Extended the problem statement in Section 2, following the requirements listed in Section 4.

o  Merged the following requirements, which belong together and can be realized with similar or the same kinds of solutions:

   *  Distributed Management and Peer Configuration,

   *  Device status monitoring and Neighbor-monitoring,

   *  Passive Monitoring and Reactive Monitoring,

   *  Event-driven self-management - Self-healing and Periodic self-management,

   *  Authentication of management systems and Authentication of managed devices,

   *  Access control on devices and Access control on management systems,

   *  Management of Energy Resources and Data models for energy management,

   *  Software distribution (group-based firmware update) and Group-based provisioning.

o  Deleted the empty section on the gaps in network management standards, as it will be written in a separate draft.

o  Added links to mentioned external pages.

o  Added text on the OMA M2M Device Classification in the appendix.

A.8. draft-ersue-constrained-mgmt-01 - draft-ersue-constrained-mgmt-02

o  Extended the terminology section.

o  Added additional text for the use cases concerning deployment type, network topology in use, network size, network capabilities, radio technology, etc.

o  Added examples of device classes in a use case.

o  Added additional text provided by Cao Zhen (China Mobile) for Mobile Applications and by Peter van der Stok for Building Automation.

o  Added the new use cases 'Advanced Metering Infrastructure' and 'MANET Concept of Operations in Military'.

o  Added the section 'Managing the Constrainedness of a Device or Network', discussing the needs of very constrained devices.

o  Added a note that the requirements in [COM-REQ] need to be seen as standalone requirements and that the current document does not recommend any profile of requirements.

o  Added a section in [COM-REQ] for the detailed requirements on constrained management matched to management tasks like fault management, monitoring, configuration management, security and access control, energy management, etc.

o  Resolved nits and added references.

o  Added Appendix A on related developments in other bodies.

o  Added Appendix B on the work in related research projects.

A.9. draft-ersue-constrained-mgmt-00 - draft-ersue-constrained-mgmt-01

o  Split the section on 'Networks of Constrained Devices' into the sections 'Network Topology Options' and 'Management Topology Options'.

o  Added the use cases 'Community Network Applications' and 'Mobile Applications'.

o  Provided a Contributors section.

o  Extended the section on 'Medical Applications'.
o  Resolved nits and added references.

Authors' Addresses

Mehmet Ersue (editor)
Nokia Networks

Email: mehmet.ersue@nsn.com

Dan Romascanu
Avaya

Email: dromasca@avaya.com

Juergen Schoenwaelder
Jacobs University Bremen

Email: j.schoenwaelder@jacobs-university.de

Anuj Sehgal
Jacobs University Bremen

Email: s.anuj@jacobs-university.de