Internet Engineering Task Force                          M. Ersue, Ed.
Internet-Draft                                           Nokia Networks
Intended status: Informational                             D. Romascanu
Expires: July 23, 2015                                            Avaya
                                                       J. Schoenwaelder
                                                               A. Sehgal
                                                Jacobs University Bremen
                                                        January 19, 2015

      Management of Networks with Constrained Devices: Use Cases
                  draft-ietf-opsawg-coman-use-cases-04

Abstract

   This document discusses use cases concerning the management of
   networks where constrained devices are involved.  A problem
   statement, deployment options, and the requirements on networks
   with constrained devices can be found in the companion document,
   "Management of Networks with Constrained Devices: Problem Statement
   and Requirements".

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on July 23, 2015.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Access Technologies
     2.1.  Constrained Access Technologies
     2.2.  Cellular Access Technologies
   3.  Device Lifecycle
     3.1.  Manufacturing and Initial Testing
     3.2.  Installation and Configuration
     3.3.  Operation and Maintenance
     3.4.  Recommissioning and Decommissioning
   4.  Use Cases
     4.1.  Environmental Monitoring
     4.2.  Infrastructure Monitoring
     4.3.  Industrial Applications
     4.4.  Energy Management
     4.5.  Medical Applications
     4.6.  Building Automation
     4.7.  Home Automation
     4.8.  Transport Applications
     4.9.  Community Network Applications
     4.10. Field Operations
   5.  IANA Considerations
   6.  Security Considerations
   7.  Contributors
   8.  Acknowledgments
   9.  Informative References
   Appendix A.  Change Log
     A.1.  draft-ietf-opsawg-coman-use-cases-03 - draft-ietf-opsawg-
           coman-use-cases-04
     A.2.  draft-ietf-opsawg-coman-use-cases-02 - draft-ietf-opsawg-
           coman-use-cases-03
     A.3.  draft-ietf-opsawg-coman-use-cases-01 - draft-ietf-opsawg-
           coman-use-cases-02
     A.4.  draft-ietf-opsawg-coman-use-cases-00 - draft-ietf-opsawg-
           coman-use-cases-01
     A.5.  draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg-
           coman-use-cases-00
     A.6.  draft-ersue-constrained-mgmt-02-03
     A.7.  draft-ersue-constrained-mgmt-01-02
     A.8.  draft-ersue-constrained-mgmt-00-01
   Authors' Addresses

1.  Introduction

   Small devices with limited CPU, memory, and power resources, so-
   called constrained devices (also known as sensors, smart objects,
   or smart devices), can be connected to a network.  Such a network
   of constrained devices may itself be constrained or challenged,
   e.g., by unreliable or lossy channels, by wireless technologies
   with limited bandwidth and a dynamic topology, or by needing the
   services of a gateway or proxy to connect to the Internet.  In
   other scenarios, the constrained devices can be connected to a
   non-constrained network using off-the-shelf protocol stacks.
   Constrained devices might be in charge of gathering information in
   diverse settings, including natural ecosystems, buildings, and
   factories, and of sending that information to one or more server
   stations.

   Network management is characterized by monitoring network status,
   detecting faults and inferring their causes, setting network
   parameters, and carrying out actions to remove faults, maintain
   normal operation, and improve network efficiency and application
   performance.  A traditional network management application
   periodically collects information from the set of elements it
   needs to manage, processes the data, and presents the results to
   the network management users.  Constrained devices, however, often
   have limited power and a low transmission range, and they might be
   unreliable.  Such unreliability might arise from the device itself
   (e.g., an exhausted battery) or from the constrained channel
   (e.g., low capacity and high latency).  Constrained devices might
   also need to work in hostile environments with advanced security
   requirements, or they might need to be used in harsh environments
   for a long time without supervision.  Due to such constraints, the
   management of a network with constrained devices poses different
   types of challenges compared to the management of a traditional IP
   network.

   This document aims to describe use cases for the management of
   networks in which constrained devices are involved.  It lists and
   discusses diverse use cases for management from the network as
   well as from the application point of view.  The list of discussed
   use cases is not exhaustive, since other scenarios, currently
   unknown to the authors, are possible.  The application scenarios
   discussed aim to show where networks of constrained devices are
   expected to be deployed.  For each application scenario, we first
   briefly describe its characteristics, followed by a discussion of
   how network management can be provided, who is likely to be
   responsible for it, and on which time scale management operations
   are likely to be carried out.

   A problem statement, deployment and management topology options,
   as well as the requirements on networks with constrained devices,
   can be found in the companion document [COM-REQ].

   This document builds on the terminology defined in [RFC7228] and
   [COM-REQ].  [RFC7228] is a base document for the terminology
   concerning constrained devices and constrained networks.  Some use
   cases specific to IPv6 over Low-Power Wireless Personal Area
   Networks (6LoWPANs) can be found in [RFC6568].

2.  Access Technologies

   Besides the management requirements imposed by the different use
   cases, the access technologies used by constrained devices can
   impose restrictions and requirements upon the Network Management
   System (NMS) and the protocol of choice.

   It is possible that some networks of constrained devices utilize
   traditional non-constrained access technologies for network
   access, e.g., local area networks with plenty of capacity.  In
   such scenarios, it is the constrainedness of the device that gives
   rise to special management restrictions and requirements, rather
   than the access technology utilized.

   However, in other situations constrained or cellular access
   technologies might be used for network access, thereby causing
   management restrictions and requirements to arise as a result of
   the underlying access technologies.

   A discussion regarding the impact of cellular and constrained
   access technologies is provided in this section, since they impose
   special requirements on the management of constrained networks.
   Fixed-line networks (e.g., power line communications), on the
   other hand, are not discussed here, since they tend to be quite
   static and do not typically impose any special requirements on the
   management of the network.

2.1.  Constrained Access Technologies

   Due to resource restrictions, embedded devices deployed as sensors
   and actuators in the various use cases utilize low-power, low
   data-rate wireless access technologies such as IEEE 802.15.4, DECT
   ULE, or Bluetooth Low-Energy (BT-LE) for network connectivity.

   In such scenarios, it is important for the NMS to be aware of the
   restrictions imposed by these access technologies so that it can
   efficiently manage these constrained devices.  Specifically, such
   access technologies typically have small frame sizes, so it is
   important for the NMS and the management protocol of choice to
   craft packets in a way that avoids fragmentation and reassembly,
   since these can consume valuable memory on constrained devices.
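
   For illustration, the following sketch estimates whether an
   encoded management payload fits into a single IEEE 802.15.4 frame
   before it is sent.  The overhead figures are rough assumptions
   made for this example (MAC-layer security and 6LoWPAN/UDP header
   compression vary in practice), and the fallback merely chunks the
   payload in the spirit of CoAP block-wise transfer rather than
   implementing any specific protocol.

      # Sketch: keep management payloads within a single IEEE
      # 802.15.4 frame to avoid 6LoWPAN fragmentation on the
      # constrained device.  All overhead figures are assumptions
      # for illustration only.

      IEEE802154_FRAME = 127  # maximum PHY payload in octets
      MAC_OVERHEAD = 25       # assumed MAC header/footer, no security
      LOWPAN_UDP_HDRS = 12    # assumed compressed 6LoWPAN/UDP headers

      PAYLOAD_BUDGET = IEEE802154_FRAME - MAC_OVERHEAD - LOWPAN_UDP_HDRS

      def split_for_single_frames(payload, budget=PAYLOAD_BUDGET):
          """Yield chunks that each fit into one link-layer frame."""
          for offset in range(0, len(payload), budget):
              yield payload[offset:offset + budget]

      report = b'{"temp": 21.5, "battery": 87}' * 8  # oversized example
      frames = list(split_for_single_frames(report))
      print(len(frames), "chunks of at most", PAYLOAD_BUDGET, "octets")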

   Devices using such access technologies might operate via a gateway
   that translates between these access technologies and more
   traditional Internet protocols.  A hierarchical approach to device
   management can be useful in such a situation, wherein the gateway
   device is in charge of the devices connected to it, while the NMS
   conducts management operations only on the gateway.

2.2.  Cellular Access Technologies

   Machine-to-machine (M2M) services are increasingly provided by
   mobile service providers as numerous devices, such as home
   appliances, utility meters, cars, video surveillance cameras, and
   health monitors, are connected with mobile broadband technologies.
   Different applications, e.g., in a home appliance or in-car
   network, use Bluetooth, Wi-Fi, or ZigBee locally and connect to a
   cellular module acting as a gateway between the constrained
   environment and the mobile cellular network.

   Such a gateway might provide different options for the
   connectivity of mobile networks and constrained devices:

   o  a smart phone with 3G/4G and WLAN radios might use BT-LE to
      connect to the devices in a home area network,

   o  a femtocell might be combined with home gateway functionality,
      acting as a low-power cellular base station that connects smart
      devices to the application server of a mobile service provider,

   o  an embedded cellular module with an LTE radio might connect the
      devices in a car network with the server running the telematics
      service,

   o  an M2M gateway connected to the mobile operator network might
      support diverse IoT connectivity technologies, including ZigBee
      and CoAP over 6LoWPAN over IEEE 802.15.4.

   Common to all scenarios above is that they are embedded in a
   service and connected to a network provided by a mobile service
   provider.  Usually, a hierarchical deployment and management
   topology is in place, where different parts of the network are
   managed by different management entities and the number of devices
   to manage is high (e.g., many thousands).  In general, the network
   comprises devices of many different types and sizes, matching
   different device classes.  As such, the managing entity needs to
   be prepared to manage devices with diverse capabilities using
   different communication or management protocols.  If the devices
   are directly connected to a gateway, they are most likely managed
   by a management entity integrated with the gateway, which is
   itself part of the NMS run by the mobile operator.  Smart phones
   or embedded modules connected to a gateway might themselves be in
   charge of managing the devices on their level.  The initial and
   subsequent configuration of such a device is mainly based on
   self-configuration and is triggered by the device itself.

   The gateway might be in charge of filtering and aggregating the
   data received from the devices, since the information sent by a
   device might be mostly redundant.
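
   A gateway-side filter of the kind described above can be sketched
   as follows.  The device identifier, the 2% change threshold, and
   the forward() stand-in are hypothetical; the point is merely that
   redundant reports are suppressed locally and only significant
   changes are forwarded to the NMS.

      # Sketch: gateway-side suppression of mostly redundant readings.

      class ReportFilter:
          def __init__(self, threshold=0.02):
              self.threshold = threshold  # relative change worth sending
              self.last_sent = {}         # device id -> last sent value

          def offer(self, device_id, value):
              """Forward a reading to the NMS only if it changed enough."""
              previous = self.last_sent.get(device_id)
              if previous is not None and previous != 0:
                  if abs(value - previous) / abs(previous) < self.threshold:
                      return False  # redundant: drop at the gateway
              self.last_sent[device_id] = value
              self.forward(device_id, value)
              return True

          def forward(self, device_id, value):
              print("-> NMS:", device_id, "=", value)  # upload stand-in

      f = ReportFilter()
      for v in (21.0, 21.1, 21.1, 23.9):  # only 21.0 and 23.9 are sent
          f.offer("sensor-17", v)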

3.  Device Lifecycle

   Since constrained devices deployed in a network might go through
   multiple phases in their lifetime, it is possible for different
   managers of networks and/or devices to exist during different
   parts of the device lifetime.  An in-depth discussion regarding
   possible device lifecycles can be found in [IOT-SEC].

3.1.  Manufacturing and Initial Testing

   Typically, the lifecycle of a device begins at the manufacturing
   stage.  During this phase, the manufacturer of the device is
   responsible for the management and configuration of the devices.
   It is also possible that a certain use case utilizes multiple
   types of constrained devices (e.g., temperature sensors or
   lighting controllers) and that these are manufactured by different
   entities.  As such, during the manufacturing stage, different
   managers can exist for different devices.  Similarly, during the
   initial testing phase, where device quality assurance tasks might
   be performed, the manufacturer remains responsible for the
   management of the devices and of the networks that might comprise
   them.

3.2.  Installation and Configuration

   The responsibility for managing the devices must be transferred to
   the installer during the installation phase.  Procedures must
   exist for transferring management responsibility between the
   manufacturer and the installer.  The installer may be the customer
   or an intermediary contracted to set up the devices and their
   networks.  It is important that the NMS utilized allows devices
   originating from different vendors to be managed, ensuring
   interoperability between them as well as the configuration of
   trust relationships between them.

   It is possible that the installation and configuration
   responsibilities lie with different entities.  For example, the
   installer of a device might only be responsible for cabling a
   network, physically installing the devices, and ensuring initial
   network connectivity between them (e.g., configuring IP
   addresses).  Following such an installation, the customer or a
   sub-contractor might actually configure the operation of the
   device.  As such, during installation and configuration, multiple
   parties might be responsible for managing a device, and
   appropriate methods must be available to ensure that this
   management responsibility is transferred suitably.

3.3.  Operation and Maintenance

   At the outset of the operation phase, the operational
   responsibility for a device and network should be passed on to the
   customer.  The customer, however, might contract the maintenance
   of the devices and network to a sub-contractor.  In this case, the
   NMS and management protocol should allow for configuring different
   levels of access to the devices.  Since different maintenance
   vendors might be used for devices that perform different functions
   (e.g., HVAC or lighting), it should also be possible to restrict
   management access to devices based on the currently responsible
   manager, as sketched below.
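
   As a minimal sketch of such a restriction, the authorization table
   below ties management operations to the manager currently
   responsible for a device function.  The vendor names, device
   functions, and operations are invented for illustration; a real
   NMS would tie this to its authentication machinery.

      # Sketch: restrict management access per responsible manager.

      RESPONSIBLE = {           # device function -> responsible managers
          "hvac":     {"hvac-contractor"},
          "lighting": {"lighting-contractor", "building-owner"},
      }

      PERMITTED_OPS = {         # manager -> operations it may perform
          "hvac-contractor":     {"read", "configure"},
          "lighting-contractor": {"read", "configure"},
          "building-owner":      {"read"},
      }

      def authorize(manager, device_function, op):
          return (manager in RESPONSIBLE.get(device_function, set())
                  and op in PERMITTED_OPS.get(manager, set()))

      assert authorize("hvac-contractor", "hvac", "configure")
      assert not authorize("hvac-contractor", "lighting", "configure")
      assert not authorize("building-owner", "lighting", "configure")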

3.4.  Recommissioning and Decommissioning

   The owner of a device might choose to replace, repurpose, or even
   decommission it.  In each of these cases, either the customer or
   the contracted maintenance agency must ensure that appropriate
   steps are taken to meet the end goal.

   In case a device needs to be replaced, the manager of the network
   (the customer or the responsible contractor) must detach the
   device from the network, remove all appropriate configuration, and
   discard the device.  A new device must then be configured to
   replace it.  The NMS should allow for transferring the
   configuration of an existing device and then replacing it.  The
   management responsibility of the operation/maintenance manager
   ends once the device is removed from the network.  During the
   installation of the replacement device, the same responsibilities
   apply as during the Installation and Configuration phases.

   The device being replaced may not yet have reached its end-of-
   life; as such, instead of being discarded, it may be installed in
   a new location.  In this case, the management responsibilities
   once again rest with the entities responsible for the Installation
   and Configuration phases at the new location.

   If a device is repurposed, it is possible that the management
   responsibility for this device changes as well.  For example, a
   device might be moved from one building to another.  In this case,
   the managers responsible for the devices and networks in each
   building could be different.  As such, the NMS must allow not only
   for changing the configuration but also for transferring
   management responsibilities.

   In case a device is decommissioned, the management responsibility
   typically ends at that point.

4.  Use Cases

4.1.  Environmental Monitoring

   Environmental monitoring applications are characterized by the
   deployment of a number of sensors to monitor emissions, water
   quality, or even the movements and habits of wildlife.  Other
   applications in this category include earthquake or tsunami early-
   warning systems.  The sensors often span a large geographic area,
   they can be mobile, and they are often difficult to replace.
   Furthermore, the sensors are usually not protected against
   tampering.

   Management of environmental monitoring applications is largely
   concerned with monitoring whether the system is still functional
   and with the roll-out of new constrained devices in case the
   system loses too much of its structure.  The constrained devices
   themselves need to be able to establish connectivity
   (auto-configuration), and they need to be able to deal with events
   such as losing neighbors or being moved to other locations.

   Management responsibility typically rests with the organization
   running the environmental monitoring application.  Since these
   monitoring applications must be designed to tolerate a number of
   failures, the time scale for detecting and recording failures is,
   for some of these applications, likely measured in hours, and
   repairs might easily take days.  In fact, in some scenarios it
   might be more cost- and time-effective not to repair such devices
   at all.  However, for certain environmental monitoring
   applications, much tighter time scales may exist and might be
   enforced by regulations (e.g., monitoring of nuclear radiation).

4.2.  Infrastructure Monitoring

   Infrastructure monitoring is concerned with the monitoring of
   infrastructures such as bridges, railway tracks, or (offshore)
   windmills.  The primary goal is usually to detect any events or
   changes of the structural conditions that can impact the risk and
   safety of the infrastructure being monitored.  A secondary goal is
   to schedule repair and maintenance activities in a cost-effective
   manner.

   The infrastructure to be monitored might be located in a factory
   or spread over a wider area that is difficult to access.  As such,
   the network in use might be based on a combination of fixed and
   wireless technologies that use robust networking equipment and
   support reliable communication via application-layer transactions.
   It is likely that the constrained devices in such a network are
   mainly C2 devices [RFC7228] and have to be controlled centrally by
   an application running on a server.  In case such a distributed
   network is widely spread, the wireless devices might use diverse
   long-distance wireless technologies such as WiMAX or 3G/LTE.  In
   cases where an in-building network is involved, the network can be
   based on Ethernet or wireless technologies suitable for
   in-building use.

   The management of infrastructure monitoring applications is
   primarily concerned with monitoring the functioning of the system.
   Infrastructure monitoring devices are typically rolled out and
   installed by dedicated experts, and changes are rare since the
   infrastructure itself changes rarely.  However, monitoring devices
   are often deployed in unsupervised environments; hence, special
   attention must be given to protecting the devices from being
   modified.

   Management responsibility typically rests with the organization
   owning the infrastructure or responsible for its operation.  The
   time scale for detecting and recording failures is likely measured
   in hours, and repairs might easily take days.  However, certain
   events (e.g., natural disasters) may require that status
   information be obtained much more quickly and that replacements of
   failed sensors can be rolled out quickly (or that redundant
   sensors are activated quickly).  In case the devices are difficult
   to access, a self-healing feature on the device might become
   necessary.

4.3.  Industrial Applications

   Industrial applications and smart manufacturing refer to tasks
   such as networked control and monitoring of manufacturing
   equipment, asset and situation management, or manufacturing
   process control.  For the management of a factory, it is becoming
   essential to implement smart capabilities.  From an engineering
   standpoint, industrial applications are intelligent systems
   enabling rapid manufacturing of new products, dynamic response to
   product demands, and real-time optimization of manufacturing
   production and supply chain networks.

   Potential industrial applications (e.g., for smart factories and
   smart manufacturing) are:

   o  Digital control systems with embedded, automated process
      controls, operator tools, and service information systems
      optimizing plant operations and safety.

   o  Asset management using predictive maintenance tools,
      statistical evaluation, and measurements maximizing plant
      reliability.

   o  Smart sensors detecting anomalies in order to avoid abnormal or
      catastrophic events.

   o  Smart systems integrated within the industrial energy
      management system and externally with the smart grid, enabling
      real-time energy optimization.

   Management of industrial applications and smart manufacturing may
   in some situations involve building automation tasks such as
   control of energy, HVAC (heating, ventilation, and air
   conditioning), lighting, or access control.  Interacting with
   management systems from other application areas might be important
   in some cases (e.g., environmental monitoring for electric energy
   production, energy management for dynamically scaling
   manufacturing, or vehicular networks for mobile asset tracking).
   Management of constrained devices and networks may refer to more
   than the management of their network connectivity.  Since the
   capabilities of constrained devices are limited, it is quite
   possible that a management system would also be required to
   configure, monitor, and operate the primary functions for which a
   constrained device is utilized, besides managing its network
   connectivity.

   Sensor networks are an essential technology for smart
   manufacturing.  Measurements, automated controls, plant
   optimization, health and safety management, and other functions
   are provided by a large number of networked sensors.  Data
   interoperability and seamless exchange of product, process, and
   project data are enabled through interoperable data systems used
   by collaborating divisions or business systems.  Intelligent
   automation and learning systems are vital to smart manufacturing
   but must be effectively integrated with the decision environment.
   The NMS utilized must ensure timely delivery of sensor data to the
   control unit so that it may take appropriate decisions.
   Similarly, the relaying of commands must also be monitored and
   managed to ensure optimal functioning.  Wireless sensor networks
   (WSNs) have been developed for machinery Condition-Based
   Maintenance (CBM), as they offer significant cost savings and
   enable new functionality.  Inaccessible locations, rotating
   machinery, hazardous areas, and mobile assets can be reached with
   wireless sensors.  WSNs today can provide wireless link
   reliability, real-time capabilities, and quality of service and
   can enable industrial and related wireless sense-and-control
   applications.

   Management of industrial and factory applications is largely
   focused on monitoring whether the system is still functional, on
   real-time continuous performance monitoring, and on optimization
   as necessary.  The factory network might be part of a campus
   network or connected to the Internet.  The constrained devices in
   such a network need to be able to establish their configuration
   themselves (auto-configuration) and might need to deal with error
   conditions as locally as possible.  Access control has to be
   provided with multi-level administrative access and security.
   Support and diagnostics can be provided through centralized remote
   monitoring access outside of the factory.

   Factory automation tasks require that continuous monitoring be
   used to optimize production.  Groups of manufacturing and
   monitoring devices could be defined to establish relationships
   between them.  To ensure timely optimization of processes,
   commands from the NMS must arrive at all destinations within an
   appropriate duration.  This duration can change depending on the
   manufacturing task being performed, as the sketch below
   illustrates.
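
   A hypothetical sketch of such task-dependent deadlines is shown
   below.  The task names, the deadline values, and the send()
   stand-in (which here simulates the network round trip) are
   assumptions made for illustration only.

      # Sketch: check that a group command reached every device within
      # the deadline appropriate for the current manufacturing task.

      import time

      TASK_DEADLINES = {"welding": 0.050, "packaging": 0.500}  # seconds

      def send(device, command):
          """Stand-in for a real send; returns the observed ack delay."""
          start = time.monotonic()
          time.sleep(0.001)  # pretend network round trip
          return time.monotonic() - start

      def command_group(devices, command, task):
          """Return the devices that missed the task-specific deadline."""
          deadline = TASK_DEADLINES[task]
          return [d for d in devices if send(d, command) > deadline]

      missed = command_group(["robot-1", "robot-2"], "pause", "welding")
      if missed:
          print("deadline missed by:", missed)  # raise an NMS alarm here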

   Installation and operation of factory networks have different
   requirements.  During the installation phase, many networks,
   usually distributed along different parts of the factory or
   assembly line, co-exist without a connection to a common backbone.
   A specialized installation tool is typically used to configure the
   functions of different types of devices, in different factory
   locations, in a secure manner.  At the end of the installation
   phase, interoperability between these stand-alone networks and
   devices must be enabled.  During the operation phase, these
   stand-alone networks are connected to a common backbone so that
   they may retrieve control information from, and send commands to,
   the appropriate devices.

   Management responsibility is typically owned by the organization
   running the industrial application.  Since the monitoring
   applications must handle a potentially large number of failures,
   the time scale for detecting and recording failures is, for some
   of these applications, likely measured in minutes.  However, for
   certain industrial applications, much tighter time scales may
   exist, e.g., real-time requirements, which might be enforced by
   the manufacturing process or the use of critical materials.

4.4.  Energy Management

   The EMAN working group developed an energy management framework
   [RFC7326] for devices and device components within, or connected
   to, communication networks.  This document observes that one of
   the challenges of energy management is that a power distribution
   network is responsible for the supply of energy to various devices
   and components, while a separate communication network is
   typically used to monitor and control the power distribution
   network.  Devices in the context of energy management can be
   monitored for parameters like power, energy, demand, and power
   quality.  If a device contains batteries, these can also be
   monitored and managed.

   Energy devices differ in complexity and may include basic sensors
   or switches, specialized electrical meters, power distribution
   units (PDUs), and subsystems inside network devices (routers,
   network switches) or home or industrial appliances.  The operators
   of an Energy Management System are either utility providers or
   customers that aim to control and reduce energy consumption and
   the associated costs.  The topologies in use differ, and a
   deployment can cover areas from small surfaces (individual homes)
   to large geographical areas.  The EMAN requirements document
   [RFC6988] discusses the requirements for energy management
   concerning monitoring and control functions.

   It is assumed that energy management will apply to a large range
   of devices of all classes and network topologies.  Specific
   resource monitoring, like battery utilization and availability,
   may be particular to devices with lower physical resources (device
   classes C0 or C1 [RFC7228]), as in the sketch below.
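
   A minimal per-device energy record in the spirit of the framework
   in [RFC7326] might look as follows.  The attribute names and the
   20% battery threshold are illustrative assumptions, not the
   standardized data model.

      # Sketch: energy attributes monitored per device; battery checks
      # apply only to battery-powered (often C0/C1) devices.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class EnergyRecord:
          device_id: str
          power_w: float                # instantaneous power draw
          energy_kwh: float             # accumulated energy
          demand_w: float               # averaged demand
          battery_pct: Optional[float]  # None if mains-powered

      def low_battery(records, threshold=20.0):
          """Yield battery-powered devices that need attention."""
          for r in records:
              if r.battery_pct is not None and r.battery_pct < threshold:
                  yield r.device_id

      meters = [
          EnergyRecord("meter-1", 1.2, 410.0, 1.1, battery_pct=14.0),
          EnergyRecord("pdu-7", 230.0, 9100.5, 228.9, battery_pct=None),
      ]
      print(list(low_battery(meters)))  # ['meter-1']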

   Energy management is especially relevant to the Smart Grid.  A
   Smart Grid is an electrical grid that uses data networks to gather
   and act on energy and power-related information in an automated
   fashion, with the goal of improving the efficiency, reliability,
   economics, and sustainability of the production and distribution
   of electricity.

   Smart Metering is a good example of a Smart Grid based energy
   management application.  Different types of small, possibly
   wireless, meters together produce a large amount of data, which is
   collected by a central entity and processed by an application
   server that may be located within the customer's residence or
   off-site in a data center.  The communication infrastructure can
   be provided by a mobile network operator, as the meters in urban
   areas will most likely have a cellular or WiMAX radio.  In case
   the application server is located within the residence, such
   meters are more likely to use Wi-Fi protocols to interconnect with
   an existing network.

   An Advanced Metering Infrastructure (AMI) network is another Smart
   Grid example; it enables an electric utility to retrieve frequent
   electric usage data from each electric meter installed at a
   customer's home or business.  Unlike Smart Metering, where the
   customer or their agents install appliance-level meters, an AMI
   infrastructure is typically managed by the utility providers and
   could also include other distribution automation devices like
   transformers and reclosers.  Meters in AMI networks typically
   contain constrained devices that connect to mesh networks with a
   low-bandwidth radio.  Usage data and outage notifications can be
   sent by these meters to the utility's headend systems via
   aggregation points of higher-end router devices that bridge the
   constrained network to a less constrained network via cellular,
   WiMAX, or Ethernet.  Unlike the meters, these higher-end devices
   might be installed on utility poles owned and operated by a
   separate entity.

   It thereby becomes important for a management application not only
   to be able to work with diverse types of devices but also to work
   over multiple links that might be operated and managed by separate
   entities, each having divergent policies for their own devices and
   network segments.  During management operations like firmware
   updates, it is important that the management system performs
   robustly in order to avoid accidental outages of critical power
   systems that could be part of AMI networks.  In fact, since AMI
   networks must also report on outages, the management system might
   have to manage the energy properties of battery-operated AMI
   devices themselves as well.

   A management system for home-based Smart Metering solutions is
   likely to have devices laid out in a simple topology.  However,
   AMI network installations could have thousands of nodes per
   router, i.e., per higher-end device, which organize themselves in
   an ad-hoc manner.  As such, a management system for AMI networks
   will need to discover and operate over complex topologies as well,
   as illustrated below.  In some situations, the management system
   might also have to set up and manage the topology of the nodes,
   especially of critical routers.
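
   One way to approach such topology discovery is sketched below: the
   headend reconstructs the mesh from neighbor reports and flags
   meters it cannot reach.  The node names and reports are invented,
   and a real AMI system would obtain this data from routing or
   neighbor tables.

      # Sketch: build an AMI mesh topology from neighbor reports and
      # flag meters unreachable from the headend router.

      from collections import deque

      def build_topology(neighbor_reports):
          """neighbor_reports: iterable of (node, [neighbor, ...])."""
          adj = {}
          for node, neighbors in neighbor_reports:
              adj.setdefault(node, set()).update(neighbors)
              for n in neighbors:  # links assumed bidirectional here
                  adj.setdefault(n, set()).add(node)
          return adj

      def unreachable(adj, root):
          seen, queue = {root}, deque([root])
          while queue:
              for n in adj.get(queue.popleft(), ()):
                  if n not in seen:
                      seen.add(n)
                      queue.append(n)
          return set(adj) - seen

      reports = [("router-A", ["meter-1", "meter-2"]),
                 ("meter-2", ["meter-3"]),
                 ("meter-9", [])]  # isolated meter
      print(unreachable(build_topology(reports), "router-A"))
      # prints: {'meter-9'}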

   Encryption key management and sharing in both types of networks
   are also likely to be important for providing confidentiality for
   all data traffic.  In AMI networks, a meter may obtain its key
   only after an end-to-end authentication process based on
   certificates.  Smart Metering solutions could adopt a similar
   approach, or security may be implicit in the encrypted Wi-Fi
   networks they become part of.

   The management of such a network requires end-to-end management
   of, and information exchange through, different types of networks.
   However, as of today, there is no integrated energy management
   approach and no common information model available.  Specific
   energy management applications or network islands use their own
   management mechanisms.

4.5.  Medical Applications

   Constrained devices can be seen as an enabling technology for
   advanced and possibly remote health monitoring and emergency
   notification systems, ranging from blood pressure and heart rate
   monitors to advanced devices capable of monitoring implanted
   technologies, such as pacemakers or advanced hearing aids.
   Medical sensors may not only be attached to human bodies; they
   might also exist in the infrastructure used by humans, such as
   bathrooms or kitchens.  Medical applications will also be used to
   ensure that treatments are being applied properly, and they might
   guide people losing orientation.  Fitness and wellness
   applications, such as connected scales or wearable heart monitors,
   encourage consumers to exercise and empower the self-monitoring of
   key fitness indicators.  Different applications use Bluetooth,
   Wi-Fi, or ZigBee connections to access the patient's smartphone or
   home cellular connection in order to access the Internet.

   Constrained devices that are part of medical applications are
   managed either by the users of those devices or by an organization
   providing medical (monitoring) services for physicians.  In the
   first case, management must be automatic and/or easy for average
   people to install and set up.  In the second case, it can be
   expected that the devices will be controlled by specially trained
   people.  In both cases, however, it is crucial to protect the
   privacy of the people to whom the medical devices are attached.
   Even though the data collected by a heartbeat monitor might be
   protected, the mere fact that someone carries such a device may
   need protection.  As such, certain medical appliances may not want
   to participate in discovery and self-configuration protocols in
   order to remain invisible.

   Many medical devices are likely to be used (and relied upon) to
   provide data to physicians in critical situations, since the
   biggest market is likely elderly and handicapped people.  Timely
   delivery of data can be quite important in certain applications
   like patient mobility monitoring in old-age homes.  Data must
   reach the physician and/or emergency services within specified
   limits of time in order to be useful.  As such, fault detection of
   the communication network or the constrained devices becomes a
   crucial function of the management system, one that must be
   carried out with high reliability and, depending on the medical
   appliance and its application, within seconds.
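
   For illustration, a second-scale fault detector could be as simple
   as the watchdog below.  The appliance types and deadline values
   are hypothetical; real deployments would derive them from the
   medical application's requirements.

      # Sketch: detect medical appliances whose heartbeats are overdue.

      import time

      HEARTBEAT_DEADLINE = {"mobility-monitor": 5.0,  # seconds
                            "heart-monitor": 2.0}

      class Watchdog:
          def __init__(self):
              self.last_seen = {}  # (device id, kind) -> timestamp

          def heartbeat(self, device_id, kind):
              self.last_seen[(device_id, kind)] = time.monotonic()

          def overdue(self):
              """Return devices whose heartbeat missed its deadline."""
              now = time.monotonic()
              return [dev for (dev, kind), seen in self.last_seen.items()
                      if now - seen > HEARTBEAT_DEADLINE[kind]]

      wd = Watchdog()
      wd.heartbeat("fall-sensor-42", "mobility-monitor")
      print(wd.overdue())  # [] while heartbeats keep arriving in time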

4.6.  Building Automation

   Building automation comprises the distributed systems designed and
   deployed to monitor and control the mechanical, electrical, and
   electronic systems inside buildings serving various purposes
   (e.g., public and private, industrial, institutional, or
   residential).

   Advanced Building Automation Systems (BASs) may be deployed that
   concentrate the various functions of safety, environmental
   control, occupancy, and security.  More and more, the deployments
   of the various functional systems are connected to the same
   communication infrastructure (possibly Internet Protocol based),
   which may involve wired or wireless communication networks inside
   the building.

   Building automation requires the deployment of a large number (10
   to 100,000) of sensors that monitor the status of devices and
   parameters inside the building, as well as controllers with
   different specialized functionality for areas within the building
   or for the building as a whole.  Distances between neighboring
   nodes vary between 1 and 20 meters.  The NMS must, as a result, be
   able to manage and monitor a large number of devices, which may be
   organized in multi-hop mesh networks.  The distances between the
   nodes, and the use of constrained protocols, mean that networks of
   nodes might be segmented; the management of such network segments,
   and of the nodes in these segments, should be possible.  Contrary
   to home automation, in building management the devices are
   expected to be managed assets, known to a set of commissioning
   tools and a data storage, such that every connected device has a
   known origin.  This requires the management system to be able to
   discover devices on the network and to ensure that the expected
   list of devices is currently matched, as sketched below.
   Management here includes verifying the presence of the expected
   devices and detecting the presence of unwanted devices.
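
   Such an inventory check reduces to a comparison of the
   commissioned asset list with the devices actually discovered on
   the network, as in the sketch below.  The device identifiers are
   invented for illustration.

      # Sketch: compare commissioned assets against discovered devices.

      def audit(expected, discovered):
          """Return (missing, unknown) device sets for the NMS."""
          missing = expected - discovered  # commissioned but not seen
          unknown = discovered - expected  # present, of unknown origin
          return missing, unknown

      expected = {"valve-3f-01", "thermostat-3f-02", "lux-3f-07"}
      discovered = {"valve-3f-01", "lux-3f-07", "cam-xx-99"}

      missing, unknown = audit(expected, discovered)
      print("missing:", missing)  # {'thermostat-3f-02'}
      print("unknown:", unknown)  # {'cam-xx-99'}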

   Examples of functions performed by controllers in building
   automation are regulating the quality, humidity, and temperature
   of the air inside the building, as well as regulating the
   lighting.  Other systems may report the status of machinery inside
   the building, like elevators, or inside rooms, like projectors in
   meeting rooms.  Security cameras and sensors may be deployed and
   operated on separate dedicated infrastructures connected to the
   common backbone.  The deployment area of a BAS is typically one
   building (or part of it) or several buildings geographically
   grouped in a campus.  A building network can be composed of
   network segments, where a network segment covers a floor, an area
   on a floor, or a given functionality (e.g., security cameras).  It
   is possible that the management tasks for some types of devices
   are separated from the others (e.g., security cameras might
   operate on, and be managed via, a separate network from the HVAC
   in a building).

   Some of the sensors in Building Automation Systems (for example,
   fire alarms or security systems) register, record, and transfer
   critical alarm information and therefore must be resilient to
   events like loss of power or security attacks.  A management
   system must be able to deal with unintentional segmentation of
   networks due to power loss or channel unavailability.  It must
   also be able to detect security events.  Due to the specific
   operating conditions required of certain devices, there might be a
   need to certify components and subsystems operating in such
   constrained conditions based on specific requirements.  Also, in
   some environments the malfunctioning of a control system (like
   temperature control) needs to be reported in the shortest possible
   time.  Complex control systems can misbehave, so their critical
   status reporting and safety algorithms need to be basic and
   robust, and they need to perform even in critical conditions.
   Providing this monitoring, configuration, and notification service
   is an important task of the management system used in building
   automation.

   Building automation solutions are in some cases deployed in newly
   designed buildings; in other cases, they might be deployed over
   existing infrastructures.  In the first case, there is a broader
   range of possible solutions that can be planned for the
   infrastructure of the building.  In the second case, the solution
   needs to be deployed over an existing infrastructure, taking into
   account factors like existing wiring, distance limitations, and
   the propagation of radio signals over walls and floors, thereby
   making deployment difficult.  As a result, some of the existing
   wireless solutions (e.g., IEEE 802.11 or IEEE 802.15) may be
   deployed.  In mission-critical or security-sensitive environments,
   and in cases where link failures happen often, topologies that
   allow for reconfiguration of the network and for connection
   continuity may be required.  Some of the sensors deployed in
   building automation may be very simple constrained devices, for
   which class C0 or C1 [RFC7228] may be assumed.

   For lighting applications, groups of lights must be defined and
   managed, and commands to a group of lights must arrive at all
   destinations within 200 ms.  The installation and the operation of
   a building network have different requirements.  During
   installation, many stand-alone networks of a few to 100 nodes
   co-exist without a connection to the backbone.  During this phase,
   the nodes are identified with a network identifier related to
   their physical location.  Devices are accessed from an
   installation tool to connect them to the network in a secure
   fashion.  During installation, the setting of parameters to common
   values may be required to enable interoperability.  During
   operation, the networks are connected to the backbone while
   maintaining the relation between network identifier and physical
   location.  Network parameters like address and name are stored in
   the DNS; the names can assist in determining the physical location
   of the device.

4.7.  Home Automation

   Home automation includes the control of lighting, heating,
   ventilation, air conditioning, appliances, entertainment devices,
   and home security devices to improve convenience, comfort, energy
   efficiency, and safety.  It can be seen as a residential extension
   of building automation.  However, unlike in building automation,
   the infrastructure in a home is operated in a considerably more
   ad-hoc manner.  While in some installations no centralized
   management system akin to a Building Automation System (BAS) is
   likely to be available, in other situations outsourced and
   cloud-based systems responsible for managing devices in the home
   might be used.

   Home automation networks need a certain amount of configuration
   (associating switches or sensors to actuators) that is provided
   either by electricians deploying home automation solutions, by
   third-party home automation service providers (e.g., small
   specialized companies or home automation device manufacturers), or
   by residents using the application user interface provided by home
   automation devices to configure (parts of) the home automation
   solution.

   Similarly, failures may be reported via suitable interfaces to
   residents, or they might be recorded and made available to the
   service providers in charge of maintaining the home automation
   infrastructure.

   The management responsibility lies either with the residents, or
   it may be outsourced to electricians and/or third parties
   providing management of home automation solutions as a service.  A
   varying combination of electricians, service providers, or
   residents may be responsible for different aspects of managing the
   infrastructure.  The time scale for failure detection and
   resolution is, in many cases, likely counted in hours to days.

4.8.  Transport Applications

   Transport application is a generic term for the integrated
   application of communications, control, and information processing
   in a transportation system.  Transport telematics and vehicle
   telematics are used as terms for the group of technologies that
   support transportation systems.  Transport applications running on
   such a transportation system cover all modes of transport and
   consider all elements of the transportation system, i.e., the
   vehicle, the infrastructure, and the driver or user, interacting
   together dynamically.  Examples of transport applications are
   inter- and intra-vehicular communication, smart traffic control,
   smart parking, electronic toll collection systems, logistics and
   fleet management, vehicle control, and safety and road assistance.

   As distributed systems, transport applications require end-to-end
   management of different types of networks.  It is likely that the
   constrained devices in a network (e.g., a moving in-car network)
   have to be controlled by an application running on an application
   server in the network of a service provider.  Such a highly
   distributed network, including cellular devices on vehicles, is
   assumed to include a wireless access network using diverse
   long-distance wireless technologies such as WiMAX, 3G/LTE, or
   satellite communication, e.g., based on an embedded hardware
   module.  As a result, the management of constrained devices in the
   transport system might need to be planned top-down and might need
   to use data models imposed by, and defined on, the application
   layer.  The assumed device classes in use are mainly C2 devices
   [RFC7228].  In cases where an in-vehicle network is involved, C1
   devices [RFC7228] with limited capabilities and a short-distance
   constrained radio network, e.g., IEEE 802.15.4, might additionally
   be used.

   All transport applications require an IT infrastructure to run on
   top of; e.g., in public transport scenarios like trains, buses, or
   metros, the network infrastructure might be provided, maintained,
   and operated by third parties like mobile network or satellite
   network operators.  However, the management responsibility for the
   transport application typically rests with the organization
   running the transport application (in the public transport
   scenario, this would typically be the public transport operator).
   Different aspects of the infrastructure might also be managed by
   different entities.  For example, the in-car devices are likely to
   be installed and managed by the manufacturer, while the public
   works department might be responsible for the on-road vehicular
   communication infrastructure used by these devices.

   The back-end infrastructure is also likely to be maintained by
   third-party operators.  As such, the NMS must be able to deal with
   different network segments, each operated and controlled by a
   separate entity, and must also enable appropriate access control
   and security.

   Depending on the type of application domain (vehicular or
   stationary) and the service being provided, it is important for
   the NMS to be able to function with different architectures, since
   different manufacturers might have their own proprietary systems
   relying on a specific Management Topology Option, as described in
   [COM-REQ].  Moreover, the constituents of the network can be
   either private, belonging to individuals or private companies, or
   owned by public institutions, leading to different legal and
   organizational requirements.  Across the entire infrastructure, a
   variety of constrained devices is likely to be used and must be
   individually managed.  The NMS must either be able to work
   directly with different types of devices or have the ability to
   interoperate with multiple different systems.

   The challenges in the management of vehicles in a mobile transport
   application are manifold.  First, the up-to-date position of each
   node in the network should be reported to the corresponding
   management entities, since the nodes could be moving within, or
   roaming between, different networks.  Second, a variety of
   troubleshooting information, including sensitive location
   information, needs to be reported to the management system in
   order to provide accurate service to the customer.  Management
   systems dealing with mobile nodes could possibly exploit specific
   patterns in the mobility of the nodes; such patterns emerge due to
   repetitive vehicular usage in scenarios like people commuting to
   work or logistics supply vehicles transporting shipments between
   warehouses.  The NMS must also be able to handle partitioned
   networks, which arise from the dynamic nature of traffic,
   resulting in large inter-vehicle gaps in sparsely populated
   scenarios.  Since mobile nodes might roam in remote networks, the
   NMS should be able to provide operating configuration updates
   regardless of node location, as sketched below.
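
   One simple way to provide updates to intermittently reachable
   nodes is to stage them and deliver on check-in, as sketched below.
   The node identifier, the update content, and the push() stand-in
   are assumptions made for illustration.

      # Sketch: stage configuration updates for roaming vehicles and
      # flush the queue whenever a node checks in from any network.

      from collections import defaultdict

      class UpdateQueue:
          def __init__(self):
              self.pending = defaultdict(list)  # node id -> updates

          def stage(self, node_id, update):
              self.pending[node_id].append(update)

          def on_checkin(self, node_id):
              """Deliver everything staged for a reachable node."""
              delivered = 0
              while self.pending[node_id]:
                  self.push(node_id, self.pending[node_id].pop(0))
                  delivered += 1
              return delivered

          def push(self, node_id, update):
              print("push to", node_id, ":", update)  # delivery stand-in

      q = UpdateQueue()
      q.stage("bus-217", {"report-interval-s": 30})
      q.on_checkin("bus-217")  # delivered when the vehicle reappears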

   The constrained devices in a moving transport network might be
   initially configured in a factory, and reconfiguration might be
   needed only rarely.  New devices might be integrated in an ad-hoc
   manner based on self-management and self-configuration
   capabilities.  Monitoring and data exchange might need to be done
   via a gateway entity connected to the back-end transport
   infrastructure.  The devices and entities in the fixed transport
   infrastructure need to be monitored more frequently and may be
   able to communicate at a higher data rate; the connectivity of
   such entities does not necessarily need to be wireless.  The time
   scale for detecting and recording failures in a moving transport
   network is likely measured in hours, and repairs might easily take
   days; it is likely that a self-healing feature would be used
   locally.  On the other hand, the time scale for detecting failures
   in the fixed transport application infrastructure (e.g., traffic
   lights or digital signage displays) is likely measured in minutes,
   so as to avoid untoward traffic incidents.  As such, the NMS must
   be able to deal with differing timeliness requirements based on
   the type of device.

4.9.  Community Network Applications

   Community networks are comprised of constrained routers in a
   multi-hop mesh topology, communicating over lossy, and often
   wireless, channels.  While the routers are mostly non-mobile, the
   topology may be very dynamic because of fluctuations in the link
   quality of the (wireless) channel, caused by, e.g., obstacles or
   other nearby radio transmissions.  Depending on the routers used
   in the community network, the resources of the routers (memory,
   CPU) may be more or less constrained: available resources may
   range from only a few kilobytes of RAM to several megabytes or
   more, and CPUs may be small embedded processors or more powerful
   general-purpose processors.  Examples of such community networks
   are the FunkFeuer network (Vienna, Austria), FreiFunk (Berlin,
   Germany), Seattle Wireless (Seattle, USA), and AWMN (Athens,
   Greece).  These community networks are public and non-regulated,
   allowing their users to connect to each other and, through an
   uplink to an ISP, to the Internet.  No fee, other than the initial
   purchase of a wireless router, is charged for these services.
   Applications of these community networks can be diverse, e.g.,
   location-based services, free Internet access, file sharing
   between users, distributed chat services, social networking, and
   video sharing.

   As an example of a community network, the FunkFeuer network
   comprises several hundred routers, many of which have several
   radio interfaces (with omnidirectional and some directed
   antennas).  The routers of the network are small-sized wireless
   routers, such as the Linksys WRT54GL, available in 2011 for less
   than 50 Euros.  These routers, with 16 MB of RAM and 264 MHz of
   CPU power, are mounted on the rooftops of the users.  When new
   users want to connect to the network, they acquire a wireless
   router, install the appropriate firmware and routing protocol, and
   mount the router on the rooftop.  IP addresses for the router are
   assigned manually from a list of addresses (because of the lack of
   auto-configuration standards for mesh networks in the IETF).

   While the routers are non-mobile, fluctuations in link quality
   require an ad hoc routing protocol that allows for quick
   convergence to reflect the effective topology of the network (such
   as NHDP [RFC6130] and OLSRv2 [RFC7181], developed in the MANET
   WG).  Usually, no human interaction is required for these
   protocols, as all variable parameters required by the routing
   protocol are either negotiated in the control traffic exchange or
   are only of local importance to each router (i.e., they do not
   influence interoperability).  However, external management and
   monitoring of an ad hoc routing protocol may be desirable to
   optimize parameters of the routing protocol.  Such an optimization
   may lead to a more stable perceived topology and to a lower
   control traffic overhead, and therefore to a higher delivery
   success ratio of data packets, a lower end-to-end delay, and less
   unnecessary bandwidth and energy usage.

   Different use cases for the management of community networks are
   possible:

   o  A single network management station, e.g., a border gateway
      providing connectivity to the Internet, manages or monitors
      routers in the community network in order to investigate
      problems (monitoring) or to improve performance by changing
      parameters (managing).  As the topology of the network is
      dynamic, constant connectivity of each router towards the
      management station cannot be guaranteed.  Current network
      management protocols, such as SNMP and NETCONF, may be used
      (e.g., using interfaces such as the NHDP-MIB [RFC6779]).
      However, when the routers in the community network are
      constrained, existing protocols may require too many resources
      in terms of memory and CPU; more importantly, the bandwidth
      requirements may exceed the available channel capacity in
      wireless mesh networks.  Moreover, management and monitoring
      may be infeasible if the connection between the network
      management station and the routers is frequently interrupted;
      the sketch after this list illustrates one way of coping with
      such interruptions.

   o  Distributed network monitoring, in which more than one
      management station monitors or manages other routers.  Because
      connectivity to a server cannot be guaranteed at all times, a
      distributed approach may provide higher reliability, at the
      cost of increased complexity.  Currently, no IETF standard
      exists for distributed monitoring and management.

   o  Monitoring and management of a whole network or a group of
      routers.  Monitoring the performance of a community network may
      require more information than what can be acquired from a
      single router using a network management protocol.  Statistics,
      such as topology changes over time, data throughput along
      certain routing paths, congestion, etc., are of interest for a
      group of routers (or the routing domain) as a whole.  As of
      2014, no IETF standard allows for monitoring or managing whole
      networks instead of single routers.
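
   A monitoring station that must live with interrupted connectivity
   could, for instance, retry with exponential backoff instead of
   flooding the lossy channel.  In the sketch below, poll_once() is a
   stand-in for a real SNMP, NETCONF, or CoAP request, and the timing
   values are arbitrary.

      # Sketch: poll community-network routers over links that come
      # and go, backing off between attempts.

      import random
      import time

      def poll_once(router):
          """Pretend request; returns stats or None if unreachable."""
          return {"neighbors": 4} if random.random() > 0.5 else None

      def poll_with_backoff(router, retries=4, base=1.0):
          delay = base
          for _ in range(retries):
              stats = poll_once(router)
              if stats is not None:
                  return stats
              time.sleep(delay)           # link may be down right now
              delay = min(delay * 2, 30)  # exponential backoff, capped
          return None  # report the router as currently unreachable

      print(poll_with_backoff("rooftop-23"))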

4.10.  Field Operations

   The challenges of configuring and monitoring networks operated in
   the field by rescue and security agencies can be different from
   those of the other use cases, since the requirements and operating
   conditions of such networks are quite different.

   With technology advancements, the field networks operated nowadays
   are becoming large and can consist of a variety of different types
   of equipment running different protocols and tools, which
   obviously increases the complexity of these mission-critical
   networks.  In many scenarios, configuration is most likely
   performed manually.  Furthermore, some legacy and even some modern
   devices do not support IP networking at all.  A majority of the
   protocols and tools in use were developed by vendors and are
   proprietary, which makes integration more difficult.

   The main reason for this disjoint operation is that most equipment
   is developed with specific task requirements in mind rather than
   with interoperability of the varied equipment types in mind.  For
   example, the operating conditions experienced by high-altitude
   security equipment are significantly different from those
   experienced by equipment used in desert conditions.  Similarly,
   search-and-rescue equipment used for fire rescue has different
   requirements than flood relief equipment.  Furthermore,
   interoperation with telecommunication equipment was not an
   expected outcome, or in some scenarios may not even be desirable.

   Currently, field networks operate with a fixed Network Operations
   Center (NOC) that physically manages the configuration and
   evaluation of all field devices.  Once configured, the devices
   might be deployed in fixed or mobile scenarios.  Any configuration
   changes required would need to be appropriately encrypted and
   authenticated to prevent unauthorized access, as in the sketch
   below.
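
   A minimal sketch of authenticated configuration updates, using
   only the Python standard library, is shown below.  The pre-shared
   key and the configuration content are placeholders, and key
   management and transport encryption, both required in a real
   deployment, are out of scope here.

      # Sketch: authenticate configuration changes with an HMAC so a
      # field device can reject tampered or unauthorized updates.

      import hashlib
      import hmac

      PSK = b"field-ops-demo-key"  # placeholder; never hardcode keys

      def seal(config, key=PSK):
          tag = hmac.new(key, config, hashlib.sha256).digest()
          return tag + config

      def open_sealed(blob, key=PSK):
          tag, config = blob[:32], blob[32:]
          expected = hmac.new(key, config, hashlib.sha256).digest()
          if not hmac.compare_digest(tag, expected):
              raise ValueError("config change fails authentication")
          return config

      blob = seal(b'{"radio-channel": 11}')
      print(open_sealed(blob))  # original configuration bytes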
4.10.  Field Operations

   The challenges of configuring and monitoring networks operated in
   the field by rescue and security agencies differ from those of the
   other use cases, since the requirements and operating conditions
   of such networks are quite different.

   With advancing technology, field networks are becoming large and
   can consist of a variety of equipment types running different
   protocols and tools, which increases the complexity of these
   mission-critical networks.  In many scenarios, configuration is
   most likely performed manually.  Furthermore, some legacy and even
   some modern devices do not support IP networking.  Many of the
   vendor-developed protocols and tools in use are proprietary, which
   makes integration more difficult.

   The main reason for this disjoint operation is that most equipment
   is developed with specific task requirements in mind, rather than
   interoperability of the varied equipment types.  For example, the
   operating conditions experienced by high-altitude security
   equipment are significantly different from those of equipment used
   in desert conditions.  Similarly, search and rescue equipment used
   in fire rescue has different requirements than flood relief
   equipment.  Furthermore, interoperation with telecommunication
   equipment was not an expected outcome, or in some scenarios may
   not even be desirable.

   Currently, field networks operate with a fixed Network Operations
   Center (NOC) that physically manages the configuration and
   evaluation of all field devices.  Once configured, the devices
   might be deployed in fixed or mobile scenarios.  Any configuration
   changes required would need to be appropriately encrypted and
   authenticated to prevent unauthorized access.

   Hierarchical management of devices is a common requirement in such
   scenarios, since local managers or operators may need to respond
   to changing conditions within their purview.  The level of
   configuration management available at each level of the hierarchy
   must also be closely governed.

   Since many field operation devices are used in hostile
   environments, a high failure and disconnection rate should be
   tolerated by the NMS, which must also be able to deal with
   multiple gateways and disjoint management protocols.

   Multi-national field operations involving search, rescue, and
   security are becoming increasingly common, requiring
   interoperation of a diverse set of equipment designed with
   different operating conditions in mind.  Furthermore, different
   intra- and inter-governmental agencies are likely to have
   different sets of standards, best practices, rules and
   regulations, and implementation approaches that may contradict or
   conflict with each other.  The NMS should be able to detect these
   conflicts and handle them in an acceptable manner, which may
   require human intervention.
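   As a non-normative illustration of such conflict detection, the
   following sketch checks whether per-agency constraints on a shared
   device parameter can be satisfied simultaneously and flags
   parameters for operator review when they cannot.  All names and
   values are assumptions made for this sketch.

      # Illustrative sketch: detect conflicting per-agency bounds on
      # a shared parameter.  Conflicts are reported, not resolved,
      # since resolution may require human intervention.

      def find_conflicts(policies):
          """policies: {agency: {parameter: (low, high)}}.
          Returns parameters whose allowed ranges do not overlap."""
          merged = {}
          for agency, params in policies.items():
              for name, bounds in params.items():
                  merged.setdefault(name, []).append(
                      (agency,) + bounds)
          conflicts = []
          for name, ranges in merged.items():
              low = max(r[1] for r in ranges)
              high = min(r[2] for r in ranges)
              if low > high:  # empty intersection: no value fits all
                  conflicts.append((name, ranges))
          return conflicts

      print(find_conflicts({
          'agency_a': {'tx_power_dbm': (10, 20)},
          'agency_b': {'tx_power_dbm': (25, 30)},  # disjoint ranges
      }))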
5.  IANA Considerations

   This document does not introduce any new code-points or namespaces
   for registration with IANA.

   Note to RFC Editor: this section may be removed on publication as
   an RFC.

6.  Security Considerations

   This document discusses use cases for the management of networks
   with constrained devices.  The security considerations described
   throughout the companion document [COM-REQ] apply here as well.

7.  Contributors

   The following persons made significant contributions to and
   reviewed this document:

   o  Ulrich Herberg contributed Section 4.9 on Community Network
      Applications.

   o  Peter van der Stok contributed to Section 4.6 on Building
      Automation.

   o  Zhen Cao contributed to Section 2.2 on Cellular Access
      Technologies.

   o  Gilman Tolle contributed Section 4.4 on Automated Metering
      Infrastructure.

   o  James Nguyen and Ulrich Herberg contributed to Section 4.10 on
      Field Operations.

8.  Acknowledgments

   The following persons reviewed and provided valuable comments on
   different versions of this document:

   Dominique Barthel, Carsten Bormann, Zhen Cao, Benoit Claise, Bert
   Greevenbosch, Ulrich Herberg, James Nguyen, Zach Shelby, Peter van
   der Stok, and Martin Thomson.

   The editors would like to thank the reviewers and the participants
   on the Coman mailing list for their valuable contributions and
   comments.

9.  Informative References

   [RFC6130]  Clausen, T., Dearlove, C., and J. Dean, "Mobile Ad Hoc
              Network (MANET) Neighborhood Discovery Protocol
              (NHDP)", RFC 6130, April 2011.

   [RFC6568]  Kim, E., Kaspar, D., and JP. Vasseur, "Design and
              Application Spaces for IPv6 over Low-Power Wireless
              Personal Area Networks (6LoWPANs)", RFC 6568, April
              2012.

   [RFC6779]  Herberg, U., Cole, R., and I. Chakeres, "Definition of
              Managed Objects for the Neighborhood Discovery
              Protocol", RFC 6779, October 2012.

   [RFC6988]  Quittek, J., Chandramouli, M., Winter, R., Dietz, T.,
              and B. Claise, "Requirements for Energy Management",
              RFC 6988, September 2013.

   [RFC7181]  Clausen, T., Dearlove, C., Jacquet, P., and U. Herberg,
              "The Optimized Link State Routing Protocol Version 2",
              RFC 7181, April 2014.

   [RFC7228]  Bormann, C., Ersue, M., and A. Keranen, "Terminology
              for Constrained-Node Networks", RFC 7228, May 2014.

   [RFC7326]  Parello, J., Claise, B., Schoening, B., and J. Quittek,
              "Energy Management Framework", RFC 7326, September
              2014.

   [COM-REQ]  Ersue, M., Romascanu, D., and J. Schoenwaelder,
              "Management of Networks with Constrained Devices:
              Problem Statement and Requirements", draft-ietf-opsawg-
              coman-probstate-reqs (work in progress), February 2014.

   [IOT-SEC]  Garcia-Morchon, O., Kumar, S., Keoh, S., Hummen, R.,
              and R. Struik, "Security Considerations in the IP-based
              Internet of Things", draft-garcia-core-security-06
              (work in progress), September 2013.

Appendix A.  Change Log

A.1.  draft-ietf-opsawg-coman-use-cases-03 - draft-ietf-opsawg-coman-
      use-cases-04

   o  Resolved Gen-ART review comments received from Martin Thomson.

   o  Deleted company names from the list of contributors.

   o  Added Martin Thomson to the Acknowledgments section.

A.2.  draft-ietf-opsawg-coman-use-cases-02 - draft-ietf-opsawg-coman-
      use-cases-03

   o  Updated references to take into account RFCs that have now been
      published.

   o  Added text to the access technologies section explaining why
      fixed-line technologies (e.g., powerline communications) have
      not been discussed.

   o  Created a new section, Device Lifecycle, discussing the impact
      of different device lifecycle stages on the management of
      constrained networks.

   o  Homogenized the usage of device classes to form C0, C1, and C2.

   o  Ensured consistency in the usage of Wi-Fi, ZigBee, and other
      terminology.

   o  Added text clarifying the management aspects of the Building
      Automation and Industrial Automation use cases.

   o  Clarified the meaning of unreliability in the context of
      constrained devices and networks.

   o  Added information regarding the configuration and operation of
      the factory automation use case, based on the type of
      information provided in the building automation use case.

   o  Fixed editorial issues discovered by reviewers.

A.3.  draft-ietf-opsawg-coman-use-cases-01 - draft-ietf-opsawg-coman-
      use-cases-02

   o  Renamed the Mobile Access Technologies section to Cellular
      Access Technologies.

   o  Changed references to mobile access technologies to now read
      cellular access technologies.

   o  Added text to the introduction to point out that the list of
      use cases is not exhaustive, since others unknown to the
      authors might exist.

   o  Updated references to take into account RFCs that have now been
      published.

   o  Updated the Environmental Monitoring section to make it clear
      that in some scenarios it may not be prudent to repair devices.

   o  Added clarification in the Infrastructure Monitoring section
      that reliable communication is achieved via application-layer
      transactions.

   o  Removed the reference to Energy Devices from the Energy
      Management section, instead labeling them as devices within the
      context of energy management.

   o  Reduced descriptive content in the Energy Management section.

   o  Rewrote text in the Energy Management section to highlight
      management characteristics of Smart Meter and AMI networks.
   o  Added text regarding the timely delivery of information, and
      the related management system characteristics, to the Medical
      Applications section.

   o  Changed subnets to network segments in the Building Automation
      section.

   o  Changed structure to infrastructure in the Building Automation
      section, and added text to highlight associated deployment
      difficulties.

   o  Removed the Trickle timer as an example of common values to be
      set in the Building Automation section.

   o  Added text regarding the possible availability of outsourced
      and cloud-based management systems for Home Automation.

   o  Added text to the Transport Applications section to highlight
      the requirement of an IT infrastructure for such applications
      to function on top of.

   o  Merged the Transport Applications and Vehicular Networks
      sections.  The following changes to the Vehicular Networks
      section were merged back into Transport Applications:

      *  Replaced wireless last hops with wireless access to vehicles
         in Vehicular Networks.

      *  Expanded proprietary systems to "systems relying on a
         specific Management Topology Option, as described in
         [COM-REQ]." within the Vehicular Networks section.

      *  Added text regarding mobility patterns to Vehicular
         Networks.

   o  Changed the Military Operations use case to Field Operations
      and edited the text to be suitable to such scenarios.

A.4.  draft-ietf-opsawg-coman-use-cases-00 - draft-ietf-opsawg-coman-
      use-cases-01

   o  Reordered some use cases to improve the flow.

   o  Added "Vehicular Networks".

   o  Shortened the Military Operations use case.

   o  Started adding substance to the Security Considerations
      section.

A.5.  draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg-coman-use-
      cases-00

   o  Reduced the terminology section for terminology addressed in
      the LWIG and Coman Requirements drafts.  Referenced the other
      drafts.

   o  Checked and aligned all terminology against the LWIG
      terminology draft.

   o  Spent some effort to resolve the intersection between the
      Industrial Application, Home Automation, and Building
      Automation use cases.

   o  Moved Section 3, Use Cases, from the companion document
      [COM-REQ] to this draft.

   o  Reformulated some text parts for more clarity.

A.6.  draft-ersue-constrained-mgmt-02-03

   o  Extended the terminology section and removed some of the
      terminology addressed in the new LWIG terminology draft.
      Referenced the LWIG terminology draft.

   o  Moved Section 1.3 on Constrained Device Classes to the new LWIG
      terminology draft.

   o  Extended the class of networks to take into account the
      different types of radio and communication technologies in use,
      as well as network dimensions.

   o  Extended the Problem Statement in Section 2, following the
      requirements listed in Section 4.

   o  Merged the following requirements, which belong together and
      can be realized with the same or a similar kind of solution:
      *  Distributed Management and Peer Configuration,

      *  Device status monitoring and Neighbor-monitoring,

      *  Passive Monitoring and Reactive Monitoring,

      *  Event-driven self-management - Self-healing and Periodic
         self-management,

      *  Authentication of management systems and Authentication of
         managed devices,

      *  Access control on devices and Access control on management
         systems,

      *  Management of Energy Resources and Data models for energy
         management,

      *  Software distribution (group-based firmware update) and
         Group-based provisioning.

   o  Deleted the empty section on the gaps in network management
      standards, as it will be written in a separate draft.

   o  Added links to mentioned external pages.

   o  Added text on the OMA M2M Device Classification in the
      appendix.

A.7.  draft-ersue-constrained-mgmt-01-02

   o  Extended the terminology section.

   o  Added additional text for the use cases concerning deployment
      type, network topology in use, network size, network
      capabilities, radio technology, etc.

   o  Added examples for device classes in a use case.

   o  Added additional text provided by Cao Zhen (China Mobile) for
      Mobile Applications and by Peter van der Stok for Building
      Automation.

   o  Added the new use cases 'Advanced Metering Infrastructure' and
      'MANET Concept of Operations in Military'.

   o  Added the section 'Managing the Constrainedness of a Device or
      Network', discussing the needs of very constrained devices.

   o  Added a note that the requirements in [COM-REQ] need to be seen
      as standalone requirements and that the current document does
      not recommend any profile of requirements.

   o  Added a section in [COM-REQ] for the detailed requirements on
      constrained management matched to management tasks like fault,
      monitoring, and configuration management, Security and Access
      Control, Energy Management, etc.

   o  Solved nits and added references.

   o  Added Appendix A on the related developments in other bodies.

   o  Added Appendix B on the work in related research projects.

A.8.  draft-ersue-constrained-mgmt-00-01

   o  Split the section on 'Networks of Constrained Devices' into the
      sections 'Network Topology Options' and 'Management Topology
      Options'.

   o  Added the use cases 'Community Network Applications' and
      'Mobile Applications'.

   o  Provided a Contributors section.

   o  Extended the section on 'Medical Applications'.

   o  Solved nits and added references.

Authors' Addresses

   Mehmet Ersue (editor)
   Nokia Networks

   Email: mehmet.ersue@nsn.com

   Dan Romascanu
   Avaya

   Email: dromasca@avaya.com

   Juergen Schoenwaelder
   Jacobs University Bremen

   Email: j.schoenwaelder@jacobs-university.de

   Anuj Sehgal
   Jacobs University Bremen

   Email: s.anuj@jacobs-university.de