Internet Engineering Task Force                           M. Ersue, Ed.
Internet-Draft                                            Nokia Networks
Intended status: Informational                             D. Romascanu
Expires: April 30, 2015                                           Avaya
                                                        J. Schoenwaelder
                                                               A. Sehgal
                                                Jacobs University Bremen
                                                        October 27, 2014

      Management of Networks with Constrained Devices: Use Cases
                  draft-ietf-opsawg-coman-use-cases-03

Abstract

This document discusses use cases concerning the management of networks, where constrained devices are involved. A problem statement, deployment options and the requirements on the networks with constrained devices can be found in the companion document on "Management of Networks with Constrained Devices: Problem Statement and Requirements".

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 30, 2015.

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction . . . . . . . . . . . . . . . . . . . .
. . . . 3 58 2. Access Technologies . . . . . . . . . . . . . . . . . . . . . 4 59 2.1. Constrained Access Technologies . . . . . . . . . . . . . 4 60 2.2. Cellular Access Technologies . . . . . . . . . . . . . . 5 61 3. Device Lifecycle . . . . . . . . . . . . . . . . . . . . . . 6 62 3.1. Manufacturing and Initial Testing . . . . . . . . . . . . 6 63 3.2. Installation and Configuration . . . . . . . . . . . . . 6 64 3.3. Operation and Maintenance . . . . . . . . . . . . . . . . 7 65 3.4. Recommissioning and Decommissioning . . . . . . . . . . . 7 66 4. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 8 67 4.1. Environmental Monitoring . . . . . . . . . . . . . . . . 8 68 4.2. Infrastructure Monitoring . . . . . . . . . . . . . . . . 8 69 4.3. Industrial Applications . . . . . . . . . . . . . . . . . 9 70 4.4. Energy Management . . . . . . . . . . . . . . . . . . . . 11 71 4.5. Medical Applications . . . . . . . . . . . . . . . . . . 13 72 4.6. Building Automation . . . . . . . . . . . . . . . . . . . 14 73 4.7. Home Automation . . . . . . . . . . . . . . . . . . . . . 16 74 4.8. Transport Applications . . . . . . . . . . . . . . . . . 17 75 4.9. Community Network Applications . . . . . . . . . . . . . 19 76 4.10. Field Operations . . . . . . . . . . . . . . . . . . . . 21 77 5. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 78 6. Security Considerations . . . . . . . . . . . . . . . . . . . 22 79 7. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 22 80 8. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 23 81 9. Informative References . . . . . . . . . . . . . . . . . . . 23 82 Appendix A. Change Log . . . . . . . . . . . . . . . . . . . . . 24 83 A.1. draft-ietf-opsawg-coman-use-cases-02 - draft-ietf-opsawg- 84 coman-use-cases-03 . . . . . . . . . . . . . . . . . . . 24 85 A.2. draft-ietf-opsawg-coman-use-cases-01 - draft-ietf-opsawg- 86 coman-use-cases-02 . . . . . . . . . . . . . . . . . . . 25 87 A.3. draft-ietf-opsawg-coman-use-cases-00 - draft-ietf-opsawg- 88 coman-use-cases-01 . . . . . . . . . . . . . . . . . . . 26 89 A.4. draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg- 90 coman-use-cases-00 . . . . . . . . . . . . . . . . . . . 26 91 A.5. draft-ersue-constrained-mgmt-02-03 . . . . . . . . . . . 27 92 A.6. draft-ersue-constrained-mgmt-01-02 . . . . . . . . . . . 28 93 A.7. draft-ersue-constrained-mgmt-00-01 . . . . . . . . . . . 28 94 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 29 96 1. Introduction 98 Small devices with limited CPU, memory, and power resources, so 99 called constrained devices (aka. sensor, smart object, or smart 100 device) can be connected to a network. Such a network of constrained 101 devices itself may be constrained or challenged, e.g., with 102 unreliable or lossy channels, wireless technologies with limited 103 bandwidth and a dynamic topology, needing the service of a gateway or 104 proxy to connect to the Internet. In other scenarios, the 105 constrained devices can be connected to a non-constrained network 106 using off-the-shelf protocol stacks. Constrained devices might be in 107 charge of gathering information in diverse settings including natural 108 ecosystems, buildings, and factories and send the information to one 109 or more server stations. 
Network management is characterized by monitoring network status, detecting faults and inferring their causes, setting network parameters, and carrying out actions to remove faults, maintain normal operation, and improve network efficiency and application performance. The traditional network management application periodically collects information from the set of elements it needs to manage, processes the data, and presents the results to the network management users. Constrained devices, however, often have limited power, low transmission range, and might be unreliable. Such unreliability might arise from the device itself (e.g., an exhausted battery) or from the channel being constrained (i.e., low-capacity and high-latency). They might also need to work in hostile environments with advanced security requirements or need to be used in harsh environments for a long time without supervision. Due to such constraints, the management of a network with constrained devices offers different types of challenges compared to the management of a traditional IP network.

This document aims to provide an understanding of use cases for the management of networks in which constrained devices are involved. The document lists and discusses diverse use cases for management from the network as well as from the application point of view. The list of discussed use cases is not exhaustive, since other scenarios, currently unknown to the authors, are possible. The application scenarios discussed aim to show where networks of constrained devices are expected to be deployed. For each application scenario, we first briefly describe its characteristics, followed by a discussion of how network management can be provided, who is likely to be responsible for it, and on which time scale management operations are likely to be carried out.

A problem statement, deployment and management topology options, as well as the requirements on networks with constrained devices can be found in the companion document [COM-REQ].

This document builds on the terminology defined in [RFC7228] and [COM-REQ]. [RFC7228] is a base document for the terminology concerning constrained devices and constrained networks. Some use cases specific to IPv6 over Low-Power Wireless Personal Area Networks (6LoWPANs) can be found in [RFC6568].

2. Access Technologies

Besides the management requirements imposed by the different use cases, the access technologies used by constrained devices can impose restrictions and requirements upon the Network Management System (NMS) and protocol of choice.

It is possible that some networks of constrained devices utilize traditional non-constrained access technologies for network access, e.g., local area networks with plenty of capacity. In such scenarios, it is the constrainedness of the device, rather than the access technology utilized, that presents special management restrictions and requirements.

However, in other situations constrained or cellular access technologies might be used for network access, thereby causing management restrictions and requirements to arise as a result of the underlying access technologies.

A discussion regarding the impact of cellular and constrained access technologies is provided in this section since they impose some special requirements on the management of constrained networks.
On the other hand, fixed-line networks (e.g., power line communications) are not discussed here since they tend to be quite static and do not typically impose any special requirements on the management of the network.

2.1. Constrained Access Technologies

Due to resource restrictions, embedded devices deployed as sensors and actuators in the various use cases utilize low-power, low data-rate wireless access technologies such as IEEE 802.15.4, DECT ULE, or Bluetooth Low-Energy (BT-LE) for network connectivity.

In such scenarios, it is important for the NMS to be aware of the restrictions imposed by these access technologies in order to manage these constrained devices efficiently. Specifically, such low-power, low data-rate access technologies typically have small frame sizes. It is therefore important for the NMS and the management protocol of choice to craft packets in a way that avoids fragmentation and reassembly, since this can use valuable memory on constrained devices.

Devices using such access technologies might operate via a gateway that translates between these access technologies and more traditional Internet protocols. A hierarchical approach to device management in such a situation might be useful, wherein the gateway device is in charge of the devices connected to it, while the NMS conducts management operations only on the gateway.
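The following sketch illustrates this hierarchical pattern. It is purely illustrative and assumes a hypothetical gateway that keeps a local registry of the constrained devices attached to it and answers management queries on their behalf; the class and field names are assumptions made for this example and do not correspond to any particular management protocol or data model.

   # Illustrative sketch only: a hypothetical gateway that proxies
   # management state for the constrained devices attached to it, so
   # that the NMS interacts with the gateway rather than with each
   # device individually.

   class GatewayManagementProxy:
       def __init__(self, gateway_id):
           self.gateway_id = gateway_id
           self.devices = {}  # device_id -> last known status

       def record_report(self, device_id, battery_percent):
           # Store the latest report received over the constrained link.
           self.devices[device_id] = {"reachable": True,
                                      "battery": battery_percent}

       def summary_for_nms(self):
           # One small answer for the NMS instead of one transaction
           # per constrained device.
           return {"gateway": self.gateway_id,
                   "devices": len(self.devices),
                   "low_battery": [d for d, s in self.devices.items()
                                   if s["battery"] is not None
                                   and s["battery"] < 20]}

   gw = GatewayManagementProxy("gw-1")
   gw.record_report("sensor-17", battery_percent=15)
   print(gw.summary_for_nms())

A real deployment would map such state onto a concrete management protocol and data model; the point here is only that per-device interactions remain local to the gateway.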
2.2. Cellular Access Technologies

Machine-to-machine (M2M) services are increasingly provided by mobile service providers as numerous devices, such as home appliances, utility meters, cars, video surveillance cameras, and health monitors, are connected with mobile broadband technologies. Different applications, e.g., in a home appliance or in-car network, use Bluetooth, Wi-Fi, or ZigBee locally and connect to a cellular module acting as a gateway between the constrained environment and the mobile cellular network.

Such a gateway might provide different options for the connectivity of mobile networks and constrained devices:

o  a smart phone with 3G/4G and WLAN radio might use BT-LE to connect to the devices in a home area network,

o  a femtocell might be combined with home gateway functionality, acting as a low-power cellular base station connecting smart devices to the application server of a mobile service provider,

o  an embedded cellular module with LTE radio might connect the devices in the car network with the server running the telematics service,

o  an M2M gateway connected to the mobile operator network might support diverse IoT connectivity technologies including ZigBee and CoAP over 6LoWPAN over IEEE 802.15.4.

Common to all scenarios above is that they are embedded in a service and connected to a network provided by a mobile service provider. Usually there is a hierarchical deployment and management topology in place, where different parts of the network are managed by different management entities and the number of devices to manage is high (e.g., many thousands). In general, the network comprises devices of many types and sizes, matching different device classes. As such, the managing entity needs to be prepared to manage devices with diverse capabilities using different communication or management protocols. If the devices are directly connected to a gateway, they are most likely managed by a management entity integrated with the gateway, which itself is part of the Network Management System (NMS) run by the mobile operator. Smart phones or embedded modules connected to a gateway might themselves be in charge of managing the devices on their level. The initial and subsequent configuration of such a device is mainly based on self-configuration and is triggered by the device itself.

The gateway might be in charge of filtering and aggregating the data received from the device, as the information sent by the device might be largely redundant.

3. Device Lifecycle

Since constrained devices deployed in a network might go through multiple phases in their lifetime, it is possible for different managers of networks and/or devices to exist during different parts of the device lifetime. An in-depth discussion regarding possible device lifecycles can be found in [IOT-SEC].

3.1. Manufacturing and Initial Testing

Typically, the lifecycle of a device begins at the manufacturing stage. During this phase the manufacturer of the device is responsible for the management and configuration of the devices. It is also possible that a certain use case utilizes multiple types of constrained devices (e.g., temperature sensors, lighting controllers, etc.), and these could be manufactured by different entities. As such, during the manufacturing stage different managers can exist for different devices. Similarly, during the initial testing phase, where device quality assurance tasks might be performed, the manufacturer remains responsible for the management of the devices and of the networks that might comprise them.

3.2. Installation and Configuration

The responsibility of managing the devices must be transferred to the installer during the installation phase. There must exist procedures for transferring management responsibility between the manufacturer and the installer. The installer may be the customer or an intermediary contracted to set up the devices and their networks. It is important that the NMS utilized allows devices originating from different vendors to be managed, ensuring interoperability between them as well as the configuration of trust relationships between them.

It is possible that the installation and configuration responsibilities lie with different entities. For example, the installer of a device might only be responsible for cabling a network, physically installing the devices, and ensuring initial network connectivity between them (e.g., configuring IP addresses). Following such an installation, the customer or a sub-contractor might actually configure the operation of the device. As such, during installation and configuration multiple parties might be responsible for managing a device, and appropriate methods must be available to ensure that this management responsibility is transferred suitably.

3.3. Operation and Maintenance

At the outset of the operation phase, the operational responsibility for the device and network should be passed on to the customer. It is possible, however, that the customer contracts the maintenance of the devices and network to a sub-contractor.
In this case, the NMS and management protocol should allow for configuring different levels of access to the devices. Since different maintenance vendors might be used for devices that perform different functions (e.g., HVAC, lighting, etc.), it should also be possible to restrict management access to devices based on the currently responsible manager.

3.4. Recommissioning and Decommissioning

The owner of a device might choose to replace, repurpose, or even decommission it. In each of these cases, either the customer or the contracted maintenance agency must ensure that appropriate steps are taken to meet the end goal.

In case a device needs to be replaced, the manager of the network (the customer or the responsible contractor) must detach the device from the network, remove all appropriate configuration, and discard the device. A new device must then be configured to replace it. The NMS should allow for transferring the configuration of an existing device to its replacement. The management responsibility of the operation/maintenance manager ends once the device is removed from the network. During the installation of the new replacement device, the same responsibilities apply as during the Installation and Configuration phases.

The device being replaced may not yet have reached its end-of-life, and as such, instead of being discarded it may be installed in a new location. In this case, the management responsibilities once again rest with the entities responsible for the Installation and Configuration phases at the new location.

If a device is repurposed, then it is possible that the management responsibility for this device changes as well. For example, a device might be moved from one building to another. In this case, the managers responsible for devices and networks in each building could be different. As such, the NMS must not only allow for changing the configuration but also for transferring management responsibilities.

In case a device is decommissioned, the management responsibility typically ends at that point.

4. Use Cases

4.1. Environmental Monitoring

Environmental monitoring applications are characterized by the deployment of a number of sensors to monitor emissions, water quality, or even the movements and habits of wildlife. Other applications in this category include earthquake or tsunami early-warning systems. The sensors often span a large geographic area, they can be mobile, and they are often difficult to replace. Furthermore, the sensors are usually not protected against tampering.

Management of environmental monitoring applications is largely concerned with monitoring whether the system is still functional and with the roll-out of new constrained devices in case the system loses too much of its structure. The constrained devices themselves need to be able to establish connectivity (auto-configuration), and they need to be able to deal with events such as losing neighbors or being moved to other locations.

Management responsibility typically rests with the organization running the environmental monitoring application.
Since these 366 monitoring applications must be designed to tolerate a number of 367 failures, the time scale for detecting and recording failures is for 368 some of these applications likely measured in hours and repairs might 369 easily take days. In fact, in some scenarios it might be more cost- 370 and time-effective to not repair such devices at all. However, for 371 certain environmental monitoring applications, much tighter time 372 scales may exist and might be enforced by regulations (e.g., 373 monitoring of nuclear radiation). 375 4.2. Infrastructure Monitoring 377 Infrastructure monitoring is concerned with the monitoring of 378 infrastructures such as bridges, railway tracks, or (offshore) 379 windmills. The primary goal is usually to detect any events or 380 changes of the structural conditions that can impact the risk and 381 safety of the infrastructure being monitored. Another secondary goal 382 is to schedule repair and maintenance activities in a cost effective 383 manner. 385 The infrastructure to monitor might be in a factory or spread over a 386 wider area but difficult to access. As such, the network in use 387 might be based on a combination of fixed and wireless technologies, 388 which use robust networking equipment and support reliable 389 communication via application layer transactions. It is likely that 390 constrained devices in such a network are mainly C2 devices [RFC7228] 391 and have to be controlled centrally by an application running on a 392 server. In case such a distributed network is widely spread, the 393 wireless devices might use diverse long-distance wireless 394 technologies such as WiMAX, or 3G/LTE. In cases, where an in- 395 building network is involved, the network can be based on Ethernet or 396 wireless technologies suitable for in-building usage. 398 The management of infrastructure monitoring applications is primarily 399 concerned with the monitoring of the functioning of the system. 400 Infrastructure monitoring devices are typically rolled out and 401 installed by dedicated experts and changes are rare since the 402 infrastructure itself changes rarely. However, monitoring devices 403 are often deployed in unsupervised environments and hence special 404 attention must be given to protecting the devices from being 405 modified. 407 Management responsibility typically rests with the organization 408 owning the infrastructure or responsible for its operation. The time 409 scale for detecting and recording failures is likely measured in 410 hours and repairs might easily take days. However, certain events 411 (e.g., natural disasters) may require that status information be 412 obtained much more quickly and that replacements of failed sensors 413 can be rolled out quickly (or redundant sensors are activated 414 quickly). In case the devices are difficult to access, a self- 415 healing feature on the device might become necessary. 417 4.3. Industrial Applications 419 Industrial Applications and smart manufacturing refer to tasks such 420 as networked control and monitoring of manufacturing equipment, asset 421 and situation management, or manufacturing process control. For the 422 management of a factory it is becoming essential to implement smart 423 capabilities. From an engineering standpoint, industrial 424 applications are intelligent systems enabling rapid manufacturing of 425 new products, dynamic response to product demands, and real-time 426 optimization of manufacturing production and supply chain networks. 
427 Potential industrial applications (e.g., for smart factories and 428 smart manufacturing) are: 430 o Digital control systems with embedded, automated process controls, 431 operator tools, as well as service information systems optimizing 432 plant operations and safety. 434 o Asset management using predictive maintenance tools, statistical 435 evaluation, and measurements maximizing plant reliability. 437 o Smart sensors detecting anomalies to avoid abnormal or 438 catastrophic events. 440 o Smart systems integrated within the industrial energy management 441 system and externally with the smart grid enabling real-time 442 energy optimization. 444 Management of Industrial Applications and smart manufacturing may in 445 some situations involve Building Automation tasks such as control of 446 energy, HVAC (heating, ventilation, and air conditioning), lighting, 447 or access control. Interacting with management systems from other 448 application areas might be important in some cases (e.g., 449 environmental monitoring for electric energy production, energy 450 management for dynamically scaling manufacturing, vehicular networks 451 for mobile asset tracking). Management of constrained devices and 452 networks may not only refer to the management of their network 453 connectivity. Since the capabilities of constrained devices are 454 limited, it is quite possible that a management system would even be 455 required to configure, monitor and operate the primary functions that 456 a constrained device is utilized for, besides managing its network 457 connectivity. 459 Sensor networks are an essential technology used for smart 460 manufacturing. Measurements, automated controls, plant optimization, 461 health and safety management, and other functions are provided by a 462 large number of networked sectors. Data interoperability and 463 seamless exchange of product, process, and project data are enabled 464 through interoperable data systems used by collaborating divisions or 465 business systems. Intelligent automation and learning systems are 466 vital to smart manufacturing but must be effectively integrated with 467 the decision environment. The NMS utilized must ensure timely 468 delivery of sensor data to the control unit so it may take 469 appropriate decisions. Similarly, relaying of commands must also be 470 monitored and managed to ensure optimal functioning. Wireless sensor 471 networks (WSN) have been developed for machinery Condition-based 472 Maintenance (CBM) as they offer significant cost savings and enable 473 new functionalities. Inaccessible locations, rotating machinery, 474 hazardous areas, and mobile assets can be reached with wireless 475 sensors. WSNs can provide today wireless link reliability, real-time 476 capabilities, and quality-of-service and enable industrial and 477 related wireless sense and control applications. 479 Management of industrial and factory applications is largely focused 480 on monitoring whether the system is still functional, real-time 481 continuous performance monitoring, and optimization as necessary. 482 The factory network might be part of a campus network or connected to 483 the Internet. The constrained devices in such a network need to be 484 able to establish configuration themselves (auto-configuration) and 485 might need to deal with error conditions as much as possible locally. 486 Access control has to be provided with multi-level administrative 487 access and security. 
Support and diagnostics can be provided through remote monitoring access centralized outside of the factory.

Factory automation tasks require that continuous monitoring be used to optimize production. Groups of manufacturing and monitoring devices could be defined to establish relationships between them. To ensure timely optimization of processes, commands from the NMS must arrive at all destinations within an appropriate duration. This duration could change based on the manufacturing task being performed. Installation and operation of factory networks have different requirements. During the installation phase many networks, usually distributed along different parts of the factory/assembly line, co-exist without a connection to a common backbone. A specialized installation tool is typically used to configure the functions of different types of devices, in different factory locations, in a secure manner. At the end of the installation phase, interoperability between these stand-alone networks and devices must be enabled. During the operation phase, these stand-alone networks are connected to a common backbone so that they may retrieve control information from and send commands to appropriate devices.

Management responsibility typically rests with the organization running the industrial application. Since the monitoring applications must handle a potentially large number of failures, the time scale for detecting and recording failures is, for some of these applications, likely measured in minutes. However, for certain industrial applications much tighter time scales may exist, e.g., real-time, which might be enforced by the manufacturing process or the use of critical material.

4.4. Energy Management

The EMAN working group developed an energy management framework [RFC7326] for devices and device components within, or connected to, communication networks. This document observes that one of the challenges of energy management is that a power distribution network is responsible for the supply of energy to various devices and components, while a separate communication network is typically used to monitor and control the power distribution network. Devices in the context of energy management can be monitored for parameters like Power, Energy, Demand, and Power Quality. If a device contains batteries, they can also be monitored and managed.

Energy devices differ in complexity and may include basic sensors or switches, specialized electrical meters, power distribution units (PDUs), and subsystems inside network devices (routers, network switches) or home or industrial appliances. The operators of an Energy Management System are either the utility providers or customers that aim to control and reduce energy consumption and the associated costs. The topology in use differs, and the deployment can cover areas from small surfaces (individual homes) to large geographical areas. The EMAN requirements document [RFC6988] discusses the requirements for energy management concerning monitoring and control functions.

It is assumed that Energy Management will apply to a large range of devices of all classes and network topologies. Specific resource monitoring, like battery utilization and availability, may be specific to devices with lower physical resources (device classes C0 or C1 [RFC7228]).
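As a rough illustration of the kind of per-device record an energy management application might maintain, the sketch below groups the parameters mentioned above together with an optional battery level for C0/C1 class devices. The field names are assumptions made for this example only and are not attributes defined by [RFC7326] or [RFC6988].

   # Illustrative sketch only: a possible per-device monitoring record;
   # the field names are assumptions and not taken from the EMAN work.

   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class EnergyReading:
       device_id: str
       power_w: float                   # instantaneous power
       energy_kwh: float                # accumulated energy
       demand_w: Optional[float]        # averaged demand, if reported
       power_quality_ok: Optional[bool]
       battery_percent: Optional[int]   # battery-powered devices only

   def needs_attention(r: EnergyReading, threshold: int = 20) -> bool:
       # Battery monitoring mainly matters for C0/C1 class devices.
       if r.battery_percent is not None and r.battery_percent < threshold:
           return True
       return r.power_quality_ok is False

   r = EnergyReading("meter-7", power_w=3.2, energy_kwh=14.8,
                     demand_w=None, power_quality_ok=True,
                     battery_percent=17)
   print(needs_attention(r))  # True: battery below the threshold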
Energy Management is especially relevant to the Smart Grid. A Smart Grid is an electrical grid that uses data networks to gather and act on energy and power-related information in an automated fashion, with the goal of improving the efficiency, reliability, economics, and sustainability of the production and distribution of electricity.

Smart Metering is a good example of a Smart Grid based Energy Management application. Different types of possibly wireless small meters together produce a large amount of data, which is collected by a central entity and processed by an application server, which may be located within the customer's residence or off-site in a data center. The communication infrastructure can be provided by a mobile network operator, as the meters in urban areas will most likely have a cellular or WiMAX radio. In case the application server is located within the residence, such meters are more likely to use Wi-Fi protocols to interconnect with an existing network.

An Advanced Metering Infrastructure (AMI) network is another example of the Smart Grid; it enables an electric utility to retrieve frequent electric usage data from each electric meter installed at a customer's home or business. Unlike Smart Metering, in which case the customer or their agents install appliance-level meters, an AMI infrastructure is typically managed by the utility providers and could also include other distribution automation devices like transformers and reclosers. Meters in AMI networks typically contain constrained devices that connect to mesh networks with a low-bandwidth radio. Usage data and outage notifications can be sent by these meters to the utility's headend systems via aggregation points of higher-end router devices that bridge the constrained network to a less constrained network via cellular, WiMAX, or Ethernet. Unlike meters, these higher-end devices might be installed on utility poles owned and operated by a separate entity.

It thereby becomes important for a management application not only to be able to work with diverse types of devices, but also to work over multiple links that might be operated and managed by separate entities, each having divergent policies for their own devices and network segments. During management operations, like firmware updates, it is important that the management system performs robustly in order to avoid accidental outages of critical power systems that could be part of AMI networks. In fact, since AMI networks must also report on outages, the management system might have to manage the energy properties of battery-operated AMI devices themselves as well.

A management system for home-based Smart Metering solutions is likely to have devices laid out in a simple topology. However, AMI network installations could have thousands of nodes per router, i.e., higher-end device, which organize themselves in an ad-hoc manner. As such, a management system for AMI networks will need to discover and operate over complex topologies as well. In some situations, it is possible that the management system might also have to set up and manage the topology of nodes, especially critical routers. Encryption key management and sharing in both types of networks are also likely to be important for providing confidentiality for all data traffic.
In AMI networks the key may be obtained by a meter only after an end-to-end authentication process based on certificates. Smart Metering solutions could adopt a similar approach, or security may be provided by the encrypted Wi-Fi networks they become part of.

The management of such a network requires end-to-end management of, and information exchange through, different types of networks. However, as of today there is no integrated energy management approach and no common information model available. Specific energy management applications or network islands use their own management mechanisms.

4.5. Medical Applications

Constrained devices can be seen as an enabling technology for advanced and possibly remote health monitoring and emergency notification systems, ranging from blood pressure and heart rate monitors to advanced devices capable of monitoring implanted technologies, such as pacemakers or advanced hearing aids. Medical sensors may not only be attached to human bodies; they might also exist in the infrastructure used by humans, such as bathrooms or kitchens. Medical applications will also be used to ensure treatments are being applied properly, and they might guide people losing orientation. Fitness and wellness applications, such as connected scales or wearable heart monitors, encourage consumers to exercise and empower self-monitoring of key fitness indicators. Different applications use Bluetooth, Wi-Fi, or ZigBee connections to reach the patient's smartphone or home cellular connection in order to access the Internet.

Constrained devices that are part of medical applications are managed either by the users of those devices or by an organization providing medical (monitoring) services for physicians. In the first case, management must be automatic and/or easy to install and set up by average people. In the second case, it can be expected that devices are controlled by specially trained people. In both cases, however, it is crucial to protect the privacy of the people to whom medical devices are attached. Even though the data collected by a heartbeat monitor might be protected, the mere fact that someone carries such a device may need protection. As such, certain medical appliances may not want to participate in discovery and self-configuration protocols in order to remain invisible.

Many medical devices are likely to be used (and relied upon) to provide data to physicians in critical situations, since the biggest market is likely elderly and handicapped people. Timely delivery of data can be quite important in certain applications, like patient mobility monitoring in old-age homes. Data must reach the physician and/or emergency services within specified limits of time in order to be useful. As such, fault detection of the communication network or the constrained devices becomes a crucial function of the management system that must be carried out with high reliability and, depending on the medical appliance and its application, within seconds.

4.6. Building Automation

Building automation comprises the distributed systems designed and deployed to monitor and control the mechanical, electrical, and electronic systems inside buildings with various uses (e.g., public or private, industrial, institutional, or residential).
Advanced Building Automation Systems (BAS) may be deployed that concentrate the various functions of safety, environmental control, occupancy, and security. More and more, the deployment of the various functional systems is connected to the same communication infrastructure (possibly Internet Protocol based), which may involve wired or wireless communications networks inside the building.

Building automation requires the deployment of a large number (10 to 100,000) of sensors that monitor the status of devices and parameters inside the building, as well as controllers with different specialized functionality for areas within the building or the totality of the building. Distances between neighboring nodes vary between 1 and 20 meters. The NMS must, as a result, be able to manage and monitor a large number of devices, which may be organized in multi-hop meshed networks. The distances between the nodes, and the use of constrained protocols, mean that networks of nodes might be segmented. The management of such network segments and of the nodes in these segments should be possible. Contrary to home automation, in building management the devices are expected to be managed assets, known to a set of commissioning tools and a data store, such that every connected device has a known origin. This requires the management system to be able to discover devices on the network and to ensure that the expected list of devices is currently present. Management here includes verifying the presence of the expected devices and detecting the presence of unwanted devices.

Examples of functions performed by controllers in building automation are regulating the quality, humidity, and temperature of the air inside the building, as well as lighting. Other systems may report the status of machinery inside the building, like elevators, or inside the rooms, like projectors in meeting rooms. Security cameras and sensors may be deployed and operated on separate dedicated infrastructures connected to the common backbone. The deployment area of a BAS is typically inside one building (or part of it) or several buildings geographically grouped in a campus. A building network can be composed of network segments, where a network segment covers a floor, an area on the floor, or a given functionality (e.g., security cameras). It is possible that the management tasks of some types of devices might be separated from others (e.g., security cameras might operate and be managed via a separate network from the HVAC in a building).

Some of the sensors in Building Automation Systems (for example, fire alarms or security systems) register, record, and transfer critical alarm information and therefore must be resilient to events like loss of power or security attacks. A management system must be able to deal with unintentional segmentation of networks due to power loss or channel unavailability. It must also be able to detect security events. Due to the specific operating conditions required from certain devices, there might be a need to certify components and subsystems operating in such constrained conditions based on specific requirements. Also, in some environments, the malfunctioning of a control system (like temperature control) needs to be reported in the shortest possible time.
Complex control systems can misbehave, and their critical status reporting and safety algorithms need to be basic and robust and to perform even in critical conditions. Providing this monitoring, configuration, and notification service is an important task of the management system used in building automation.

Building automation solutions are in some cases deployed in newly designed buildings; in other cases they might be deployed over existing infrastructures. In the first case, there is a broader range of possible solutions, which can be planned for the infrastructure of the building. In the second case, the solution needs to be deployed over an existing infrastructure, taking into account factors like existing wiring, distance limitations, and the propagation of radio signals over walls and floors, thereby making deployment difficult. As a result, some of the existing wireless solutions (e.g., IEEE 802.11 or IEEE 802.15) may be deployed. In mission-critical or security-sensitive environments, and in cases where link failures happen often, topologies that allow for reconfiguration of the network and connection continuity may be required. Some of the sensors deployed in building automation may be very simple constrained devices, for which C0 or C1 [RFC7228] may be assumed.

For lighting applications, groups of lights must be defined and managed. Commands to a group of lights must arrive within 200 ms at all destinations. The installation and operation of a building network have different requirements. During the installation, many stand-alone networks of a few to 100 nodes co-exist without a connection to the backbone. During this phase, the nodes are identified with a network identifier related to their physical location. Devices are accessed from an installation tool to connect them to the network in a secure fashion. During installation, the setting of common parameter values to enable interoperability may be required. During operation, the networks are connected to the backbone while maintaining the relation between network identifier and physical location. Network parameters like address and name are stored in DNS. The names can assist in determining the physical location of the device.
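As an illustration of how such names might encode physical location, the small sketch below parses a hypothetical device name; the naming scheme shown is an assumption made for this example and is not prescribed by any building automation standard.

   # Illustrative sketch only: a hypothetical naming convention in
   # which a device name stored in DNS encodes the device's physical
   # location. The label structure is an assumption for this example.

   def location_from_name(fqdn):
       device, room, floor, building = fqdn.split(".")[:4]
       return {"device": device, "room": room,
               "floor": floor, "building": building}

   print(location_from_name(
       "lum-12.room-204.floor-2.building-a.example.com"))
   # {'device': 'lum-12', 'room': 'room-204',
   #  'floor': 'floor-2', 'building': 'building-a'}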
4.7. Home Automation

Home automation includes the control of lighting, heating, ventilation, air conditioning, appliances, entertainment, and home security devices to improve convenience, comfort, energy efficiency, and safety. It can be seen as a residential extension of building automation. However, unlike a building automation system, the infrastructure in a home is operated in a considerably more ad-hoc manner. While in some installations there is likely no centralized management system akin to a Building Automation System (BAS) available, in other situations outsourced and cloud-based systems responsible for managing devices in the home might be used.

Home automation networks need a certain amount of configuration (associating switches or sensors to actuators) that is provided either by electricians deploying home automation solutions, by third-party home automation service providers (e.g., small specialized companies or home automation device manufacturers), or by residents using the application user interface provided by home automation devices to configure (parts of) the home automation solution. Similarly, failures may be reported via suitable interfaces to residents, or they might be recorded and made available to service providers in charge of the maintenance of the home automation infrastructure.

The management responsibility lies either with the residents or it may be outsourced to electricians and/or third parties providing management of home automation solutions as a service. A varying combination of electricians, service providers, or the residents may be responsible for different aspects of managing the infrastructure. The time scale for failure detection and resolution is in many cases likely counted in hours to days.

4.8. Transport Applications

Transport application is a generic term for the integrated application of communications, control, and information processing in a transportation system. Transport telematics and vehicle telematics are terms used for the group of technologies that support transportation systems. Transport applications running on such a transportation system cover all modes of transport and consider all elements of the transportation system, i.e., the vehicle, the infrastructure, and the driver or user, interacting together dynamically. Examples of transport applications are inter- and intra-vehicle communication, smart traffic control, smart parking, electronic toll collection systems, logistics and fleet management, vehicle control, and safety and road assistance.

As a distributed system, transport applications require end-to-end management of different types of networks. It is likely that the constrained devices in a network (e.g., a moving in-car network) have to be controlled by an application running on an application server in the network of a service provider. Such a highly distributed network, including cellular devices on vehicles, is assumed to include a wireless access network using diverse long-distance wireless technologies such as WiMAX, 3G/LTE, or satellite communication, e.g., based on an embedded hardware module. As a result, the management of constrained devices in the transport system might need to be planned top-down and might need to use data models imposed by, and defined on, the application layer. The assumed device classes in use are mainly C2 devices [RFC7228]. In cases where an in-vehicle network is involved, C1 devices [RFC7228] with limited capabilities and a short-distance constrained radio network, e.g., IEEE 802.15.4, might be used additionally.

All transport applications will require an IT infrastructure to run on top of. In public transport scenarios like trains, buses, or metros, the network infrastructure might be provided, maintained, and operated by third parties like mobile network or satellite network operators. However, the management responsibility for the transport application typically rests with the organization running the transport application (in the public transport scenario, this would typically be the public transport operator). Different aspects of the infrastructure might also be managed by different entities. For example, the in-car devices are likely to be installed and managed by the manufacturer, while the public works authority might be responsible for the on-road vehicular communication infrastructure used by these devices.
The back-end infrastructure is also likely to be maintained by third-party operators. As such, the NMS must be able to deal with different network segments, each being operated and controlled by separate entities, and must enable appropriate access control and security as well.

Depending on the type of application domain (vehicular or stationary) and the service being provided, it would be important for the NMS to be able to function with different architectures, since different manufacturers might have their own proprietary systems relying on a specific Management Topology Option, as described in [COM-REQ]. Moreover, constituents of the network can be either private, belonging to individuals or private companies, or owned by public institutions, leading to different legal and organizational requirements. Across the entire infrastructure, a variety of constrained devices are likely to be used and must be individually managed. The NMS must be able either to work directly with different types of devices or to interoperate with multiple different systems.

The challenges in the management of vehicles in a mobile transport application are manifold. First, the up-to-date position of each node in the network should be reported to the corresponding management entities, since the nodes could be moving within or roaming between different networks. Second, a variety of troubleshooting information, including sensitive location information, needs to be reported to the management system in order to provide accurate service to the customer. Management systems dealing with mobile nodes could possibly exploit specific patterns in the mobility of the nodes. These patterns emerge due to repetitive vehicular usage in scenarios like people commuting to work, logistics supply vehicles transporting shipments between warehouses, etc. The NMS must also be able to handle partitioned networks, which would arise due to the dynamic nature of traffic resulting in large inter-vehicle gaps in sparsely populated scenarios. Since mobile nodes might roam in remote networks, the NMS should be able to provide operating configuration updates regardless of node location.

The constrained devices in a moving transport network might be initially configured in a factory, and a reconfiguration might be needed only rarely. New devices might be integrated in an ad-hoc manner based on self-management and self-configuration capabilities. Monitoring and data exchange might need to be done via a gateway entity connected to the back-end transport infrastructure. The devices and entities in the transport infrastructure need to be monitored more frequently and are able to communicate at a higher data rate. The connectivity of such entities does not necessarily need to be wireless. The time scale for detecting and recording failures in a moving transport network is likely measured in hours, and repairs might easily take days. It is likely that a self-healing feature would be used locally. On the other hand, the time scale for detecting failures in the fixed transport application infrastructure (e.g., traffic lights, digital signage displays) is likely measured in minutes so as to avoid untoward traffic incidents. As such, the NMS must be able to deal with differing timeliness requirements based on the type of devices.

4.9.
Community Network Applications 890 Community networks are comprised of constrained routers in a multi- 891 hop mesh topology, communicating over a lossy, and often wireless 892 channels. While the routers are mostly non-mobile, the topology may 893 be very dynamic because of fluctuations in link quality of the 894 (wireless) channel caused by, e.g., obstacles, or other nearby radio 895 transmissions. Depending on the routers that are used in the 896 community network, the resources of the routers (memory, CPU) may be 897 more or less constrained - available resources may range from only a 898 few kilobytes of RAM to several megabytes or more, and CPUs may be 899 small and embedded, or more powerful general-purpose processors. 900 Examples of such community networks are the FunkFeuer network 901 (Vienna, Austria), FreiFunk (Berlin, Germany), Seattle Wireless 902 (Seattle, USA), and AWMN (Athens, Greece). These community networks 903 are public and non-regulated, allowing their users to connect to each 904 other and - through an uplink to an ISP - to the Internet. No fee, 905 other than the initial purchase of a wireless router, is charged for 906 these services. Applications of these community networks can be 907 diverse, e.g., location based services, free Internet access, file 908 sharing between users, distributed chat services, social networking, 909 video sharing, etc. 911 As an example of a community network, the FunkFeuer network comprises 912 several hundred routers, many of which have several radio interfaces 913 (with omnidirectional and some directed antennas). The routers of 914 the network are small-sized wireless routers, such as the Linksys 915 WRT54GL, available in 2011 for less than 50 Euros. These routers, 916 with 16 MB of RAM and 264 MHz of CPU power, are mounted on the 917 rooftops of the users. When new users want to connect to the 918 network, they acquire a wireless router, install the appropriate 919 firmware and routing protocol, and mount the router on the rooftop. 920 IP addresses for the router are assigned manually from a list of 921 addresses (because of the lack of auto-configuration standards for 922 mesh networks in the IETF). 924 While the routers are non-mobile, fluctuations in link quality 925 require an ad hoc routing protocol that allows for quick convergence 926 to reflect the effective topology of the network (such as NHDP 927 [RFC6130] and OLSRv2 [RFC7181] developed in the MANET WG). Usually, 928 no human interaction is required for these protocols, as all variable 929 parameters required by the routing protocol are either negotiated in 930 the control traffic exchange, or are only of local importance to each 931 router (i.e. do not influence interoperability). However, external 932 management and monitoring of an ad hoc routing protocol may be 933 desirable to optimize parameters of the routing protocol. Such an 934 optimization may lead to a more stable perceived topology and to a 935 lower control traffic overhead, and therefore to a higher delivery 936 success ratio of data packets, a lower end-to-end delay, and less 937 unnecessary bandwidth and energy usage. 939 Different use cases for the management of community networks are 940 possible: 942 o One single Network Management Station, e.g. a border gateway 943 providing connectivity to the Internet, requires managing or 944 monitoring routers in the community network, in order to 945 investigate problems (monitoring) or to improve performance by 946 changing parameters (managing). 
As the topology of the network is dynamic, constant connectivity of each router towards the management station cannot be guaranteed. Current network management protocols, such as SNMP and NETCONF, may be used (e.g., using interfaces such as the NHDP-MIB [RFC6779]). However, when routers in the community network are constrained, existing protocols may require too many resources in terms of memory and CPU; and, more importantly, the bandwidth requirements may exceed the available channel capacity in wireless mesh networks. Moreover, management and monitoring may be unfeasible if the connection between the network management station and the routers is frequently interrupted.

o  Distributed network monitoring, in which more than one management station monitors or manages other routers. Because connectivity to a server cannot be guaranteed at all times, a distributed approach may provide higher reliability, at the cost of increased complexity. Currently, no IETF standard exists for distributed monitoring and management.

o  Monitoring and management of a whole network or a group of routers. Monitoring the performance of a community network may require more information than what can be acquired from a single router using a network management protocol. Statistics, such as topology changes over time, data throughput along certain routing paths, congestion, etc., are of interest for a group of routers (or the routing domain) as a whole. As of 2014, no IETF standard allows for monitoring or managing whole networks instead of single routers.

4.10. Field Operations

The challenges of configuring and monitoring networks operated in the field by rescue and security agencies can differ from the other use cases since the requirements and operating conditions of such networks are quite different.

With technological advancements, field networks operated nowadays are becoming large and can consist of a variety of equipment types running different protocols and tools, which increases the complexity of these mission-critical networks. In many scenarios, configurations are most likely performed manually. Furthermore, some legacy and even modern devices do not even support IP networking. A majority of the vendor-developed protocols and tools in use are proprietary, which makes integration more difficult.

The main reason for this disjoint operation scenario is that most equipment is developed with specific task requirements in mind, rather than interoperability of the varied equipment types. For example, the operating conditions experienced by high-altitude security equipment are significantly different from those experienced by equipment used in desert conditions. Similarly, search and rescue equipment used for fire rescue has different requirements than flood relief equipment. Furthermore, interoperation with telecommunication equipment was often not an expected outcome, and in some scenarios it may not even be desirable.

Currently, field networks operate with a fixed Network Operations Center (NOC) that physically manages the configuration and evaluation of all field devices. Once configured, the devices might be deployed in fixed or mobile scenarios.
Hierarchical management of devices is a common requirement in such
scenarios, since local managers or operators may need to respond to
changing conditions within their purview.  The level of configuration
management available at each level of the hierarchy must also be
closely governed.

Since many field operation devices are used in hostile environments,
a high failure and disconnection rate should be tolerated by the NMS,
which must also be able to deal with multiple gateways and disjoint
management protocols.

Multi-national field operations involving search, rescue, and
security are becoming increasingly common, requiring interoperation
of a diverse set of equipment designed with different operating
conditions in mind.  Furthermore, different intra- and
inter-governmental agencies are likely to have different sets of
standards, best practices, rules and regulations, and implementation
approaches that may contradict or conflict with each other.  The NMS
should be able to detect such conflicts and handle them in an
acceptable manner, which may require human intervention.

5.  IANA Considerations

This document does not introduce any new code points or namespaces
for registration with IANA.

Note to RFC Editor: this section may be removed on publication as an
RFC.

6.  Security Considerations

This document discusses use cases for the management of networks with
constrained devices.  The security considerations described
throughout the companion document [COM-REQ] apply here as well.

7.  Contributors

The following persons made significant contributions to and reviewed
this document:

o  Ulrich Herberg (Fujitsu Laboratories of America) contributed
   Section 4.9 on Community Network Applications.

o  Peter van der Stok contributed to Section 4.6 on Building
   Automation.

o  Zhen Cao contributed to Section 2.2 on Cellular Access
   Technologies.

o  Gilman Tolle contributed Section 4.4 on Automated Metering
   Infrastructure.

o  James Nguyen and Ulrich Herberg contributed to Section 4.10 on
   Field Operations (formerly Military Operations).

8.  Acknowledgments

The following persons reviewed and provided valuable comments on
different versions of this document:

Dominique Barthel, Carsten Bormann, Zhen Cao, Benoit Claise, Bert
Greevenbosch, Ulrich Herberg, James Nguyen, Zach Shelby, and Peter
van der Stok.

The editors would like to thank the reviewers and the participants on
the Coman mailing list for their valuable contributions and comments.

9.  Informative References

[RFC6130]  Clausen, T., Dearlove, C., and J. Dean, "Mobile Ad Hoc
           Network (MANET) Neighborhood Discovery Protocol (NHDP)",
           RFC 6130, April 2011.

[RFC6568]  Kim, E., Kaspar, D., and JP. Vasseur, "Design and
           Application Spaces for IPv6 over Low-Power Wireless
           Personal Area Networks (6LoWPANs)", RFC 6568, April 2012.

[RFC6779]  Herberg, U., Cole, R., and I. Chakeres, "Definition of
           Managed Objects for the Neighborhood Discovery Protocol",
           RFC 6779, October 2012.

[RFC6988]  Quittek, J., Chandramouli, M., Winter, R., Dietz, T., and
           B. Claise, "Requirements for Energy Management", RFC 6988,
           September 2013.

[RFC7181]  Clausen, T., Dearlove, C., Jacquet, P., and U. Herberg,
           "The Optimized Link State Routing Protocol Version 2",
           RFC 7181, April 2014.
[RFC7228]  Bormann, C., Ersue, M., and A. Keranen, "Terminology for
           Constrained-Node Networks", RFC 7228, May 2014.

[RFC7326]  Parello, J., Claise, B., Schoening, B., and J. Quittek,
           "Energy Management Framework", RFC 7326, September 2014.

[COM-REQ]  Ersue, M., Romascanu, D., and J. Schoenwaelder,
           "Management of Networks with Constrained Devices: Problem
           Statement and Requirements", draft-ietf-opsawg-coman-
           probstate-reqs (work in progress), February 2014.

[IOT-SEC]  Garcia-Morchon, O., Kumar, S., Keoh, S., Hummen, R., and
           R. Struik, "Security Considerations in the IP-based
           Internet of Things", draft-garcia-core-security-06 (work
           in progress), September 2013.

Appendix A.  Change Log

A.1.  draft-ietf-opsawg-coman-use-cases-02 - draft-ietf-opsawg-coman-
      use-cases-03

o  Updated references to take into account RFCs that have now been
   published.

o  Added text to the access technologies section explaining why
   fixed-line technologies (e.g., powerline communications) have not
   been discussed.

o  Created a new section, Device Lifecycle, discussing the impact of
   different device lifecycle stages on the management of constrained
   networks.

o  Homogenized usage of the device classes C0, C1, and C2.

o  Ensured consistent usage of Wi-Fi, ZigBee, and other terminology.

o  Added text clarifying the management aspects of the Building
   Automation and Industrial Automation use cases.

o  Clarified the meaning of unreliability in the context of
   constrained devices and networks.

o  Added information regarding the configuration and operation of the
   factory automation use case, based on the type of information
   provided in the building automation use case.

o  Fixed editorial issues discovered by reviewers.

A.2.  draft-ietf-opsawg-coman-use-cases-01 - draft-ietf-opsawg-coman-
      use-cases-02

o  Renamed the Mobile Access Technologies section to Cellular Access
   Technologies.

o  Changed references to mobile access technologies to read cellular
   access technologies.

o  Added text to the introduction to point out that the list of use
   cases is not exhaustive, since others unknown to the authors might
   exist.

o  Updated references to take into account RFCs that have now been
   published.

o  Updated the Environmental Monitoring section to make it clear that
   in some scenarios it may not be prudent to repair devices.

o  Added a clarification in the Infrastructure Monitoring section
   that reliable communication is achieved via application-layer
   transactions.

o  Removed the reference to Energy Devices from the Energy Management
   section, instead labeling them as devices within the context of
   energy management.

o  Reduced descriptive content in the Energy Management section.

o  Rewrote text in the Energy Management section to highlight
   management characteristics of Smart Meter and AMI networks.

o  Added text regarding timely delivery of information, and the
   related management system characteristics, to the Medical
   Applications section.

o  Changed subnets to network segment in the Building Automation
   section.

o  Changed structure to infrastructure in the Building Automation
   section, and added text to highlight associated deployment
   difficulties.
o  Removed the Trickle timer as an example of common values to be set
   in the Building Automation section.

o  Added text regarding the possible availability of outsourced and
   cloud-based management systems for Home Automation.

o  Added text to the Transport Applications section to highlight the
   requirement for an IT infrastructure on top of which such
   applications can function.

o  Merged the Transport Applications and Vehicular Networks sections.
   The following changes to the Vehicular Networks section were
   merged back into Transport Applications:

   *  Replaced wireless last hops with wireless access to vehicles in
      Vehicular Networks.

   *  Expanded proprietary systems to "systems relying on a specific
      Management Topology Option, as described in [COM-REQ]" within
      the Vehicular Networks section.

   *  Added text regarding mobility patterns to Vehicular Networks.

o  Changed the Military Operations use case to Field Operations and
   edited the text to be suitable to such scenarios.

A.3.  draft-ietf-opsawg-coman-use-cases-00 - draft-ietf-opsawg-coman-
      use-cases-01

o  Reordered some use cases to improve the flow.

o  Added "Vehicular Networks".

o  Shortened the Military Operations use case.

o  Started adding substance to the Security Considerations section.

A.4.  draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg-coman-use-
      cases-00

o  Reduced the terminology section for terminology addressed in the
   LWIG and Coman Requirements drafts.  Referenced the other drafts.

o  Checked and aligned all terminology against the LWIG terminology
   draft.

o  Spent some effort to resolve the overlap between the Industrial
   Application, Home Automation, and Building Automation use cases.

o  Moved Section 3, Use Cases, from the companion document [COM-REQ]
   to this draft.

o  Reformulated some text for more clarity.

A.5.  draft-ersue-constrained-mgmt-02-03

o  Extended the terminology section and removed some of the
   terminology addressed in the new LWIG terminology draft.
   Referenced the LWIG terminology draft.

o  Moved Section 1.3 on Constrained Device Classes to the new LWIG
   terminology draft.

o  Extended the class of networks, considering the different types of
   radio and communication technologies in use and their dimensions.

o  Extended the Problem Statement in Section 2 following the
   requirements listed in Section 4.

o  The following requirements, which belong together and can be
   realized with similar or the same kinds of solutions, have been
   merged:

   *  Distributed Management and Peer Configuration,

   *  Device status monitoring and Neighbor-monitoring,

   *  Passive Monitoring and Reactive Monitoring,

   *  Event-driven self-management - Self-healing and Periodic self-
      management,

   *  Authentication of management systems and Authentication of
      managed devices,

   *  Access control on devices and Access control on management
      systems,

   *  Management of Energy Resources and Data models for energy
      management,

   *  Software distribution (group-based firmware update) and Group-
      based provisioning.

o  Deleted the empty section on the gaps in network management
   standards, as it will be written in a separate draft.

o  Added links to mentioned external pages.

o  Added text on the OMA M2M Device Classification in the appendix.

A.6.  draft-ersue-constrained-mgmt-01-02
o  Extended the terminology section.

o  Added text for the use cases concerning deployment type, network
   topology in use, network size, network capabilities, radio
   technology, etc.

o  Added examples for device classes in a use case.

o  Added text provided by Cao Zhen (China Mobile) for Mobile
   Applications and by Peter van der Stok for Building Automation.

o  Added the new use cases 'Advanced Metering Infrastructure' and
   'MANET Concept of Operations in Military'.

o  Added the section 'Managing the Constrainedness of a Device or
   Network' discussing the needs of very constrained devices.

o  Added a note that the requirements in [COM-REQ] need to be seen as
   standalone requirements and that the current document does not
   recommend any profile of requirements.

o  Added a section in [COM-REQ] for the detailed requirements on
   constrained management, matched to management tasks such as fault
   management, monitoring, configuration management, security and
   access control, energy management, etc.

o  Fixed nits and added references.

o  Added Appendix A on related developments in other bodies.

o  Added Appendix B on the work in related research projects.

A.7.  draft-ersue-constrained-mgmt-00-01

o  Split the section on 'Networks of Constrained Devices' into the
   sections 'Network Topology Options' and 'Management Topology
   Options'.

o  Added the use cases 'Community Network Applications' and 'Mobile
   Applications'.

o  Provided a Contributors section.

o  Extended the section on 'Medical Applications'.

o  Fixed nits and added references.

Authors' Addresses

Mehmet Ersue (editor)
Nokia Networks

Email: mehmet.ersue@nsn.com

Dan Romascanu
Avaya

Email: dromasca@avaya.com

Juergen Schoenwaelder
Jacobs University Bremen

Email: j.schoenwaelder@jacobs-university.de

Anuj Sehgal
Jacobs University Bremen

Email: s.anuj@jacobs-university.de