Internet Engineering Task Force                           M. Ersue, Ed.
Internet-Draft                             Nokia Solutions and Networks
Intended status: Informational                             D. Romascanu
Expires: August 18, 2014                                           Avaya
                                                        J. Schoenwaelder
                                                               A. Sehgal
                                                Jacobs University Bremen
                                                       February 14, 2014

      Management of Networks with Constrained Devices: Use Cases
                  draft-ietf-opsawg-coman-use-cases-01

Abstract

   This document discusses use cases for the management of networks in
   which constrained devices are involved.  A problem statement,
   deployment options, and the requirements on networks with
   constrained devices can be found in the companion document
   "Management of Networks with Constrained Devices: Problem Statement
   and Requirements".

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 18, 2014.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Access Technologies
     2.1.  Constrained Access Technologies
     2.2.  Mobile Access Technologies
   3.  Use Cases
     3.1.  Environmental Monitoring
     3.2.  Infrastructure Monitoring
     3.3.  Industrial Applications
     3.4.  Energy Management
     3.5.  Medical Applications
     3.6.  Building Automation
     3.7.  Home Automation
     3.8.  Transport Applications
     3.9.  Vehicular Networks
     3.10. Community Network Applications
     3.11. Military Operations
   4.  IANA Considerations
   5.  Security Considerations
   6.  Contributors
   7.  Acknowledgments
   8.  Informative References
   Appendix A.  Open Issues
   Appendix B.  Change Log
     B.1.  draft-ietf-opsawg-coman-use-cases-00 -
           draft-ietf-opsawg-coman-use-cases-01
     B.2.  draft-ersue-constrained-mgmt-03 -
           draft-ersue-opsawg-coman-use-cases-00
     B.3.  draft-ersue-constrained-mgmt-02-03
     B.4.  draft-ersue-constrained-mgmt-01-02
     B.5.  draft-ersue-constrained-mgmt-00-01
   Authors' Addresses

1.  Introduction

   Small devices with limited CPU, memory, and power resources, so-
   called constrained devices (also known as sensors, smart objects, or
   smart devices), can be connected to a network.  Such a network of
   constrained devices may itself be constrained or challenged, e.g.,
   with unreliable or lossy channels or wireless technologies with
   limited bandwidth and a dynamic topology, and may need the services
   of a gateway or proxy to connect to the Internet.  In other
   scenarios, the constrained devices can be connected to a
   non-constrained network using off-the-shelf protocol stacks.
   Constrained devices might be in charge of gathering information in
   diverse settings, including natural ecosystems, buildings, and
   factories, and sending the information to one or more server
   stations.

   Network management is characterized by monitoring network status,
   detecting faults and inferring their causes, setting network
   parameters, and carrying out actions to remove faults, maintain
   normal operation, and improve network efficiency and application
   performance.
   The traditional network management application periodically collects
   information from the set of elements it needs to manage, processes
   the data, and presents it to the network management users.
   Constrained devices, however, often have limited power and low
   transmission range, and they might be unreliable.  They might also
   need to work in hostile environments with advanced security
   requirements, or they might need to be used in harsh environments
   for a long time without supervision.  Due to such constraints, the
   management of a network with constrained devices poses different
   types of challenges compared to the management of a traditional IP
   network.

   This document aims to understand the use cases for the management of
   a network where constrained devices are involved.  The document
   lists and discusses diverse use cases for management from the
   network as well as from the application point of view.  The
   application scenarios discussed aim to show where networks of
   constrained devices are expected to be deployed.  For each
   application scenario, we first briefly describe its characteristics,
   followed by a discussion of how network management can be provided,
   who is likely to be responsible for it, and on which time scale
   management operations are likely to be carried out.

   A problem statement, deployment and management topology options, as
   well as the requirements on networks with constrained devices can be
   found in the companion document [COM-REQ].

   This document builds on the terminology defined in
   [I-D.ietf-lwig-terminology] and [COM-REQ].
   [I-D.ietf-lwig-terminology] is a base document for the terminology
   concerning constrained devices and constrained networks.  Some use
   cases specific to IPv6 over Low-Power Wireless Personal Area
   Networks (6LoWPANs) can be found in [RFC6568].

2.  Access Technologies

   Besides the management requirements imposed by the different use
   cases, the access technologies used by constrained devices can
   impose restrictions and requirements upon the Network Management
   System (NMS) and the protocol of choice.

   Some networks of constrained devices might utilize traditional
   non-constrained access technologies for network access, e.g., local
   area networks with plenty of capacity.  In such scenarios, it is the
   constrainedness of the device, rather than the access technology
   utilized, that creates special management restrictions and
   requirements.

   In other situations, however, constrained or mobile access
   technologies might be used for network access, and management
   restrictions and requirements then arise from the underlying access
   technologies.

2.1.  Constrained Access Technologies

   Due to resource restrictions, embedded devices deployed as sensors
   and actuators in the various use cases utilize low-power, low
   data-rate wireless access technologies such as IEEE 802.15.4, DECT
   ULE, or BT-LE for network connectivity.

   In such scenarios, it is important for the NMS to be aware of the
   restrictions imposed by these access technologies in order to
   efficiently manage the constrained devices.  Specifically, such
   low-power, low data-rate access technologies typically have small
   frame sizes.  It is therefore important for the NMS and the
   management protocol of choice to craft packets in a way that avoids
   fragmentation and reassembly, since these can use up valuable memory
   on constrained devices.
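   As an illustration, a management agent could budget-check each
   encoded message against the link-layer frame budget before sending
   and split larger configuration objects into several self-contained
   messages.  The following Python sketch uses illustrative overhead
   figures; the actual values depend on the MAC addressing modes,
   link-layer security, and header compression in use.

      # Minimal sketch: check a management payload against a single
      # IEEE 802.15.4 frame to avoid 6LoWPAN fragmentation.  The header
      # sizes below are illustrative assumptions, not normative values.

      IEEE802154_MTU = 127      # maximum PHY payload (octets)
      MAC_OVERHEAD = 25         # assumed worst-case MAC header + FCS
      SECURITY_OVERHEAD = 21    # assumed AES-CCM auxiliary header + MIC
      SIXLOWPAN_OVERHEAD = 10   # assumed compressed IPv6/UDP headers

      def fits_in_single_frame(payload: bytes) -> bool:
          """Return True if the payload avoids link-layer fragmentation."""
          budget = (IEEE802154_MTU - MAC_OVERHEAD
                    - SECURITY_OVERHEAD - SIXLOWPAN_OVERHEAD)
          return len(payload) <= budget

      # A management protocol could, e.g., keep each configuration
      # fragment small enough to fit the remaining 71-octet budget:
      config = b'{"reporting-interval": 60, "threshold": 42}'
      assert fits_in_single_frame(config)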
   Devices using such access technologies might operate via a gateway
   that translates between these access technologies and more
   traditional Internet protocols.  A hierarchical approach to device
   management in such a situation might be useful, wherein the gateway
   device is in charge of the devices connected to it, while the NMS
   conducts management operations only on the gateway.

2.2.  Mobile Access Technologies

   Machine-to-machine (M2M) services are increasingly provided by
   mobile service providers as numerous devices, such as home
   appliances, utility meters, cars, video surveillance cameras, and
   health monitors, are connected with mobile broadband technologies.
   Different applications, e.g., in a home appliance or in-car network,
   use Bluetooth, Wi-Fi, or ZigBee locally and connect to a cellular
   module acting as a gateway between the constrained environment and
   the mobile cellular network.

   Such a gateway might provide different options for the connectivity
   of mobile networks and constrained devices:

   o  a smart phone with 3G/4G and WLAN radio might use BT-LE to
      connect to the devices in a home area network,

   o  a femtocell might be combined with home gateway functionality,
      acting as a low-power cellular base station that connects smart
      devices to the application server of a mobile service provider,

   o  an embedded cellular module with LTE radio might connect the
      devices in the car network with the server running the telematics
      service,

   o  an M2M gateway connected to the mobile operator network might
      support diverse IoT connectivity technologies, including ZigBee
      and CoAP over 6LoWPAN over IEEE 802.15.4.

   Common to all scenarios above is that the devices are embedded in a
   service and connected to a network provided by a mobile service
   provider.  Usually there is a hierarchical deployment and management
   topology in place, where different parts of the network are managed
   by different management entities and the number of devices to manage
   is high (e.g., many thousands).  In general, the network comprises
   devices of manifold types and sizes, matching different device
   classes.  As such, the managing entity needs to be prepared to
   manage devices with diverse capabilities using different
   communication or management protocols.  In case the devices are
   directly connected to a gateway, they are most likely managed by a
   management entity integrated with the gateway, which itself is part
   of the Network Management System (NMS) run by the mobile operator.
   Smart phones or embedded modules connected to a gateway might
   themselves be in charge of managing the devices on their level.  The
   initial and subsequent configuration of such a device is mainly
   based on self-configuration and is triggered by the device itself.

   The gateway might be in charge of filtering and aggregating the data
   received from the devices, as the information sent by a device might
   be mostly redundant.
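   A minimal sketch of such gateway-side aggregation is shown below; it
   buffers readings per device and forwards one averaged value per
   window to the NMS.  The report format, window policy, and forwarding
   callback are assumptions made for illustration, not part of any
   specific management protocol.

      # Illustrative gateway-side filtering/aggregation of redundant
      # device reports before forwarding them upstream to the NMS.
      from statistics import mean
      from typing import Callable

      class AggregatingGateway:
          def __init__(self, window: int,
                       forward: Callable[[str, float], None]):
              self.window = window    # number of reports to aggregate
              self.forward = forward  # upstream callback to the NMS
              self.buffer: dict[str, list[float]] = {}

          def report(self, device_id: str, value: float) -> None:
              """Buffer a reading; forward one averaged value per window."""
              readings = self.buffer.setdefault(device_id, [])
              readings.append(value)
              if len(readings) >= self.window:
                  self.forward(device_id, mean(readings))
                  readings.clear()

      gw = AggregatingGateway(window=10, forward=lambda d, v: print(d, v))
      for _ in range(10):
          gw.report("meter-1", 21.5)  # ten redundant readings, one update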
3.  Use Cases

3.1.  Environmental Monitoring

   Environmental monitoring applications are characterized by the
   deployment of a number of sensors that monitor emissions, water
   quality, or even the movements and habits of wildlife.  Other
   applications in this category include earthquake or tsunami
   early-warning systems.  The sensors often span a large geographic
   area, they can be mobile, and they are often difficult to replace.
   Furthermore, the sensors are usually not protected against
   tampering.

   Management of environmental monitoring applications is largely
   concerned with monitoring whether the system is still functional and
   with the roll-out of new constrained devices in case the system
   loses too much of its structure.  The constrained devices themselves
   need to be able to establish connectivity (auto-configuration), and
   they need to be able to deal with events such as losing neighbors or
   being moved to other locations.

   Management responsibility typically rests with the organization
   running the environmental monitoring application.  Since these
   monitoring applications must be designed to tolerate a number of
   failures, the time scale for detecting and recording failures is,
   for some of these applications, likely measured in hours, and
   repairs might easily take days.  However, for certain environmental
   monitoring applications, much tighter time scales may exist and
   might be enforced by regulations (e.g., monitoring of nuclear
   radiation).

3.2.  Infrastructure Monitoring

   Infrastructure monitoring is concerned with the monitoring of
   infrastructures such as bridges, railway tracks, or (offshore)
   windmills.  The primary goal is usually to detect any events or
   changes of the structural conditions that can impact the risk and
   safety of the infrastructure being monitored.  A secondary goal is
   to schedule repair and maintenance activities in a cost-effective
   manner.

   The infrastructure to monitor might be in a factory or spread over a
   wider area that is difficult to access.  As such, the network in use
   might be based on a combination of fixed and wireless technologies
   that use robust networking equipment and support reliable
   communication.  It is likely that constrained devices in such a
   network are mainly C2 devices that have to be controlled centrally
   by an application running on a server.  In case such a distributed
   network is widely spread, the wireless devices might use diverse
   long-distance wireless technologies such as WiMAX or 3G/LTE, e.g.,
   based on embedded hardware modules.  In cases where an in-building
   network is involved, the network can be based on Ethernet or
   wireless technologies suitable for in-building usage.

   The management of infrastructure monitoring applications is
   primarily concerned with monitoring the functioning of the system.
   Infrastructure monitoring devices are typically rolled out and
   installed by dedicated experts, and changes are rare since the
   infrastructure itself changes rarely.  However, monitoring devices
   are often deployed in unsupervised environments, and hence special
   attention must be given to protecting the devices from being
   modified.

   Management responsibility typically rests with the organization
   owning the infrastructure or responsible for its operation.  The
   time scale for detecting and recording failures is likely measured
   in hours, and repairs might easily take days.
   However, certain events (e.g., natural disasters) may require that
   status information be obtained much more quickly and that
   replacements for failed sensors can be rolled out quickly (or that
   redundant sensors are activated quickly).  In case the devices are
   difficult to access, a self-healing feature on the device might
   become necessary.

3.3.  Industrial Applications

   Industrial applications and smart manufacturing refer to tasks such
   as networked control and monitoring of manufacturing equipment,
   asset and situation management, or manufacturing process control.
   For the management of a factory, it is becoming essential to
   implement smart capabilities.  From an engineering standpoint,
   industrial applications are intelligent systems enabling rapid
   manufacturing of new products, dynamic response to product demands,
   and real-time optimization of manufacturing production and supply
   chain networks.  Potential industrial applications (e.g., for smart
   factories and smart manufacturing) are:

   o  Digital control systems with embedded, automated process
      controls, operator tools, and service information systems
      optimizing plant operations and safety.

   o  Asset management using predictive maintenance tools, statistical
      evaluation, and measurements maximizing plant reliability.

   o  Smart sensors detecting anomalies to avoid abnormal or
      catastrophic events.

   o  Smart systems integrated within the industrial energy management
      system and externally with the smart grid, enabling real-time
      energy optimization.

   Management of industrial applications and smart manufacturing may in
   some situations involve building automation tasks such as control of
   energy, HVAC (heating, ventilation, and air conditioning), lighting,
   or access control.  Interacting with management systems from other
   application areas might be important in some cases (e.g.,
   environmental monitoring for electric energy production, energy
   management for dynamically scaling manufacturing, vehicular networks
   for mobile asset tracking).

   Sensor networks are an essential technology used for smart
   manufacturing.  Measurements, automated controls, plant
   optimization, health and safety management, and other functions are
   provided by a large number of networked sensors.  Data
   interoperability and seamless exchange of product, process, and
   project data are enabled through interoperable data systems used by
   collaborating divisions or business systems.  Intelligent automation
   and learning systems are vital to smart manufacturing but must be
   effectively integrated with the decision environment.  Wireless
   sensor networks (WSNs) have been developed for machinery
   condition-based maintenance (CBM), as they offer significant cost
   savings and enable new functionalities.  Inaccessible locations,
   rotating machinery, hazardous areas, and mobile assets can be
   reached with wireless sensors.  Today, WSNs can provide the wireless
   link reliability, real-time capabilities, and quality of service
   that enable industrial and related wireless sense-and-control
   applications.

   Management of industrial and factory applications is largely focused
   on monitoring whether the system is still functional, real-time
   continuous performance monitoring, and optimization as necessary.
   The factory network might be part of a campus network or connected
   to the Internet.
   The constrained devices in such a network need to be able to
   configure themselves (auto-configuration) and might need to deal
   with error conditions as much as possible locally.  Access control
   has to be provided with multi-level administrative access and
   security.  Support and diagnostics can be provided through remote
   monitoring access centralized outside of the factory.

   Management responsibility is typically owned by the organization
   running the industrial application.  Since the monitoring
   applications must handle a potentially large number of failures, the
   time scale for detecting and recording failures is, for some of
   these applications, likely measured in minutes.  However, for
   certain industrial applications much tighter time scales may exist,
   e.g., real-time, which might be enforced by the manufacturing
   process or the use of critical material.

3.4.  Energy Management

   The EMAN working group developed an energy management framework
   [I-D.ietf-eman-framework] for devices and device components within,
   or connected to, communication networks.  This document observes
   that one of the challenges of energy management is that a power
   distribution network is responsible for the supply of energy to
   various devices and components, while a separate communication
   network is typically used to monitor and control the power
   distribution network.  Devices that have energy management
   capability are defined as Energy Devices, and identified components
   within a device (Energy Device Components) can be monitored for
   parameters like Power, Energy, Demand, and Power Quality.  If a
   device contains batteries, they can also be monitored and managed.
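   As a rough illustration of the kind of per-component attributes such
   a framework exposes, the sketch below models an Energy Device
   Component with the parameters named above.  The field set is a
   simplified assumption for illustration and is not the EMAN data
   model itself.

      # Simplified, illustrative model of an Energy Device Component.
      # Attribute names follow the parameters mentioned above; they are
      # not the object names defined by the EMAN framework.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class EnergyDeviceComponent:
          name: str
          power_watts: float            # instantaneous power draw
          energy_kwh: float             # accumulated energy consumption
          demand_watts: float           # average power over an interval
          power_quality: Optional[str]  # e.g., a power-factor label
          battery_level: Optional[float] = None  # 0.0-1.0, if present

      fan = EnergyDeviceComponent("cooling-fan-0", power_watts=12.4,
                                  energy_kwh=103.7, demand_watts=11.9,
                                  power_quality=None)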
   Energy devices differ in complexity and may include basic sensors or
   switches, specialized electrical meters, power distribution units
   (PDUs), and subsystems inside network devices (routers, network
   switches) or home or industrial appliances.  An Energy Management
   System is a combination of hardware and software used to administer
   a network with the primary purpose of energy management.  The
   operators of such a system are either utility providers or customers
   that aim to control and reduce their energy consumption and the
   associated costs.  The topology in use differs, and deployments can
   cover areas from small surfaces (individual homes) to large
   geographical areas.  The EMAN requirements document [RFC6988]
   discusses the requirements for energy management concerning
   monitoring and control functions.

   It is assumed that energy management will apply to a large range of
   devices of all classes and to diverse network topologies.  Specific
   resource monitoring, like battery utilization and availability, may
   be specific to devices with lower physical resources (device classes
   C0 or C1).

   Energy management is especially relevant to the Smart Grid.  A Smart
   Grid is an electrical grid that uses data networks to gather and act
   on energy and power-related information in an automated fashion with
   the goal of improving the efficiency, reliability, economics, and
   sustainability of the production and distribution of electricity.  A
   Smart Grid provides sustainable and reliable generation,
   transmission, distribution, storage, and consumption of electrical
   energy based on advanced energy and information technology.  Smart
   Grids enable specific application areas such as smart transmission
   systems, Demand Response/Load Management, Substation Automation,
   Advanced Distribution Management, Advanced Metering Infrastructure
   (AMI), Smart Metering, Smart Home and Building Automation, and
   E-mobility.

   Smart Metering is a good example of a Smart Grid based energy
   management application.  Different types of possibly wireless small
   meters all together produce a large amount of data, which is
   collected by a central entity and processed by an application
   server, which may be located within the customer's residence or
   off-site in a data center.  The communication infrastructure can be
   provided by a mobile network operator, as the meters in urban areas
   will most likely have a cellular or WiMAX radio.  In case the
   application server is located within the residence, such meters are
   more likely to use WiFi protocols to interconnect with an existing
   network.

   An AMI network is another example of the Smart Grid; it enables an
   electric utility to retrieve frequent electric usage data from each
   electric meter installed at a customer's home or business.  This is
   unlike Smart Metering, in which case the customer or their agents
   install appliance-level meters, because an AMI infrastructure is
   typically managed by the utility providers.  With an AMI network, a
   utility can also receive immediate notification of power outages
   when they occur, directly from the electric meters that are
   experiencing those outages.  In addition, if the AMI network is
   designed to be open and extensible, it could serve as the backbone
   for communicating with other distribution automation devices besides
   meters, which could include transformers and reclosers.

   Each meter in the AMI network typically contains constrained devices
   of the C2 type.  Each meter uses the constrained devices to connect
   to mesh networks with a low-bandwidth radio.  These radios can run
   at 50, 150, or 200 kbps raw link speed, but actual network
   throughput may be significantly lower due to forward error
   correction, multihop delays, MAC delays, lossy links, and protocol
   overhead.  Usage data and outage notifications can be sent by these
   meters to the utility's headend systems, typically located in a data
   center managed by the utility, which include meter data collection
   systems, meter data management systems, and outage management
   systems.
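   To make the gap between raw link speed and usable throughput
   concrete, the following back-of-the-envelope sketch applies assumed
   per-layer efficiency factors to the quoted link rates.  The factors
   and hop count are illustrative assumptions, not measured values.

      # Rough estimate of effective AMI mesh throughput; all efficiency
      # factors are assumptions chosen for illustration only.
      RAW_LINK_KBPS = [50, 150, 200]

      FEC_EFFICIENCY = 0.5       # assumed coding rate of the radio
      MAC_EFFICIENCY = 0.6       # assumed loss to MAC delays and retries
      PROTOCOL_EFFICIENCY = 0.7  # assumed header overhead across layers
      HOPS = 4                   # each forward re-uses channel capacity

      for raw in RAW_LINK_KBPS:
          effective = (raw * FEC_EFFICIENCY * MAC_EFFICIENCY
                       * PROTOCOL_EFFICIENCY) / HOPS
          print(f"{raw:3d} kbps raw -> ~{effective:4.1f} kbps end-to-end")

      # e.g., 200 kbps raw -> ~10.5 kbps end-to-end over four hops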
   Meters in an AMI network, unlike in Smart Metering, act as traffic
   sources and as routers.  Typically, smaller amounts of traffic (read
   requests, configuration) flow "downstream" from the headend to the
   mesh, and larger amounts of traffic flow "upstream" from the mesh to
   the headend.  However, during a firmware update operation, for
   example, larger amounts of traffic might flow downstream while
   smaller amounts flow upstream.  The mesh network is anchored by a
   collection of higher-end devices that bridge the constrained network
   with a backhaul link connecting to a less-constrained network via
   cellular, WiMAX, or Ethernet.  These higher-end devices might be
   installed on utility poles that could be owned and managed by a
   different entity than the utility company.

   While a Smart Metering solution is likely to have a smaller number
   of devices within a single household, AMI network installations
   could contain 1000 meters per router, i.e., per higher-end device.
   Meters in a local network that use a specific router form a Local
   Meter Network (LMN).  When powered on, meters discover nearby LMNs,
   select the optimal LMN to join, and select the meters in that LMN to
   route through.  However, in a Smart Metering application the meters
   are likely to connect directly to a less-constrained network,
   thereby not needing to form such local mesh networks.
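   A minimal sketch of such a join decision is given below.  The
   advertisement fields (hop count to the LMN router and a link-quality
   estimate) and the scoring rule are assumptions for illustration;
   this document does not prescribe a particular selection algorithm.

      # Illustrative LMN selection: prefer the advertisement with the
      # best combination of link quality and distance to the router.
      from dataclasses import dataclass

      @dataclass
      class LmnAdvertisement:
          lmn_id: str
          hops_to_router: int   # path length via the advertising meter
          link_quality: float   # 0.0 (unusable) .. 1.0 (perfect)

      def select_lmn(ads: list[LmnAdvertisement]) -> LmnAdvertisement:
          """Pick the LMN to join: good links, short path to the router."""
          return max(ads, key=lambda ad:
                     ad.link_quality / (1 + ad.hops_to_router))

      ads = [LmnAdvertisement("lmn-7", hops_to_router=2, link_quality=0.9),
             LmnAdvertisement("lmn-3", hops_to_router=1, link_quality=0.5)]
      print(select_lmn(ads).lmn_id)  # -> lmn-7 (score 0.30 vs. 0.25)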
   Encryption key distribution in both types of networks is also likely
   to be important for providing confidentiality for all data traffic.
   In AMI networks, the key may be obtained by a meter only after an
   end-to-end authentication process based on certificates, ensuring
   that only authorized and authenticated meters are allowed to join
   the LMN.  A Smart Metering solution could adopt a similar approach,
   or the security may be implied by the encrypted WiFi networks the
   meters become part of.

   These examples demonstrate that the Smart Grid, and energy
   management in general, is built on a distributed and heterogeneous
   network and can use a combination of diverse networking
   technologies, such as wireless access technologies (WiMAX, cellular,
   etc.), wireline and Internet technologies (e.g., IP/MPLS, Ethernet,
   SDH/PDH over fiber optics), as well as low-power radio technologies
   enabling the networking of smart meters, home appliances, and
   constrained devices (e.g., BT-LE, ZigBee, Z-Wave, Wi-Fi).  The
   operational effectiveness of the Smart Grid is highly dependent on a
   robust, two-way, secure, and reliable communications network with
   suitable availability.

   The management of such a network requires end-to-end management of,
   and information exchange through, different types of networks.
   However, as of today there is no integrated energy management
   approach and no common information model available.  Specific energy
   management applications or network islands use their own management
   mechanisms.

3.5.  Medical Applications

   Constrained devices can be seen as an enabling technology for
   advanced and possibly remote health monitoring and emergency
   notification systems, ranging from blood pressure and heart rate
   monitors to advanced devices capable of monitoring implanted
   technologies, such as pacemakers or advanced hearing aids.  Medical
   sensors may not only be attached to human bodies; they might also
   exist in the infrastructure used by humans, such as bathrooms or
   kitchens.  Medical applications will also be used to ensure that
   treatments are being applied properly, and they might guide people
   who have lost orientation.  Fitness and wellness applications, such
   as connected scales or wearable heart monitors, encourage consumers
   to exercise and empower self-monitoring of key fitness indicators.
   Different applications use Bluetooth, Wi-Fi, or ZigBee connections
   to reach the patient's smartphone or home cellular connection and
   thereby access the Internet.

   Constrained devices that are part of medical applications are
   managed either by the users of those devices or by an organization
   providing medical (monitoring) services for physicians.  In the
   first case, management must be automatic and/or easy to install and
   set up by average people.  In the second case, it can be expected
   that the devices are controlled by specially trained people.  In
   both cases, however, it is crucial to protect the privacy of the
   people to whom medical devices are attached.  Even though the data
   collected by a heartbeat monitor might be protected, the pure fact
   that someone carries such a device may need protection.  As such,
   certain medical appliances may not want to participate in discovery
   and self-configuration protocols in order to remain invisible.

   Many medical devices are likely to be used (and relied upon) to
   provide data to physicians in critical situations, since the biggest
   market is likely elderly and handicapped people.  As such, fault
   detection of the communication network or the constrained devices
   becomes a crucial function that must be carried out with high
   reliability and, depending on the medical appliance and its
   application, within seconds.

3.6.  Building Automation

   Building automation comprises the distributed systems designed and
   deployed to monitor and control the mechanical, electrical, and
   electronic systems inside buildings serving various purposes (e.g.,
   public and private, industrial, institutional, or residential).
   Advanced Building Automation Systems (BAS) may be deployed,
   concentrating the various functions of safety, environmental
   control, occupancy, and security.  Increasingly, the various
   functional systems are connected to the same communication
   infrastructure (possibly Internet Protocol based), which may involve
   wired or wireless communications networks inside the building.

   Building automation requires the deployment of a large number (10 to
   100,000) of sensors that monitor the status of devices and
   parameters inside the building, as well as controllers with
   different specialized functionality for areas within the building or
   the totality of the building.  Inter-node distances between
   neighboring nodes vary between 1 and 20 meters.  Contrary to home
   automation, in building management the devices are expected to be
   managed assets, known to a set of commissioning tools and a data
   store, such that every connected device has a known origin.  The
   management includes verifying the presence of the expected devices
   and detecting the presence of unwanted devices.

   Examples of functions performed by such controllers are regulating
   the quality, humidity, and temperature of the air inside the
   building, as well as lighting.  Other systems may report the status
   of the machinery inside the building, like elevators, or inside the
   rooms, like projectors in meeting rooms.  Security cameras and
   sensors may be deployed and operated on separate dedicated
   infrastructures connected to the common backbone.  The deployment
   area of a BAS is typically inside one building (or part of it) or
   several buildings geographically grouped in a campus.  A building
   network can be composed of subnets, where a subnet covers a floor,
   an area on the floor, or a given functionality (e.g., security
   cameras).

   Some of the sensors in Building Automation Systems (for example,
   fire alarms or security systems) register, record, and transfer
   critical alarm information and therefore must be resilient to events
   like loss of power or security attacks.  As a result, some
   components and subsystems must operate in constrained conditions and
   be separately certified.
   Also, in some environments the malfunctioning of a control system
   (like temperature control) needs to be reported in the shortest
   possible time.  Complex control systems can misbehave, and their
   critical status reporting and safety algorithms need to be basic and
   robust and to perform even in critical conditions.

   Building automation solutions are deployed in some cases in newly
   designed buildings and in other cases over existing infrastructures.
   In the first case, there is a broader range of possible solutions,
   which can be planned for the infrastructure of the building.  In the
   second case, the solution needs to be deployed over an existing
   structure, taking into account factors like existing wiring,
   distance limitations, and the propagation of radio signals over
   walls and floors.  As a result, some of the existing wireless
   solutions (e.g., IEEE 802.11 or IEEE 802.15) may be deployed.  In
   mission-critical or security-sensitive environments, and in cases
   where link failures happen often, topologies that allow for
   reconfiguration of the network and connection continuity may be
   required.  Some of the sensors deployed in building automation may
   be very simple constrained devices, for which class 0 or class 1 may
   be assumed.

   For lighting applications, groups of lights must be defined and
   managed.  Commands to a group of lights must arrive within 200 ms at
   all destinations.  The installation and operation of a building
   network have different requirements.  During installation, many
   stand-alone networks of a few to 100 nodes co-exist without a
   connection to the backbone.  During this phase, the nodes are
   identified with a network identifier related to their physical
   location.  Devices are accessed from an installation tool to connect
   them to the network in a secure fashion.  During installation, the
   setting of parameters to common values to enable interoperability
   may occur (e.g., Trickle parameter values).  During operation, the
   networks are connected to the backbone while maintaining the
   relation between network identifier and physical location.  Network
   parameters like address and name are stored in DNS.  The names can
   assist in determining the physical location of the device.
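   For instance, if a deployment adopted a location-structured naming
   convention, the physical location could be recovered directly from
   the DNS name.  The label structure used below is a hypothetical
   example chosen for illustration, not a standardized scheme.

      # Hypothetical location-encoding naming convention for building
      # automation devices; the label order is an assumption.
      def parse_device_name(fqdn: str) -> dict[str, str]:
          """Map 'lum-17.room-214.floor-2.bldg-a.example.com' to a
          physical location."""
          device, room, floor, building = fqdn.split(".")[:4]
          return {"device": device, "room": room,
                  "floor": floor, "building": building}

      print(parse_device_name("lum-17.room-214.floor-2.bldg-a.example.com"))
      # {'device': 'lum-17', 'room': 'room-214',
      #  'floor': 'floor-2', 'building': 'bldg-a'}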
3.7.  Home Automation

   Home automation includes the control of lighting, heating,
   ventilation, air conditioning, appliances, entertainment, and home
   security devices to improve convenience, comfort, energy efficiency,
   and security.  It can be seen as a residential extension of building
   automation.  However, unlike a building automation system, the
   infrastructure in a home is operated in a considerably more ad-hoc
   manner, with no centralized management system akin to a Building
   Automation System (BAS) available.

   Home automation networks need a certain amount of configuration
   (associating switches or sensors to actuators) that is provided
   either by electricians deploying home automation solutions, by
   third-party home automation service providers (e.g., small
   specialized companies or home automation device manufacturers), or
   by residents using the application user interface provided by home
   automation devices to configure (parts of) the home automation
   solution.  Similarly, failures may be reported via suitable
   interfaces to residents, or they might be recorded and made
   available to service providers in charge of the maintenance of the
   home automation infrastructure.

   The management responsibility lies either with the residents, or it
   may be outsourced to electricians and/or third parties providing
   management of home automation solutions as a service.  A varying
   combination of electricians, service providers, or the residents may
   be responsible for different aspects of managing the infrastructure.
   The time scale for failure detection and resolution is in many cases
   likely counted in hours to days.

3.8.  Transport Applications

   Transport application is a generic term for the integrated
   application of communications, control, and information processing
   in a transportation system.  Transport telematics or vehicle
   telematics are used as terms for the group of technologies that
   support transportation systems.  Transport applications running on
   such a transportation system cover all modes of transport and
   consider all elements of the transportation system, i.e., the
   vehicle, the infrastructure, and the driver or user, interacting
   together dynamically.  The overall aim is to improve decision
   making, often in real time, by transport network controllers and
   other users, thereby improving the operation of the entire transport
   system.  As such, transport applications can be seen as one of the
   important M2M service scenarios involving manifold small devices.

   The definition encompasses a broad array of techniques and
   approaches that may be achieved through stand-alone technological
   applications or as enhancements to other transportation
   communication schemes.  Examples of transport applications are
   inter- and intra-vehicular communication, smart traffic control,
   smart parking, electronic toll collection systems, logistics and
   fleet management, vehicle control, and safety and road assistance.

   As a distributed system, transport applications require end-to-end
   management of different types of networks.  It is likely that
   constrained devices in a network (e.g., a moving in-car network)
   have to be controlled by an application running on an application
   server in the network of a service provider.  Such a highly
   distributed network including mobile devices on vehicles is assumed
   to include a wireless access network using diverse long-distance
   wireless technologies such as WiMAX, 3G/LTE, or satellite
   communication, e.g., based on an embedded hardware module.  As a
   result, the management of constrained devices in the transport
   system might need to be planned top-down and might need to use data
   models imposed by and defined at the application layer.  The assumed
   device classes in use are mainly C2 devices.  In cases where an
   in-vehicle network is involved, C1 devices with limited capabilities
   and a short-distance constrained radio network (e.g., IEEE 802.15.4)
   might additionally be used.

   Management responsibility typically rests with the organization
   running the transport application.  The constrained devices in a
   moving transport network might be initially configured in a factory,
   and a reconfiguration might be needed only rarely.  New devices
   might be integrated in an ad-hoc manner based on self-management and
   self-configuration capabilities.
   Monitoring and data exchange might need to be done via a gateway
   entity connected to the back-end transport infrastructure.  The
   devices and entities in the transport infrastructure need to be
   monitored more frequently and may be able to communicate with a
   higher data rate.  The connectivity of such entities does not
   necessarily need to be wireless.  The time scale for detecting and
   recording failures in a moving transport network is likely measured
   in hours, and repairs might easily take days.  It is likely that a
   self-healing feature would be used locally.

3.9.  Vehicular Networks

   Networks involving mobile nodes, especially transport vehicles, are
   emerging.  Such networks are used to provide inter-vehicle
   communication services, or even tracking of mobile assets, to
   develop intelligent transportation systems as well as driver and
   passenger assistance services.  Constrained devices are deployed
   within a larger single entity, the vehicle, and must be individually
   managed.

   Vehicles can be either private, belonging to individuals or private
   companies, or public transportation.  Scenarios consisting of
   vehicle-to-vehicle ad-hoc networks, a wired backbone with wireless
   last hops, and hybrid vehicle-to-road communications are expected to
   be common.

   Besides access control and security, and depending on the type of
   vehicle and service being provided, it is important for an NMS to be
   able to function with different architectures, since different
   manufacturers might have their own proprietary systems.

   Unlike some mobile networks, most vehicular networks are expected to
   have specific patterns in the mobility of the nodes.  Such patterns
   could possibly be exploited, managed, and monitored by the NMS.

   The challenges in the management of vehicles in a mobile application
   are manifold.  Firstly, the issues caused by device mobility need to
   be taken into consideration.  The up-to-date position of each node
   in the network should be reported to the corresponding management
   entities, since the nodes could be moving within or roaming between
   different networks.  Secondly, a variety of troubleshooting
   information, including sensitive location information, needs to be
   reported to the management system in order to provide accurate
   service to the customer.

   The NMS must also be able to handle partitioned networks, which
   would arise due to the dynamic nature of traffic, resulting in large
   inter-vehicle gaps in sparsely populated scenarios.  Constant
   changes in topology must also be contended with.

   Auto-configuration of nodes in a vehicular network remains a
   challenge, since, based on location and access network, the vehicle
   might have different configurations that must be obtained from its
   management system.  Applying configuration updates while in remote
   networks also needs to be considered in the design of a network
   management system.

3.10.  Community Network Applications

   Community networks are comprised of constrained routers in a
   multi-hop mesh topology, communicating over a lossy, and often
   wireless, channel.  While the routers are mostly non-mobile, the
   topology may be very dynamic because of fluctuations in the link
   quality of the (wireless) channel, caused by, e.g., obstacles or
   other nearby radio transmissions.
   Depending on the routers that are used in the community network, the
   resources of the routers (memory, CPU) may be more or less
   constrained: available resources may range from only a few kilobytes
   of RAM to several megabytes or more, and CPUs may be small and
   embedded or more powerful general-purpose processors.  Examples of
   such community networks are the FunkFeuer network (Vienna, Austria),
   FreiFunk (Berlin, Germany), Seattle Wireless (Seattle, USA), and
   AWMN (Athens, Greece).  These community networks are public and
   non-regulated, allowing their users to connect to each other and,
   through an uplink to an ISP, to the Internet.  No fee, other than
   the initial purchase of a wireless router, is charged for these
   services.  Applications of these community networks can be diverse,
   e.g., location-based services, free Internet access, file sharing
   between users, distributed chat services, social networking, and
   video sharing.

   As an example of a community network, the FunkFeuer network
   comprises several hundred routers, many of which have several radio
   interfaces (with omnidirectional and some directed antennas).  The
   routers of the network are small-sized wireless routers, such as the
   Linksys WRT54GL, available in 2011 for less than 50 Euros.  These
   routers, with 16 MB of RAM and a 264 MHz CPU, are mounted on the
   rooftops of the users.  When new users want to connect to the
   network, they acquire a wireless router, install the appropriate
   firmware and routing protocol, and mount the router on the rooftop.
   IP addresses for the router are assigned manually from a list of
   addresses (because of the lack of autoconfiguration standards for
   mesh networks in the IETF).

   While the routers are non-mobile, fluctuations in link quality
   require an ad hoc routing protocol that allows for quick convergence
   to reflect the effective topology of the network (such as NHDP
   [RFC6130] and OLSRv2 [I-D.ietf-manet-olsrv2], developed in the MANET
   WG).  Usually, no human interaction is required for these protocols,
   as all variable parameters required by the routing protocol are
   either negotiated in the control traffic exchange or are only of
   local importance to each router (i.e., they do not influence
   interoperability).  However, external management and monitoring of
   an ad hoc routing protocol may be desirable to optimize parameters
   of the routing protocol (see the sketch after the list below).  Such
   an optimization may lead to a more stable perceived topology and a
   lower control traffic overhead, and therefore to a higher delivery
   success ratio of data packets, a lower end-to-end delay, and less
   unnecessary bandwidth and energy usage.

   Different use cases for the management of community networks are
   possible:

   o  One single network management station, e.g., a border gateway
      providing connectivity to the Internet, requires managing or
      monitoring routers in the community network in order to
      investigate problems (monitoring) or to improve performance by
      changing parameters (managing).  As the topology of the network
      is dynamic, constant connectivity of each router towards the
      management station cannot be guaranteed.  Current network
      management protocols, such as SNMP and NETCONF, may be used
      (e.g., using interfaces such as the NHDP-MIB [RFC6779]).
      However, when routers in the community network are constrained,
      existing protocols may require too many resources in terms of
      memory and CPU; more importantly, the bandwidth requirements may
      exceed the available channel capacity in wireless mesh networks.
      Moreover, management and monitoring may be unfeasible if the
      connection between the network management station and the routers
      is frequently interrupted.

   o  Distributed network monitoring, in which more than one management
      station monitors or manages other routers.  Because connectivity
      to a server cannot be guaranteed at all times, a distributed
      approach may provide higher reliability, at the cost of increased
      complexity.  Currently, no IETF standard exists for distributed
      monitoring and management.

   o  Monitoring and management of a whole network or a group of
      routers.  Monitoring the performance of a community network may
      require more information than what can be acquired from a single
      router using a network management protocol.  Statistics, such as
      topology changes over time, data throughput along certain routing
      paths, and congestion, are of interest for a group of routers (or
      the routing domain) as a whole.  As of 2012, no IETF standard
      allows for monitoring or managing whole networks instead of
      single routers.
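   As a sketch of the kind of parameter optimization mentioned above, a
   management station could adapt a router's HELLO interval to the
   observed neighbor churn.  The adaptation rule, thresholds, and
   bounds below are assumptions for illustration; neither NHDP nor
   OLSRv2 defines such a rule.

      # Illustrative adaptation of a routing-protocol HELLO interval
      # based on observed neighbor churn; all numbers are assumptions.
      MIN_INTERVAL_S = 0.5   # floor: keep convergence fast enough
      MAX_INTERVAL_S = 10.0  # ceiling: keep control overhead low

      def tune_hello_interval(current_s: float,
                              churn_per_min: float) -> float:
          """Shorten the interval under high churn, lengthen it when
          the topology is stable."""
          if churn_per_min > 5.0:    # links appear/disappear frequently
              current_s /= 2.0
          elif churn_per_min < 0.5:  # topology is stable
              current_s *= 1.5
          return min(max(current_s, MIN_INTERVAL_S), MAX_INTERVAL_S)

      interval = 2.0
      interval = tune_hello_interval(interval, churn_per_min=8.0)
      print(interval)  # -> 1.0 (seconds)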
3.11.  Military Operations

   The challenges of configuration and monitoring of networks faced by
   military agencies can be different from the other use cases, since
   the requirements and operating conditions of military networks are
   quite different.

   With technology advancements, military networks have become large
   and consist of a variety of equipment types that run different
   protocols and tools, which obviously increases the complexity of
   these tactical networks.  In many scenarios, configurations are most
   likely performed manually.  Furthermore, some legacy and even modern
   devices do not even support IP networking.  The majority of
   protocols and tools developed by vendors are proprietary, which
   makes integration more difficult.

   The main reason for this disjoint operation scenario is that most
   military equipment is developed with specific task requirements in
   mind, rather than interoperability of the varied equipment types.
   For example, the operating conditions experienced by high-altitude
   equipment are significantly different from those experienced by
   equipment used in desert conditions, and interoperation of tactical
   equipment with telecommunication equipment was not an expected
   outcome.

   Currently, most military networks operate with a fixed Network
   Operations Center (NOC) that physically manages the configuration
   and evaluation of all field devices.  Once configured, the devices
   might be deployed in fixed or mobile scenarios.  Any configuration
   changes required would need to be appropriately encrypted and
   authenticated to prevent unauthorized access.

   Hierarchical management of devices is a common requirement of
   military operations as well, since local managers may need to
   respond to changing conditions within their platoon, regiment,
   brigade, division, or corps.  The level of configuration management
   available at each hierarchy level must also be closely governed.

   Since most military networks operate in hostile environments, a high
   failure rate and disconnection rate should be tolerated by the NMS,
   which must also be able to deal with multiple gateways and disjoint
   management protocols.

   Multi-national military operations are becoming increasingly common,
   requiring the interoperation of a diverse set of equipment designed
   with different operating conditions in mind.  Furthermore, different
   militaries are likely to have different sets of standards, best
   practices, rules and regulations, and implementation approaches that
   may contradict or conflict with each other.  The NMS should be able
   to detect these conflicts and handle them in an acceptable manner,
   which may require human intervention.

4.  IANA Considerations

   This document does not introduce any new code-points or namespaces
   for registration with IANA.

   Note to RFC Editor: this section may be removed on publication as an
   RFC.

5.  Security Considerations

   In several use cases, constrained devices are deployed in unsafe
   environments where attackers can gain physical access to the
   devices.  As a consequence, it is crucial to properly protect any
   security credentials that may be stored on the device (e.g., by
   using hardware protection mechanisms).  Furthermore, it is important
   that any credentials leaking from a single device do not simplify
   attacks on other (similar) devices.  In particular, security
   credentials should never be shared.
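   One common way to honor this requirement is to derive a unique
   per-device credential from a master secret and the device identity
   at provisioning time, so that a key extracted from one device is
   useless against its siblings.  The sketch below assumes a factory
   provisioning step and a hypothetical info-string format; it uses
   HKDF with SHA-256 as specified in RFC 5869.

      # Illustrative per-device key derivation at provisioning time;
      # the master-key handling and info-string format are assumptions.
      import hashlib, hmac

      def hkdf_sha256(master: bytes, salt: bytes, info: bytes,
                      length: int = 16) -> bytes:
          prk = hmac.new(salt, master, hashlib.sha256).digest()  # extract
          okm, block, counter = b"", b"", 1
          while len(okm) < length:                               # expand
              block = hmac.new(prk, block + info + bytes([counter]),
                               hashlib.sha256).digest()
              okm += block
              counter += 1
          return okm[:length]

      # Each device gets its own key; leaking one key reveals nothing
      # about the keys of other devices.
      master_key = b"factory-master-secret"  # kept only at the factory
      device_key = hkdf_sha256(master_key, salt=b"coman-example",
                               info=b"device-id:00-17-3f-0a-12-4e")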
   Since constrained devices often have limited computational
   resources, care should be taken to choose efficient but
   cryptographically strong algorithms.  Designers of constrained
   devices that have a long expected lifetime need to ensure that
   cryptographic algorithms can be updated once the devices have been
   deployed.  The ability to perform secure firmware and software
   updates is an important management requirement.

   Several use cases generate sensitive data or require the processing
   of sensitive data.  It is therefore an important requirement to
   properly protect access to the data in order to protect the privacy
   of the humans using Internet-enabled devices.  For certain types of
   data, protection during transmission over the network may not be
   sufficient, and methods should be investigated that provide
   protection of data while it is cached or stored (e.g., when using a
   store-and-forward transport mechanism).

6.  Contributors

   The following persons made significant contributions to and reviewed
   this document:

   o  Ulrich Herberg (Fujitsu Laboratories of America) contributed
      Section 3.10 on Community Network Applications.

   o  Peter van der Stok contributed to Section 3.6 on Building
      Automation.

   o  Zhen Cao contributed to Section 2.2 on Mobile Access
      Technologies.

   o  Gilman Tolle contributed the text on Advanced Metering
      Infrastructure in Section 3.4.

   o  James Nguyen and Ulrich Herberg contributed to Section 3.11 on
      Military Operations.

7.  Acknowledgments

   The following persons reviewed and provided valuable comments on
   different versions of this document:

   Dominique Barthel, Carsten Bormann, Zhen Cao, Benoit Claise, Bert
   Greevenbosch, Ulrich Herberg, James Nguyen, Zach Shelby, and Peter
   van der Stok.

   The editors would like to thank the reviewers and the participants
   on the Coman mailing list for their valuable contributions and
   comments.

8.  Informative References

   [RFC6130]  Clausen, T., Dearlove, C., and J. Dean, "Mobile Ad Hoc
              Network (MANET) Neighborhood Discovery Protocol (NHDP)",
              RFC 6130, April 2011.

   [RFC6568]  Kim, E., Kaspar, D., and JP. Vasseur, "Design and
              Application Spaces for IPv6 over Low-Power Wireless
              Personal Area Networks (6LoWPANs)", RFC 6568, April 2012.

   [RFC6779]  Herberg, U., Cole, R., and I. Chakeres, "Definition of
              Managed Objects for the Neighborhood Discovery Protocol",
              RFC 6779, October 2012.

   [RFC6988]  Quittek, J., Chandramouli, M., Winter, R., Dietz, T., and
              B. Claise, "Requirements for Energy Management",
              RFC 6988, September 2013.

   [I-D.ietf-lwig-terminology]
              Bormann, C., Ersue, M., and A. Keranen, "Terminology for
              Constrained Node Networks", draft-ietf-lwig-
              terminology-07 (work in progress), February 2014.

   [I-D.ietf-eman-framework]
              Claise, B., Schoening, B., and J. Quittek, "Energy
              Management Framework", draft-ietf-eman-framework-15 (work
              in progress), February 2014.

   [I-D.ietf-manet-olsrv2]
              Clausen, T., Dearlove, C., Jacquet, P., and U. Herberg,
              "The Optimized Link State Routing Protocol version 2",
              draft-ietf-manet-olsrv2-19 (work in progress), March
              2013.

   [COM-REQ]  Ersue, M., "Constrained Management: Problem Statement and
              Requirements", draft-ietf-opsawg-coman-probstate-reqs
              (work in progress), January 2014.

Appendix A.  Open Issues

   o  Section 3.11 should be replaced by a different use case
      motivating similar requirements, or perhaps deleted if the IETF
      prefers not to work on specific requirements coming from military
      use cases.

   o  Section 3.8 and Section 3.9 should be merged.

Appendix B.  Change Log

B.1.  draft-ietf-opsawg-coman-use-cases-00 -
      draft-ietf-opsawg-coman-use-cases-01

   o  Reordered some use cases to improve the flow.

   o  Added "Vehicular Networks".

   o  Shortened the Military Operations use case.

   o  Started adding substance to the Security Considerations section.

B.2.  draft-ersue-constrained-mgmt-03 -
      draft-ersue-opsawg-coman-use-cases-00

   o  Reduced the terminology section to the terminology addressed in
      the LWIG and Coman Requirements drafts.  Referenced the other
      drafts.

   o  Checked and aligned all terminology against the LWIG terminology
      draft.

   o  Spent some effort to resolve the intersection between the
      Industrial Applications, Home Automation, and Building Automation
      use cases.

   o  Moved Section 3 (Use Cases) from the companion document [COM-REQ]
      to this draft.

   o  Reformulated some text parts for more clarity.

B.3.  draft-ersue-constrained-mgmt-02-03

   o  Extended the terminology section and removed some of the
      terminology addressed in the new LWIG terminology draft.
      Referenced the LWIG terminology draft.

   o  Moved Section 1.3 on Constrained Device Classes to the new LWIG
      terminology draft.

   o  Extended the classes of networks, considering the different types
      of radio and communication technologies in use and their
      dimensions.

   o  Extended the problem statement in Section 2, following the
      requirements listed in Section 4.

   o  The following requirements, which belong together and can be
      realized with similar or the same kind of solutions, have been
      merged:
      *  Distributed Management and Peer Configuration,

      *  Device status monitoring and Neighbor monitoring,

      *  Passive Monitoring and Reactive Monitoring,

      *  Event-driven self-management - Self-healing and Periodic
         self-management,

      *  Authentication of management systems and Authentication of
         managed devices,

      *  Access control on devices and Access control on management
         systems,

      *  Management of Energy Resources and Data models for energy
         management,

      *  Software distribution (group-based firmware update) and
         Group-based provisioning.

   o  Deleted the empty section on the gaps in network management
      standards, as it will be written in a separate draft.

   o  Added links to mentioned external pages.

   o  Added text on OMA M2M device classification in the appendix.

B.4.  draft-ersue-constrained-mgmt-01-02

   o  Extended the terminology section.

   o  Added additional text to the use cases concerning deployment
      type, network topology in use, network size, network
      capabilities, radio technology, etc.

   o  Added examples for device classes in a use case.

   o  Added additional text provided by Cao Zhen (China Mobile) for
      Mobile Applications and by Peter van der Stok for Building
      Automation.

   o  Added the new use cases 'Advanced Metering Infrastructure' and
      'MANET Concept of Operations in Military'.

   o  Added the section 'Managing the Constrainedness of a Device or
      Network', discussing the needs of very constrained devices.

   o  Added a note that the requirements in [COM-REQ] need to be seen
      as standalone requirements and that the current document does not
      recommend any profile of requirements.

   o  Added a section in [COM-REQ] for the detailed requirements on
      constrained management matched to management tasks like fault
      management, monitoring, configuration management, security and
      access control, energy management, etc.

   o  Solved nits and added references.

   o  Added Appendix A on the related development in other bodies.

   o  Added Appendix B on the work in related research projects.

B.5.  draft-ersue-constrained-mgmt-00-01

   o  Split the section on 'Networks of Constrained Devices' into the
      sections 'Network Topology Options' and 'Management Topology
      Options'.

   o  Added the use cases 'Community Network Applications' and 'Mobile
      Applications'.

   o  Provided a Contributors section.

   o  Extended the section on 'Medical Applications'.

   o  Solved nits and added references.

Authors' Addresses

   Mehmet Ersue (editor)
   Nokia Solutions and Networks

   Email: mehmet.ersue@nsn.com

   Dan Romascanu
   Avaya

   Email: dromasca@avaya.com

   Juergen Schoenwaelder
   Jacobs University Bremen

   Email: j.schoenwaelder@jacobs-university.de

   Anuj Sehgal
   Jacobs University Bremen

   Email: a.sehgal@jacobs-university.de