Internet Engineering Task Force                           M. Ersue, Ed.
Internet-Draft                             Nokia Solutions and Networks
Intended status: Informational                             D. Romascanu
Expires: April 28, 2014                                           Avaya
                                                        J. Schoenwaelder
                                                Jacobs University Bremen
                                                        October 25, 2013

      Management of Networks with Constrained Devices: Use Cases
                 draft-ersue-opsawg-coman-use-cases-00

Abstract

   This document discusses use cases for the management of networks in which constrained devices are involved.  A problem statement, deployment options, and the requirements on networks with constrained devices can be found in the companion document [COM-REQ].

Status of this Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 28, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.
   Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Overview
     1.2.  Terminology
   2.  Use Cases
     2.1.  Environmental Monitoring
     2.2.  Medical Applications
     2.3.  Industrial Applications
     2.4.  Home Automation
     2.5.  Building Automation
     2.6.  Energy Management
     2.7.  Transport Applications
     2.8.  Infrastructure Monitoring
     2.9.  Community Network Applications
     2.10. Mobile Applications
     2.11. Automated Metering Infrastructure (AMI)
     2.12. MANET Concept of Operations (CONOPS) in Military
   3.  IANA Considerations
   4.  Security Considerations
   5.  Contributors
   6.  Acknowledgments
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Appendix A.  Open issues
   Appendix B.  Change Log
     B.1.  draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg-coman-use-cases-00
     B.2.  draft-ersue-constrained-mgmt-02-03
     B.3.  draft-ersue-constrained-mgmt-01-02
     B.4.  draft-ersue-constrained-mgmt-00-01
   Authors' Addresses

1.  Introduction

1.1.  Overview

   Small devices with limited CPU, memory, and power resources, so-called constrained devices (also known as sensors, smart objects, or smart devices), can be connected to a network.  Such a network of constrained devices may itself be constrained or challenged, e.g., with unreliable or lossy channels, wireless technologies with limited bandwidth and a dynamic topology, and may need the service of a gateway or proxy to connect to the Internet.  In other scenarios, the constrained devices can be connected to a non-constrained network using off-the-shelf protocol stacks.  Constrained devices might be in charge of gathering information in diverse settings, including natural ecosystems, buildings, and factories, and of sending the information to one or more server stations.
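   As a rough, non-normative illustration of the last point, the sketch below shows how a constrained device might periodically report a sensor reading to a server station; the server address, reporting interval, and payload fields are hypothetical and are not defined by this document.

      # Non-normative sketch (Python): a constrained device reporting
      # readings to a server station over UDP.  The address (an RFC 5737
      # documentation address), interval, and payload layout are
      # hypothetical examples.
      import json
      import socket
      import time

      SERVER = ("192.0.2.1", 9999)   # hypothetical server station
      REPORT_INTERVAL = 60           # seconds between reports

      def read_sensor():
          """Placeholder for reading a locally attached sensor."""
          return {"temperature_c": 21.5}

      def report_loop():
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          while True:
              payload = {"device": "sensor-42", "ts": int(time.time())}
              payload.update(read_sensor())
              # One small datagram per interval keeps radio time and
              # energy use low.
              sock.sendto(json.dumps(payload).encode(), SERVER)
              time.sleep(REPORT_INTERVAL)

      if __name__ == "__main__":
          report_loop()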
   Network management is characterized by monitoring network status, detecting faults and inferring their causes, setting network parameters, and carrying out actions to remove faults, maintain normal operation, and improve network efficiency and application performance.  The traditional network management application periodically collects information from the set of elements it needs to manage, processes the data, and presents the results to the network management users.  Constrained devices, however, often have limited power, low transmission range, and might be unreliable.  They might also need to work in hostile environments with advanced security requirements or need to be used in harsh environments for a long time without supervision.  Due to such constraints, the management of a network with constrained devices poses different types of challenges compared to the management of a traditional IP network.

   This document aims to understand the use cases for the management of a network in which constrained devices are involved.  The document lists and discusses diverse use cases for management from the network as well as from the application point of view.  The application scenarios discussed aim to show where networks of constrained devices are expected to be deployed.  For each application scenario, we first briefly describe its characteristics, followed by a discussion of how network management can be provided, who is likely to be responsible for it, and on which time scale management operations are likely to be carried out.

   A problem statement, deployment and management topology options, as well as the requirements on networks with constrained devices can be found in the companion document [COM-REQ].

1.2.  Terminology

   This document builds on the terminology defined in [I-D.ietf-lwig-terminology] and [COM-REQ].  [I-D.ietf-lwig-terminology] is a base document for the terminology concerning constrained devices and constrained networks.

2.  Use Cases

2.1.  Environmental Monitoring

   Environmental monitoring applications are characterized by the deployment of a number of sensors to monitor emissions, water quality, or even the movements and habits of wildlife.  Other applications in this category include earthquake or tsunami early-warning systems.  The sensors often span a large geographic area, they can be mobile, and they are often difficult to replace.  Furthermore, the sensors are usually not protected against tampering.

   Management of environmental monitoring applications is largely concerned with monitoring whether the system is still functional and with the roll-out of new constrained devices in case the system loses too much of its structure.  The constrained devices themselves need to be able to establish connectivity (auto-configuration), and they need to be able to deal with events such as losing neighbors or being moved to other locations.

   Management responsibility typically rests with the organization running the environmental monitoring application.  Since these monitoring applications must be designed to tolerate a number of failures, the time scale for detecting and recording failures is, for some of these applications, likely measured in hours, and repairs might easily take days.
   However, for certain environmental monitoring applications, much tighter time scales may exist and might be enforced by regulations (e.g., monitoring of nuclear radiation).

2.2.  Medical Applications

   Constrained devices can be seen as an enabling technology for advanced and possibly remote health monitoring and emergency notification systems, ranging from blood pressure and heart rate monitors to advanced devices capable of monitoring implanted technologies, such as pacemakers or advanced hearing aids.  Medical sensors may not only be attached to human bodies; they might also exist in the infrastructure used by humans, such as bathrooms or kitchens.  Medical applications will also be used to ensure that treatments are being applied properly, and they might guide people who lose orientation.  Fitness and wellness applications, such as connected scales or wearable heart monitors, encourage consumers to exercise and empower self-monitoring of key fitness indicators.  Different applications use Bluetooth, Wi-Fi, or ZigBee connections to reach the patient's smartphone or home cellular connection, through which they access the Internet.

   Constrained devices that are part of medical applications are managed either by the users of those devices or by an organization providing medical (monitoring) services for physicians.  In the first case, management must be automatic and/or easy to install and set up by average people.  In the second case, it can be expected that the devices will be controlled by specially trained people.  In both cases, however, it is crucial to protect the privacy of the people to whom medical devices are attached.  Even though the data collected by a heartbeat monitor might be protected, the mere fact that someone carries such a device may need protection.  As such, certain medical appliances may not want to participate in discovery and self-configuration protocols in order to remain invisible.

   Many medical devices are likely to be used (and relied upon) to provide data to physicians in critical situations, since the biggest market is likely elderly and handicapped people.  As such, fault detection of the communication network or the constrained devices becomes a crucial function that must be carried out with high reliability and, depending on the medical appliance and its application, within seconds.

2.3.  Industrial Applications

   Industrial applications and smart manufacturing refer not only to production equipment, but also to a factory that carries out centralized control of energy, HVAC (heating, ventilation, and air conditioning), lighting, access control, etc. via a network.  For the management of a factory, it is becoming essential to implement smart capabilities.  From an engineering standpoint, industrial applications are intelligent systems enabling rapid manufacturing of new products, dynamic response to product demand, and real-time optimization of manufacturing production and supply chain networks.  Potential industrial applications, e.g., for smart factories and smart manufacturing, are:

   o  Digital control systems with embedded, automated process controls, operator tools, as well as service information systems optimizing plant operations and safety.

   o  Asset management using predictive maintenance tools, statistical evaluation, and measurements maximizing plant reliability.
   o  Smart sensors detecting anomalies to avoid abnormal or catastrophic events.

   o  Smart systems integrated within the industrial energy management system and externally with the smart grid, enabling real-time energy optimization.

   Sensor networks are an essential technology used for smart manufacturing.  Measurements, automated controls, plant optimization, health and safety management, and other functions are provided by a large number of networked sensors.  Data interoperability and seamless exchange of product, process, and project data are enabled through interoperable data systems used by collaborating divisions or business systems.  Intelligent automation and learning systems are vital to smart manufacturing but must be effectively integrated with the decision environment.  Wireless sensor networks (WSNs) have been developed for machinery Condition-based Maintenance (CBM), as they offer significant cost savings and enable new functionalities.  Inaccessible locations, rotating machinery, hazardous areas, and mobile assets can be reached with wireless sensors.  Today, WSNs can provide wireless link reliability, real-time capabilities, and quality of service, and they enable industrial and related wireless sense-and-control applications.

   Management of industrial and factory applications is largely focused on monitoring whether the system is still functional, on real-time continuous performance monitoring, and on optimization as necessary.  The factory network might be part of a campus network or connected to the Internet.  The constrained devices in such a network need to be able to configure themselves (auto-configuration) and might need to deal with error conditions as much as possible locally.  Access control has to be provided with multi-level administrative access and security.  Support and diagnostics can be provided through remote monitoring access centralized outside of the factory.

   Management responsibility is typically owned by the organization running the industrial application.  Since the monitoring applications must handle a potentially large number of failures, the time scale for detecting and recording failures is, for some of these applications, likely measured in minutes.  However, for certain industrial applications, much tighter time scales may exist, e.g., in real time, which might be enforced by the manufacturing process or the use of critical material.

2.4.  Home Automation

   Home automation includes the control of lighting, heating, ventilation, air conditioning, appliances, and entertainment devices to improve convenience, comfort, energy efficiency, and security.  It can be seen as a residential extension of building automation.

   Home automation networks need a certain amount of configuration (associating switches or sensors to actuators) that is either provided by electricians deploying home automation solutions or done by residents using the application user interface to configure (parts of) the home automation solution.  Similarly, failures may be reported via suitable interfaces to residents, or they might be recorded and made available to electricians in charge of the maintenance of the home automation infrastructure.
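   As a non-normative illustration of such a configuration, the sketch below shows one way a switch-to-actuator association and a simple failure record could be represented; the device names, fields, and transport are hypothetical and are not prescribed by this document.

      # Non-normative sketch (Python): a minimal binding table that
      # associates switches (or sensors) with actuators, plus a failure
      # log that could later be shown to residents or electricians.
      # All names and the transport are hypothetical.
      import time

      bindings = {
          "switch-livingroom-1": ["lamp-livingroom-1", "lamp-livingroom-2"],
          "motion-hallway":      ["lamp-hallway"],
      }

      failure_log = []

      def send_command(actuator_id, state):
          """Placeholder for the actual home automation transport."""
          print(actuator_id, "->", "on" if state else "off")

      def on_switch_event(switch_id, state):
          """Forward a switch event to every actuator bound to it."""
          for actuator in bindings.get(switch_id, []):
              try:
                  send_command(actuator, state)
              except IOError as err:
                  # Record the failure for later inspection.
                  failure_log.append((time.time(), actuator, str(err)))

      on_switch_event("switch-livingroom-1", True)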
   The management responsibility lies either with the residents, or it may be outsourced to electricians providing management of home automation solutions as a service.  The time scale for failure detection and resolution is in many cases likely counted in hours to days.

2.5.  Building Automation

   Building automation comprises the distributed systems designed and deployed to monitor and control the mechanical, electrical, and electronic systems inside buildings used for various purposes (e.g., public or private, industrial, institutional, or residential).  Advanced Building Automation Systems (BAS) may be deployed that concentrate the various functions of safety, environmental control, occupancy, and security.  More and more, the various functional systems are connected to the same communication infrastructure (possibly Internet Protocol based), which may involve wired or wireless communication networks inside the building.

   Building automation requires the deployment of a large number (10 to 100,000) of sensors that monitor the status of devices and parameters inside the building, as well as controllers with different specialized functionality for areas within the building or for the building as a whole.  Distances between neighboring nodes vary between 1 and 20 meters.  Contrary to home automation, in building management the devices are expected to be managed assets that are known to a set of commissioning tools and a data store, such that every connected device has a known origin.  The management includes verifying the presence of the expected devices and detecting the presence of unwanted devices.

   Examples of functions performed by such controllers are regulating the quality, humidity, and temperature of the air inside the building, as well as lighting.  Other systems may report the status of the machinery inside the building, like elevators, or inside the rooms, like projectors in meeting rooms.  Security cameras and sensors may be deployed and operated on separate dedicated infrastructures connected to the common backbone.  The deployment area of a BAS is typically inside one building (or part of it) or several buildings geographically grouped in a campus.  A building network can be composed of subnets, where a subnet covers a floor, an area on the floor, or a given functionality (e.g., security cameras).

   Some of the sensors in Building Automation Systems (for example, fire alarms or security systems) register, record, and transfer critical alarm information and therefore must be resilient to events like loss of power or security attacks.  This leads to the need for some components and subsystems to operate in constrained conditions and to be certified separately.  Also, in some environments, the malfunctioning of a control system (like temperature control) needs to be reported in the shortest possible time.  Complex control systems can misbehave, and their critical status reporting and safety algorithms need to be basic and robust and to perform even in critical conditions.

   Building automation solutions are in some cases deployed in newly designed buildings; in other cases they might be deployed over existing infrastructures.  In the first case, there is a broader range of possible solutions, which can be planned for the infrastructure of the building.
   In the second case, the solution needs to be deployed over an existing structure, taking into account factors like existing wiring, distance limitations, and the propagation of radio signals through walls and floors.  As a result, some of the existing wireless solutions (e.g., IEEE 802.11 or IEEE 802.15) may be deployed.  In mission-critical or security-sensitive environments and in cases where link failures happen often, topologies that allow for reconfiguration of the network and connection continuity may be required.  Some of the sensors deployed in building automation may be very simple constrained devices, for which class 0 or class 1 may be assumed.

   For lighting applications, groups of lights must be defined and managed.  Commands to a group of lights must arrive within 200 ms at all destinations.  The installation and the operation of a building network have different requirements.  During the installation, many stand-alone networks of a few to 100 nodes co-exist without a connection to the backbone.  During this phase, the nodes are identified with a network identifier related to their physical location.  Devices are accessed from an installation tool to connect them to the network in a secure fashion.  During installation, the setting of parameters to common values to enable interoperability may occur (e.g., Trickle parameter values).  During operation, the networks are connected to the backbone while maintaining the relation between network identifier and physical location.  Network parameters like address and name are stored in the DNS.  The names can assist in determining the physical location of the device.

2.6.  Energy Management

   The EMAN working group developed [I-D.ietf-eman-framework], which defines a framework for providing energy management for devices within or connected to communication networks.  This document observes that one of the challenges of energy management is that a power distribution network is responsible for the supply of energy to various devices and components, while a separate communication network is typically used to monitor and control the power distribution network.  Devices that have energy management capability are defined as Energy Devices, and identified components within a device (Energy Device Components) can be monitored for parameters like power, energy, demand, and power quality.  If a device contains batteries, they can also be monitored and managed.

   Energy devices differ in complexity and may include basic sensors or switches, specialized electrical meters, or power distribution units (PDUs), as well as subsystems inside network devices (routers, network switches) or home or industrial appliances.  An Energy Management System is a combination of hardware and software used to administer a network with the primary purpose of energy management.  The operators of such a system are either the utility providers or customers that aim to control and reduce energy consumption and the associated costs.  The topologies in use differ, and the deployment can cover areas from small surfaces (individual homes) to large geographical areas.  The EMAN requirements document [RFC6988] discusses the requirements for energy management concerning monitoring and control functions.

   It is assumed that energy management will apply to a large range of devices of all classes and network topologies.
   Monitoring of specific resources, like battery utilization and availability, may be limited to devices with lower physical resources (device classes C0 or C1).

   Energy management is especially relevant to the Smart Grid.  A Smart Grid is an electrical grid that uses data networks to gather and act on energy and power-related information in an automated fashion, with the goal of improving the efficiency, reliability, economics, and sustainability of the production and distribution of electricity.  As such, the Smart Grid provides sustainable and reliable generation, transmission, distribution, storage, and consumption of electrical energy based on advanced energy and ICT solutions, and it enables, for example, the following specific application areas: smart transmission systems, demand response/load management, substation automation, advanced distribution management, Advanced Metering Infrastructure (AMI), smart metering, smart home and building automation, e-mobility, etc.

   Smart metering is a good example of an M2M application and can be realized as one of the vertical applications in an M2M environment.  Different types of possibly wireless small meters altogether produce a huge amount of data, which is collected by a central entity and processed by an application server.  The M2M infrastructure can be provided by a mobile network operator, as the meters in urban areas will most likely have a cellular or WiMAX radio.

   The Smart Grid is built on a distributed and heterogeneous network and can use a combination of diverse networking technologies, such as wireless access technologies (WiMAX, cellular, etc.), wireline and Internet technologies (e.g., IP/MPLS, Ethernet, SDH/PDH over fiber optics), as well as low-power radio technologies enabling the networking of smart meters, home appliances, and constrained devices (e.g., BT-LE, ZigBee, Z-Wave, Wi-Fi).  The operational effectiveness of the Smart Grid is highly dependent on a robust, two-way, secure, and reliable communications network with suitable availability.

   The management of a distributed system like the Smart Grid requires end-to-end management of, and information exchange through, different types of networks.  However, as of today there is no integrated Smart Grid management approach and no common Smart Grid information model available.  Specific Smart Grid applications or network islands use their own management mechanisms.  For example, the management of smart meters depends very much on the AMI environment they have been integrated into and the networking technologies they are using.  In general, smart meters only seldom need reconfiguration, and they send a small amount of redundant data to a central entity.  For a discussion of the management needs of an AMI network, see Section 2.11.  The management needs for smart home and building automation are discussed in Section 2.4 and Section 2.5.

2.7.  Transport Applications

   Transport Application is a generic term for the integrated application of communications, control, and information processing in a transportation system.  Transport telematics or vehicle telematics are used as terms for the group of technologies that support transportation systems.  Transport applications running on such a transportation system cover all modes of transport and consider all elements of the transportation system, i.e., the vehicle, the infrastructure, and the driver or user, interacting together dynamically.
   The overall aim is to improve decision making, often in real time, by transport network controllers and other users, thereby improving the operation of the entire transport system.  As such, transport applications can be seen as one of the important M2M service scenarios involving manifold small devices.

   The definition encompasses a broad array of techniques and approaches that may be achieved through stand-alone technological applications or as enhancements to other transportation communication schemes.  Examples of transport applications are inter- and intra-vehicle communication, smart traffic control, smart parking, electronic toll collection systems, logistics and fleet management, vehicle control, and safety and road assistance.

   As a distributed system, transport applications require end-to-end management of different types of networks.  It is likely that constrained devices in a network (e.g., a moving in-car network) have to be controlled by an application running on an application server in the network of a service provider.  Such a highly distributed network, including mobile devices on vehicles, is assumed to include a wireless access network using diverse long-distance wireless technologies such as WiMAX, 3G/LTE, or satellite communication, e.g., based on an embedded hardware module.  As a result, the management of constrained devices in the transport system might need to be planned top-down and might need to use data models imposed by and defined on the application layer.  The assumed device classes in use are mainly C2 devices.  In cases where an in-vehicle network is involved, C1 devices with limited capabilities and a short-distance constrained radio network, e.g., IEEE 802.15.4, might additionally be used.

   Management responsibility typically rests with the organization running the transport application.  The constrained devices in a moving transport network might be initially configured in a factory, and a reconfiguration might be needed only rarely.  New devices might be integrated in an ad hoc manner based on self-management and self-configuration capabilities.  Monitoring and data exchange might need to be done via a gateway entity connected to the back-end transport infrastructure.  The devices and entities in the transport infrastructure need to be monitored more frequently and may be able to communicate at a higher data rate.  The connectivity of such entities does not necessarily need to be wireless.  The time scale for detecting and recording failures in a moving transport network is likely measured in hours, and repairs might easily take days.  It is likely that a self-healing feature would be used locally.

2.8.  Infrastructure Monitoring

   Infrastructure monitoring is concerned with the monitoring of infrastructures such as bridges, railway tracks, or (offshore) windmills.  The primary goal is usually to detect any events or changes of the structural conditions that can impact the risk and safety of the infrastructure being monitored.  A secondary goal is to schedule repair and maintenance activities in a cost-effective manner.

   The infrastructure to monitor might be in a factory or spread over a wider area but difficult to access.
   As such, the network in use might be based on a combination of fixed and wireless technologies that use robust networking equipment and support reliable communication.  It is likely that constrained devices in such a network are mainly C2 devices and have to be controlled centrally by an application running on a server.  In case such a distributed network is widely spread, the wireless devices might use diverse long-distance wireless technologies such as WiMAX or 3G/LTE, e.g., based on embedded hardware modules.  In cases where an in-building network is involved, the network can be based on Ethernet or wireless technologies suitable for in-building usage.

   The management of infrastructure monitoring applications is primarily concerned with monitoring the functioning of the system.  Infrastructure monitoring devices are typically rolled out and installed by dedicated experts, and changes are rare since the infrastructure itself changes rarely.  However, monitoring devices are often deployed in unsupervised environments, and hence special attention must be given to protecting the devices from being modified.

   Management responsibility typically rests with the organization owning the infrastructure or responsible for its operation.  The time scale for detecting and recording failures is likely measured in hours, and repairs might easily take days.  However, certain events (e.g., natural disasters) may require that status information be obtained much more quickly and that replacements of failed sensors can be rolled out quickly (or redundant sensors be activated quickly).  In case the devices are difficult to access, a self-healing feature on the device might become necessary.

2.9.  Community Network Applications

   Community networks are composed of constrained routers in a multi-hop mesh topology, communicating over lossy, and often wireless, channels.  While the routers are mostly non-mobile, the topology may be very dynamic because of fluctuations in link quality of the (wireless) channel, caused by, e.g., obstacles or other nearby radio transmissions.  Depending on the routers that are used in the community network, the resources of the routers (memory, CPU) may be more or less constrained - available resources may range from only a few kilobytes of RAM to several megabytes or more, and CPUs may be small and embedded, or more powerful general-purpose processors.  Examples of such community networks are the FunkFeuer network (Vienna, Austria), FreiFunk (Berlin, Germany), Seattle Wireless (Seattle, USA), and AWMN (Athens, Greece).  These community networks are public and non-regulated, allowing their users to connect to each other and - through an uplink to an ISP - to the Internet.  No fee, other than the initial purchase of a wireless router, is charged for these services.  Applications of these community networks can be diverse, e.g., location-based services, free Internet access, file sharing between users, distributed chat services, social networking, video sharing, etc.

   As an example of a community network, the FunkFeuer network comprises several hundred routers, many of which have several radio interfaces (with omnidirectional and some directed antennas).
   The routers of the network are small-sized wireless routers, such as the Linksys WRT54GL, available in 2011 for less than 50 Euros.  These routers, with 16 MB of RAM and 264 MHz of CPU power, are mounted on the rooftops of the users.  When new users want to connect to the network, they acquire a wireless router, install the appropriate firmware and routing protocol, and mount the router on the rooftop.  IP addresses for the router are assigned manually from a list of addresses (because of the lack of autoconfiguration standards for mesh networks in the IETF).

   While the routers are non-mobile, fluctuations in link quality require an ad hoc routing protocol that allows for quick convergence to reflect the effective topology of the network (such as NHDP [RFC6130] and OLSRv2 [I-D.ietf-manet-olsrv2], developed in the MANET WG).  Usually, no human interaction is required for these protocols, as all variable parameters required by the routing protocol are either negotiated in the control traffic exchange or are only of local importance to each router (i.e., they do not influence interoperability).  However, external management and monitoring of an ad hoc routing protocol may be desirable to optimize parameters of the routing protocol.  Such an optimization may lead to a more stable perceived topology and a lower control traffic overhead, and therefore to a higher delivery success ratio of data packets, a lower end-to-end delay, and less unnecessary bandwidth and energy usage.

   Different use cases for the management of community networks are possible:

   o  A single Network Management Station (NMS), e.g., a border gateway providing connectivity to the Internet, needs to manage or monitor routers in the community network in order to investigate problems (monitoring) or to improve performance by changing parameters (managing).  As the topology of the network is dynamic, constant connectivity of each router towards the management station cannot be guaranteed.  Current network management protocols, such as SNMP and NETCONF, may be used (e.g., using interfaces such as the NHDP-MIB [RFC6779]).  However, when routers in the community network are constrained, existing protocols may require too many resources in terms of memory and CPU; more importantly, the bandwidth requirements may exceed the available channel capacity in wireless mesh networks.  Moreover, management and monitoring may be infeasible if the connection between the NMS and the routers is frequently interrupted.

   o  Distributed network monitoring, in which more than one management station monitors or manages other routers.  Because connectivity to a server cannot be guaranteed at all times, a distributed approach may provide higher reliability, at the cost of increased complexity.  Currently, no IETF standard exists for distributed monitoring and management.

   o  Monitoring and management of a whole network or a group of routers.  Monitoring the performance of a community network may require more information than what can be acquired from a single router using a network management protocol.  Statistics, such as topology changes over time, data throughput along certain routing paths, congestion, etc., are of interest for a group of routers (or the routing domain) as a whole.
      As of 2012, no IETF standard allows for monitoring or managing whole networks instead of single routers.

2.10.  Mobile Applications

   M2M services are increasingly provided by mobile service providers as numerous devices, home appliances, utility meters, cars, video surveillance cameras, and health monitors are connected with mobile broadband technologies.  This diverse range of machines brings new network and service requirements and challenges.  Different applications, e.g., in a home appliance or in-car network, use Bluetooth, Wi-Fi, or ZigBee and connect to a cellular module acting as a gateway between the constrained environment and the mobile cellular network.

   Such a gateway might provide different options for the connectivity of mobile networks and constrained devices, e.g.:

   o  a smartphone with 3G/4G and WLAN radio might use BT-LE to connect to the devices in a home area network,

   o  a femtocell might be combined with home gateway functionality, acting as a low-power cellular base station connecting smart devices to the application server of a mobile service provider,

   o  an embedded cellular module with LTE radio connecting the devices in the car network with the server running the telematics service,

   o  an M2M gateway connected to the mobile operator network supporting diverse IoT connectivity technologies, including ZigBee and CoAP over 6LoWPAN over IEEE 802.15.4.

   Common to all scenarios above is that they are embedded in a service and connected to a network provided by a mobile service provider.  Usually there is a hierarchical deployment and management topology in place, where different parts of the network are managed by different management entities and the count of devices to manage is high (e.g., many thousands).  In general, the network comprises devices of manifold types and sizes, matching different device classes.  As such, the managing entity needs to be prepared to manage devices with diverse capabilities using different communication or management protocols.  In case the devices are directly connected to a gateway, they are most likely managed by a management entity integrated with the gateway, which itself is part of the Network Management System (NMS) run by the mobile operator.  Smartphones or embedded modules connected to a gateway might themselves be in charge of managing the devices on their level.  The initial and subsequent configuration of such a device is mainly based on self-configuration and is triggered by the device itself.

   The challenges in the management of devices in a mobile application are manifold.  Firstly, the issues caused by device mobility need to be taken into consideration.  While the cellular devices are moving around or roaming between different regional networks, they should report their status to the corresponding management entities with regard to their proximity and management hierarchy.  Secondly, a variety of device troubleshooting information needs to be reported to the management system in order to provide accurate service to the customer.  Third but not least, the NMS and the management protocol used need to be tailored to keep the cellular devices lightweight and as energy efficient as possible.
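   As a non-normative illustration of the last point, the sketch below shows how a cellular device might keep its status reports compact and batch them so that the radio is used sparingly; the record fields, batch size, and upload mechanism are hypothetical and are not defined by this document.

      # Non-normative sketch (Python): keeping status reports small and
      # batched so that a cellular device stays lightweight and energy
      # efficient.  Fields and thresholds are hypothetical examples.
      import json

      MAX_BATCH = 10      # upload at most once every 10 samples
      pending = []

      def queue_status(region, battery_pct, signal_dbm):
          """Queue a compact status record instead of sending it at once."""
          pending.append({"r": region, "b": battery_pct, "s": signal_dbm})
          if len(pending) >= MAX_BATCH:
              flush()

      def flush():
          """Hand one aggregated report to the management entity that is
          responsible for the device's current region."""
          report = json.dumps(pending, separators=(",", ":"))
          print("uploading", len(report), "bytes to regional manager")
          del pending[:]

      for sample in range(12):
          queue_status("region-A", 80 - sample, -95)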
   The data models used in these scenarios are mostly derived from the models of the operator NMS and might be used to monitor the status of the devices and to exchange the data sent by or read from the devices.  The gateway might be in charge of filtering and aggregating the data received from the device, as the information sent by the device might be mostly redundant.

2.11.  Automated Metering Infrastructure (AMI)

   An AMI network enables an electric utility to retrieve frequent electric usage data from each electric meter installed at a customer's home or business.  With an AMI network, a utility can also receive immediate notification of power outages when they occur, directly from the electric meters that are experiencing those outages.  In addition, if the AMI network is designed to be open and extensible, it could serve as the backbone for communicating with other distribution automation devices besides meters, which could include transformers and reclosers.

   In this use case, each meter in the AMI network contains a constrained device.  These devices are typically C2 devices.  Each meter connects to a constrained mesh network with a low-bandwidth radio.  These radios can run at 50, 150, or 200 kbps raw link speed, but actual network throughput may be significantly lower due to forward error correction, multihop delays, MAC delays, lossy links, and protocol overhead.

   The constrained devices are used to connect the metering logic with the network, so that usage data and outage notifications can be sent back to the utility's headend systems over the network.  These headend systems are located in a data center managed by the utility and may include meter data collection systems, meter data management systems, and outage management systems.

   The meters are connected to a mesh network, and each meter can act both as a source of traffic and as a router for other meters' traffic.  In a typical AMI application, smaller amounts of traffic (read requests, configuration) flow "downstream" from the headend to the mesh, and larger amounts of traffic flow "upstream" from the mesh to the headend.  However, during a firmware update operation, larger amounts of traffic might flow downstream while smaller amounts flow upstream.  Other applications that make use of the AMI network may have their own distinct traffic flows.

   The mesh network is anchored by a collection of higher-end devices, which contain a mesh radio that connects to the constrained network as well as a backhaul link that connects to a less-constrained network.  The backhaul link could be cellular, WiMAX, or Ethernet, depending on the backhaul networking technology that the utility has chosen.  These higher-end devices (termed "routers" in this use case) are typically installed on utility poles throughout the service territory.  Router devices are typically less constrained than meters and often contain the full routing table for all the endpoints routing through them.

   In this use case, the utility typically installs on the order of 1000 meters per router.  The collection of meters in a local network that route through a specific router is called, in this use case, a Local Meter Network (LMN).
   When powered on, each meter is designed to discover the nearby LMNs, select the optimal LMN to join, and select the optimal meters in that LMN to route through when sending data to the headend.  After joining the LMN, the meter is designed to continuously monitor and optimize its connection to the LMN, and it may change routes and LMNs as needed.

   Each LMN may be configured, e.g., to share an encryption key, providing confidentiality for all data traffic within the LMN.  This key may be obtained by a meter only after an end-to-end authentication process based on certificates, ensuring that only authorized and authenticated meters are allowed to join the LMN and, by extension, the mesh network as a whole.

   After joining the LMN, each endpoint obtains a routable, and possibly private, IPv6 address that enables end-to-end communication between the headend systems and each meter.  In this use case, the meters are always on.  However, due to lossy links and network optimization, not every meter will be immediately accessible, though eventually every meter will be able to exchange data with the headend.

   In a large AMI deployment, there may be 10 million meters supported by 10,000 routers, spread across a very large geographic area.  Within a single LMN, the meters may range between 1 and approximately 20 hops from the router.  During the deployment process, these meters are installed and turned on in large batches, and those meters must be authenticated, given addresses, and provisioned with any configuration information necessary for their operation.  During deployment and after deployment is finished, the network must be monitored continuously, and failures must be handled.  Configuration parameters may need to be changed on large numbers of devices, but most of the devices will be running the same configuration.  Moreover, eventually the firmware in those meters will need to be upgraded, and this must also be done in large batches because most of the devices will be running the same firmware image.

   Because there may be thousands of routers, this operational model (batch deployment, automatic provisioning, continuous monitoring, batch reconfiguration, batch firmware update) should apply to the routers as well as to the constrained devices.  The scale is different (thousands instead of millions) but still large enough to make individual management impractical for the routers as well.

2.12.  MANET Concept of Operations (CONOPS) in Military

   This use case on the Concept of Operations (CONOPS) focuses on the configuration and monitoring of networks that are currently being used in the military, and as such it offers insights into the network management challenges that military agencies are facing.

   As technology advances, military networks have become large and consist of a variety of different types of equipment running different protocols and tools, which increases the complexity of tactical networks.  Moreover, the lack of common open interfaces and Application Programming Interfaces (APIs) is often a challenge to network management.  Configurations are most likely performed manually.  Some devices do not support IP networks.  Integration and evaluation processes are no longer trivial for a large set of protocols and tools.
   In addition, the majority of the protocols and tools in use were developed by vendors and are proprietary, which makes integration more difficult.  The main reason leading to this problem is that there is no clearly defined standard for the MANET Concept of Operations (CONOPS).  In the following, a set of network operations scenarios is described, which might lead to the development of network management protocols and a framework that can potentially be used in military networks.

   Note: The term "node" is used in the IETF for either a host or a router.  The term "unit" or "mobile unit" in the military (e.g., Humvees, tanks) refers to a unit that contains multiple routers, hosts, and/or other non-IP-based communication devices.

   Scenario: Parking Lot Staging Area:

   The Parking Lot Staging Area is the most common network operation that is currently widely used in the military prior to deployment.  MANET routers, which can be identical (such as the platoon leader's or rifleman's radio), are shipped to a remote location along with a Fixed Network Operations Center (NOC), where they are all connected over traditional wired or wireless networks.  The Fixed NOC then performs mass-configuration and evaluation of the configuration processes.  The same concept can be applied to mobile units.  Once all units are successfully configured, they are ready to be deployed.

      +---------+            +----------+
      |  Fixed  |<---+------>| router_1 |
      |   NOC   |    |       +----------+
      +---------+    |
                     |       +----------+
                     +------>| router_2 |
                     |       +----------+
                     |            0
                     |            0
                     |            0
                     |       +----------+
                     +------>| router_N |
                             +----------+

              Figure 1: Parking Lot Staging Area

   Scenario: Monitoring with SatCom Reachback:

   Monitoring with SatCom Reachback, which is considered another common scenario in military network operations, is similar to the Parking Lot Staging Area.  Here, however, the Fixed NOC and the MANET routers are connected through a Satellite Communications (SatCom) network.  Monitoring with SatCom Reachback is a scenario where MANET routers are augmented with SatCom reachback capabilities while On-The-Move (OTM).  Vehicles carrying MANET routers support multiple types of wireless interfaces, including High Capacity Short Range Radio interfaces as well as Low Capacity OTM SatCom interfaces.  The radio interfaces are the preferred interfaces for carrying data traffic due to their high capacity, but their range is limited with respect to connectivity to a Fixed NOC.  Hence, OTM SatCom interfaces offer a more persistent but lower-capacity reachback capability.  The existence of a persistent SatCom reachback capability offers the NOC the ability to monitor and manage the MANET routers over the air.  As in the Parking Lot Staging Area scenario, the same concept can be applied to mobile units.
           ---                +--+                ---
          /  /----------------|SC|---------------/  /
           ---                +--+                ---
            |                                      |
      +---------+                                  |
      |  Fixed  |                                  |
      |   NOC   |                                  |
      +---------+                                  |
                                                   |
      +----------+                                 |     +----------+
      | router_1 |                                 |     | router_N |
      +----------+                                 |     +----------+
           *                                       |       *      *
           *                +----------+           |       *      *
           *****************| router_2 |***********|********      *
                            +----------+           |              *
                                 *                 |              *
                                 *            +----------+        *
                                 *************| router_3 |*********
                                              +----------+

                            --- SatCom links
                            *** Radio links

        Figure 2: Monitoring with one-hop SatCom Reachback network

   Scenario: Hierarchical Management:

   Another reasonable scenario common to military operations in a MANET environment is the Hierarchical Management scenario.  Vehicles carry a rather complex set of networking devices, including routers running MANET control protocols.  In this hierarchical architecture, the MANET mobile unit has a rather complex internal architecture, where a local manager within the unit is responsible for local management.  The local management includes management of the MANET router and control protocols, the firewall, servers, proxies, hosts, and applications.  A standard management interface is required in this architecture: in addition to requiring standard management interfaces into the components comprising the MANET nodal architecture, the local manager is responsible for local monitoring and the generation of periodic reports back to the Fixed NOC.

                                        Interface
                                            |
                                            V
      +---------+             +-------------------------+
      |  Fixed  | Interface   |  +---+     +---+        |
      |   NOC   |<---+------->|  | R |--+--| F |        |
      +---------+    |        |  +---+  |  +---+        |
                     |        |         |               |
                     |        |  +---+  |     +---+     |
                     |        |  | M |--+  +--| P |     |
                     |        |  +---+  |  |  +---+     |
                     |        |         |  |            |
                     |        |         +--+  +---+     |
                     |        |            +--| D |     |
                     |        |            |  +---+     |
                     |        |            |            |
                     |        |            |  +---+     |
                     |        |            +--| H |     |
                     |        |               +---+     |
                     |        |                  unit_1 |
                     |        +-------------------------+
                     |
                     |
                     |         +--------+
                     +-------->| unit_2 |
                     |         +--------+
                     |              0
                     |              0
                     |              0
                     |         +--------+
                     +-------->| unit_N |
                               +--------+

           Key:  R - Router
                 F - Firewall
                 P - PEP (Performance Enhancing Proxy)
                 D - Servers, e.g., DNS
                 H - Hosts
                 M - Local Manager

                 Figure 3: Hierarchical Management

   Scenario: Management over Lossy/Intermittent Links:

   In future military operations, standard management will be done over lossy and intermittent links, and ideally the Fixed NOC will become mobile.  In this architecture, the nature and current quality of each link are distinct.  However, there are a number of issues that would arise and need to be addressed:

   1.  Common and specific configurations are undefined:

       A.  When mass-configuring devices, a common set of configurations is undefined at this time.

       B.  Similarly, when configuring a specific device, the set of device-specific configurations is unknown.

   2.  Once the total number of units becomes quite large, scalability becomes an issue and needs to be addressed.

   3.  The states of the devices differ, and devices may be in various states of operation, e.g., ON/OFF, etc.

   4.  Pushing large data files over a reliable transport, e.g., TCP, would be problematic.  Would a new mechanism for transmitting large configurations over the air at low bandwidth need to be implemented?  Which protocol would be used at the transport layer?
   5.  How to validate a network configuration (and a local configuration) is complex; even when to cut over is an interesting question.

   6.  Security as a general issue needs to be addressed, as it could be problematic in military operations.

      +---------+                 +----------+
      | Mobile  |<--------------->| router_1 |
      |   NOC   |?--+             +----------+
      +---------+   |
           ^        |             +----------+
           |        +------------>| router_2 |
           |                      +----------+
           |                           0
           |                           0
           |                           0
           |                      +----------+
           +--------------------->| router_N |
                                  +----------+

          Figure 4: Management over Lossy/Intermittent Links

3.  IANA Considerations

   This document does not introduce any new code-points or namespaces for registration with IANA.

   Note to RFC Editor: this section may be removed on publication as an RFC.

4.  Security Considerations

   This document discusses use cases for a network of constrained devices and does not introduce any security issues by itself.

5.  Contributors

   The following persons made significant contributions to and reviewed this document:

   o  Ulrich Herberg (Fujitsu Laboratories of America) contributed Section 2.9 on Community Network Applications.

   o  Peter van der Stok contributed to Section 2.5 on Building Automation.

   o  Zhen Cao contributed to Section 2.10 on Mobile Applications.

   o  Gilman Tolle contributed Section 2.11 on Automated Metering Infrastructure.

   o  James Nguyen and Ulrich Herberg contributed Section 2.12 on MANET Concept of Operations (CONOPS) in Military.

6.  Acknowledgments

   The following persons reviewed and provided valuable comments on different versions of this document:

   Dominique Barthel, Carsten Bormann, Zhen Cao, Benoit Claise, Bert Greevenbosch, Ulrich Herberg, James Nguyen, Anuj Sehgal, Zach Shelby, and Peter van der Stok.

   The editors would like to thank the reviewers and the participants on the Coman mailing list for their valuable contributions and comments.

7.  References

7.1.  Normative References

7.2.  Informative References

   [RFC6130]  Clausen, T., Dearlove, C., and J. Dean, "Mobile Ad Hoc Network (MANET) Neighborhood Discovery Protocol (NHDP)", RFC 6130, April 2011.

   [RFC6779]  Herberg, U., Cole, R., and I. Chakeres, "Definition of Managed Objects for the Neighborhood Discovery Protocol", RFC 6779, October 2012.

   [RFC6988]  Quittek, J., Chandramouli, M., Winter, R., Dietz, T., and B. Claise, "Requirements for Energy Management", RFC 6988, September 2013.

   [I-D.ietf-lwig-terminology]
              Bormann, C., Ersue, M., and A. Keranen, "Terminology for Constrained Node Networks", draft-ietf-lwig-terminology-05 (work in progress), July 2013.

   [I-D.ietf-eman-framework]
              Parello, J., Claise, B., Schoening, B., and J. Quittek, "Energy Management Framework", draft-ietf-eman-framework-11 (work in progress), October 2013.

   [I-D.ietf-manet-olsrv2]
              Clausen, T., Dearlove, C., Jacquet, P., and U. Herberg, "The Optimized Link State Routing Protocol version 2", draft-ietf-manet-olsrv2-19 (work in progress), March 2013.

   [COM-REQ]  Ersue, M., "Constrained Management: Problem Statement and Requirements", draft-ersue-coman-prostate-reqs (work in progress), October 2013.

Appendix A.  Open issues

   o  It has been noted that the Industrial Applications, Home Automation, and Building Automation use cases intersect.

Appendix B.  Change Log
B.1.  draft-ersue-constrained-mgmt-03 - draft-ersue-opsawg-coman-use-cases-00

   o  Reduced the terminology section by removing terminology addressed in the LWIG and Coman Requirements drafts.  Referenced the other drafts.

   o  Checked and aligned all terminology against the LWIG terminology draft.

   o  Spent some effort to resolve the intersection between the Industrial Applications, Home Automation, and Building Automation use cases.

   o  Moved Section 3, Use Cases, from the companion document [COM-REQ] to this draft.

   o  Reformulated some text parts for more clarity.

B.2.  draft-ersue-constrained-mgmt-02-03

   o  Extended the terminology section and removed some of the terminology addressed in the new LWIG terminology draft.  Referenced the LWIG terminology draft.

   o  Moved Section 1.3 on Constrained Device Classes to the new LWIG terminology draft.

   o  Extended the classes of networks, considering the different types of radio and communication technologies in use and their dimensions.

   o  Extended the Problem Statement in Section 2 following the requirements listed in Section 4.

   o  The following requirements, which belong together and can be realized with similar or the same kinds of solutions, have been merged:

      *  Distributed Management and Peer Configuration,

      *  Device status monitoring and Neighbor-monitoring,

      *  Passive Monitoring and Reactive Monitoring,

      *  Event-driven self-management - Self-healing and Periodic self-management,

      *  Authentication of management systems and Authentication of managed devices,

      *  Access control on devices and Access control on management systems,

      *  Management of Energy Resources and Data models for energy management,

      *  Software distribution (group-based firmware update) and Group-based provisioning.

   o  Deleted the empty section on the gaps in network management standards, as it will be written in a separate draft.

   o  Added links to mentioned external pages.

   o  Added text on OMA M2M Device Classification in the appendix.

B.3.  draft-ersue-constrained-mgmt-01-02

   o  Extended the terminology section.

   o  Added additional text for the use cases concerning deployment type, network topology in use, network size, network capabilities, radio technology, etc.

   o  Added examples for device classes in a use case.

   o  Added additional text provided by Cao Zhen (China Mobile) for Mobile Applications and by Peter van der Stok for Building Automation.

   o  Added the new use cases 'Advanced Metering Infrastructure' and 'MANET Concept of Operations in Military'.

   o  Added the section 'Managing the Constrainedness of a Device or Network' discussing the needs of very constrained devices.

   o  Added a note that the requirements in [COM-REQ] need to be seen as standalone requirements and that the current document does not recommend any profile of requirements.

   o  Added a section in [COM-REQ] for the detailed requirements on constrained management matched to management tasks like fault, monitoring, and configuration management, security and access control, energy management, etc.

   o  Solved nits and added references.

   o  Added Appendix A on the related development in other bodies.

   o  Added Appendix B on the work in related research projects.

B.4.  draft-ersue-constrained-mgmt-00-01
   o  Split the section on 'Networks of Constrained Devices' into the sections 'Network Topology Options' and 'Management Topology Options'.

   o  Added the use cases 'Community Network Applications' and 'Mobile Applications'.

   o  Provided a Contributors section.

   o  Extended the section on 'Medical Applications'.

   o  Solved nits and added references.

Authors' Addresses

   Mehmet Ersue (editor)
   Nokia Solutions and Networks

   Email: mehmet.ersue@nsn.com

   Dan Romascanu
   Avaya

   Email: dromasca@avaya.com

   Juergen Schoenwaelder
   Jacobs University Bremen

   Email: j.schoenwaelder@jacobs-university.de