Internet Engineering Task Force                           M. Ersue, Ed.
Internet-Draft                                   Nokia Siemens Networks
Intended status: Informational                        D. Romascanu, Ed.
Expires: August 18, 2013                                          Avaya
                                                  J. Schoenwaelder, Ed.
                                               Jacobs University Bremen
                                                      February 14, 2013

  Management of Networks with Constrained Devices: Problem Statement,
                        Use Cases and Requirements
                     draft-ersue-constrained-mgmt-03

Abstract

   This document provides a problem statement and discusses the use
   cases and requirements for the management of networks with
   constrained devices.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 18, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.
   Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Overview
     1.2.  Terminology
     1.3.  Class of Networks in Focus
     1.4.  Constrained Device Deployment Options
     1.5.  Management Topology Options
     1.6.  Managing the Constrainedness of a Device or Network
   2.  Problem Statement
   3.  Use Cases
     3.1.  Environmental Monitoring
     3.2.  Medical Applications
     3.3.  Industrial Applications
     3.4.  Home Automation
     3.5.  Building Automation
     3.6.  Energy Management
     3.7.  Transport Applications
     3.8.  Infrastructure Monitoring
     3.9.  Community Network Applications
     3.10. Mobile Applications
     3.11. Automated Metering Infrastructure (AMI)
     3.12. MANET Concept of Operations (CONOPS) in Military
   4.  Requirements on the Management of Networks with Constrained
       Devices
     4.1.  Management Architecture/System
     4.2.  Management protocols and data model
     4.3.  Configuration management
     4.4.  Monitoring functionality
     4.5.  Self-management
     4.6.  Security and Access Control
     4.7.  Energy Management
     4.8.  SW Distribution
     4.9.  Traffic management
     4.10. Transport Layer
     4.11. Implementation Requirements
   5.  IANA Considerations
   6.  Security Considerations
   7.  Contributors
   8.  Acknowledgments
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Appendix A.  Related Development in other Bodies
     A.1.  ETSI TC M2M
     A.2.  OASIS
     A.3.  OMA
     A.4.  IPSO Alliance
   Appendix B.  Related Research Projects
   Appendix C.  Open issues
   Appendix D.  Change Log
     D.1.  02-03
     D.2.  01-02
     D.3.  00-01
   Authors' Addresses

1.  Introduction

1.1.  Overview
   Small devices with limited CPU, memory, and power resources, so-
   called constrained devices (also known as sensors, smart objects, or
   smart devices), can constitute a network.  Such a network of
   constrained devices may itself be constrained or challenged, e.g.,
   with unreliable or lossy channels, wireless technologies with
   limited bandwidth and a dynamic topology, and the need for a gateway
   or proxy to connect to the Internet.  In other scenarios, the
   constrained devices can be connected to a non-constrained network
   using off-the-shelf protocol stacks.

   Constrained devices might be in charge of gathering information in
   diverse settings, including natural ecosystems, buildings, and
   factories, and sending the information to one or more server
   stations.  Constrained devices may work under severe resource
   constraints such as limited battery and computing power, little
   memory, and insufficient wireless bandwidth and communication
   capabilities.  A central entity, e.g., a base station or controlling
   server, might have more computational and communication resources
   and can act as a gateway between the constrained devices and the
   application logic in the core network.

   Today, small devices of diverse sizes and with different resources
   and capabilities are becoming connected.  Mobile personal gadgets,
   building-automation devices, cellular phones, Machine-to-Machine
   (M2M) devices, etc. benefit from interacting with other "things"
   nearby or somewhere in the Internet.  With this, the Internet of
   Things (IoT) becomes a reality, built up of uniquely identifiable
   objects (things).  Over the next decade, this could grow to
   trillions of constrained devices and will greatly increase the
   Internet's size and scope.
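   The gateway role described above can be pictured with a minimal
   sketch: a more capable central entity collects readings from
   constrained devices and relays them towards the application logic.
   This is an illustration only, not part of any proposed solution;
   all class and attribute names are hypothetical.

```python
# Illustrative sketch only: a central entity (base station / gateway)
# gathering readings from constrained devices, as described above.
# All names are hypothetical; a real device would sample hardware.

class ConstrainedDevice:
    """A sensor node producing readings under tight resource limits."""

    def __init__(self, device_id):
        self.device_id = device_id

    def read(self):
        # Dummy reading standing in for a real sensor sample.
        return {"device": self.device_id, "value": 42}


class Gateway:
    """A more capable node that collects readings and can forward them
    to the application logic in the core network."""

    def __init__(self, devices):
        self.devices = devices

    def collect(self):
        # Gather one reading per device, as a base station might do
        # during each communication window.
        return [d.read() for d in self.devices]


gateway = Gateway([ConstrainedDevice("sensor-1"), ConstrainedDevice("sensor-2")])
readings = gateway.collect()
```

   The point of the sketch is the asymmetry: the devices only produce
   data, while the gateway carries the collection and forwarding logic.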
   Network management is characterized by monitoring network status,
   detecting faults and inferring their causes, setting network
   parameters, and carrying out actions to remove faults, maintain
   normal operation, and improve network efficiency and application
   performance.  A traditional network management application
   periodically collects information from the set of elements to be
   managed, processes the data, and presents the results to the
   network management users.  Constrained devices, however, often have
   limited power and low transmission range, and might be unreliable.
   They might also need to work in hostile environments with advanced
   security requirements or be used in harsh environments for a long
   time without supervision.  Due to such constraints, the management
   of a network with constrained devices poses different types of
   challenges compared to the management of a traditional IP network.

   The IETF has already done extensive standardization work to enable
   communication in IP networks and to manage such networks as well as
   the manifold types of nodes in these networks [RFC6632].  However,
   the IETF so far has not developed any specific technologies for the
   management of constrained devices and the networks comprised of
   constrained devices.  IP-based sensors or constrained devices in
   such an environment, i.e., devices with very limited memory and CPU
   resources, today use application-layer protocols in an ad-hoc
   manner to do simple resource management and monitoring.

   This document aims to understand the use cases and requirements for
   the management of a network with constrained devices.  The document
   deliberately avoids recommending any particular solutions.
   Section 1.3 and Section 1.5 describe different topology options for
   the networking and management of constrained devices.
   Section 1.4 explains different deployment
   options for the networking of constrained devices.  Section 2
   provides a problem statement on the management of networked
   constrained devices.  Section 3 lists diverse use cases and
   scenarios for management from the network as well as from the
   application point of view.  Section 4 lists requirements on the
   management of applications and networks with constrained devices.
   Note that the requirements in Section 4 are to be seen as
   standalone requirements.  As of today, this document does not
   recommend the realization of a profile of requirements.

1.2.  Terminology

   Concerning constrained devices and networks, this document
   generally builds on the terminology defined in [LWIG-TERMS].  As
   such, terms like Constrained Device and Constrained Network are
   defined in [LWIG-TERMS].

   The following terms are additionally used throughout this document:

   AMI:  (Advanced Metering Infrastructure) A system including
      hardware, software, and networking technologies that measures,
      collects, and analyzes energy usage, and communicates with a
      hierarchically deployed network of metering devices, either on
      request or on a schedule.

   C0:  Class 0 constrained device as defined in Section 3 of
      [LWIG-TERMS].

   C1:  Class 1 constrained device as defined in Section 3 of
      [LWIG-TERMS].

   C2:  Class 2 constrained device as defined in Section 3 of
      [LWIG-TERMS].

   Client:  The originating endpoint of a request; the destination
      endpoint of a response.

   Intermediary entity:  As defined in the CoAP document, an
      intermediary entity can be a CoAP endpoint that acts both as a
      server and as a client towards (possibly via further
      intermediaries) an origin server.  An intermediary entity can be
      used to support hierarchical management.

   Network of Constrained Devices:  A network to which constrained
      devices are connected.
      It may or may not be a Constrained Network
      (see [LWIG-TERMS] for the definition of the term Constrained
      Network).

   M2M:  (Machine-to-Machine) Stands for the automatic transfer of
      data between devices of different kinds.  In M2M scenarios, a
      device (such as a sensor or meter) captures an event, which is
      relayed through a network (wireless, wired, or hybrid) to an
      application.

   MANET:  (Mobile Ad-hoc Network) A self-configuring,
      infrastructureless network of mobile devices connected by
      wireless technologies.

   Mote:  A sensor node in a wireless network that is capable of
      performing some limited processing, gathering sensory
      information, and communicating with other connected nodes in the
      network.

   Server:  The destination endpoint of a request; the originating
      endpoint of a response.

   Smart Grid:  An electrical grid that uses communication
      technologies to gather and act on information in an automated
      fashion to improve the efficiency, reliability, and
      sustainability of the production and distribution of
      electricity.

   Smart Meter:  An electrical meter (in the context of a Smart Grid)
      that records consumption of electric energy in intervals of an
      hour or less and communicates that information at least daily
      back to the utility network for monitoring and billing purposes.

   For a detailed discussion of constrained networks as well as the
   classes of constrained devices and their capabilities, please see
   [LWIG-TERMS].

1.3.  Class of Networks in Focus

   In this document, we differentiate the following types of networks
   with respect to their transport and communication technologies.
   (Note that a network in general can involve both constrained and
   non-constrained devices.)

   o  Wireline non-constrained networks (CN0), e.g., an Ethernet LAN
      with non-constrained and constrained devices involved.
   o  A combination of wireline and wireless networks (CN1), which may
      or may not be mesh-based but have multi-hop connectivity between
      constrained devices, utilizing dynamic routing in both the
      wireless and wireline portions of the network.  CN1 networks
      usually support highly distributed applications with many nodes
      (e.g., environmental monitoring) and tend to deal with large-
      scale multipoint-to-point systems with massive data flows.
      Wireless Mesh Networks (WMNs), as a specific type of CN1
      network, use off-the-shelf radio technology such as Wi-Fi,
      WiMAX, and cellular 3G/4G.  WMNs are reliable due to the
      redundancy they offer and often have a more planned deployment
      to provide dynamic and cost-effective connectivity over a
      certain geographic area.

   o  A combination of wireline and wireless networks with point-to-
      point or point-to-multipoint communication (CN2), generally with
      single-hop connectivity to constrained devices, utilizing static
      routing over the wireless network.  CN2 networks support short-
      range, point-to-point, low-data-rate, source-to-sink types of
      applications such as RFID systems, light switches, fire and
      smoke detectors, and home appliances.  CN2 networks usually
      cover confined short-range spaces such as a home, a factory, a
      building, or the human body.  IEEE 802.15.1 (Bluetooth) and IEEE
      802.15.4 are well-known examples of applicable standards for CN2
      networks.

   o  Mobile Ad-hoc Networks (MANETs) are self-configuring,
      infrastructureless networks of mobile devices connected by
      wireless technologies.  MANETs are based on point-to-point
      communication between devices that move independently in any
      direction and therefore change their links to other devices
      frequently.  MANET devices act as routers and forward traffic
      unrelated to their own use.

   A CN0 is used for specific applications like Building Automation or
   Infrastructure Monitoring.
   However, CN1 and CN2 networks are of
   particular interest for the analysis of the management of
   constrained devices in this document.

   Furthermore, different network characteristics are determined by
   multiple dimensions: dynamicity of the topology, bandwidth, and
   loss rate.  In the following, each dimension is explained, and the
   networks in scope for this document are outlined.

   Network Topology:

   The topology of a network can be represented as a graph, with edges
   (i.e., links) and vertices (routers and hosts).  Examples of
   different topologies include "star" topologies (with one central
   node and multiple nodes at one-hop distance), tree structures (with
   each node having exactly one parent), directed acyclic graphs (with
   each node having one or more parents), clustered topologies (where
   one or more "cluster heads" are responsible for a certain area of
   the network), mesh topologies (fully distributed), etc.

   Management protocols may take advantage of specific network
   topologies, for example by distributing large-scale management
   tasks amongst multiple distributed network management stations
   (e.g., in the case of a mesh topology), or by using a hierarchical
   management approach (e.g., in the case of a tree topology).  These
   different management topology options are described in Section 1.5.

   Note that in certain network deployments, such as community ad-hoc
   networks (as described in Section 3.9), the topology is not pre-
   planned and thus may be unknown for management purposes.  In other
   use cases, such as industrial applications (as described in
   Section 3.3), the topology may be designed in advance and can
   therefore be taken advantage of when managing the network.

   Dynamicity of the network topology:

   The dynamicity of the network topology determines the rate of
   change of the graph over time.
   Such changes can occur due to different
   factors, such as mobility of nodes (e.g., in MANETs or cellular
   networks), duty cycles (for low-power devices enabling their
   network interface only periodically to transmit or receive
   packets), or unstable links (in particular, wireless links with
   strongly fluctuating link quality).

   Examples of different levels of dynamicity of the topology are
   Ethernets (with a typically very static topology) on the one side
   and Low-power and Lossy Networks (LLNs) on the other side.  LLN
   nodes often use duty cycles, operate on unreliable wireless links,
   and are potentially mobile (e.g., in sensor networks).

   The more dynamic the topology is, the more routing, transport, and
   application-layer protocols have to cope with interrupted
   connectivity and/or longer delays.  For example, management
   protocols (with a given underlying transport protocol) that expect
   continuous session flows without changes of routes during a
   communication flow may fail to operate.

   Networks with a very low dynamicity (e.g., Ethernet), with no or
   infrequent topology changes (e.g., less than once every 30
   minutes), are in scope of this document if they are used with
   constrained devices (see, e.g., the use case "Building Automation"
   in Section 3.5).

   Traffic flows:

   The traffic flow in a network determines from which sources data
   traffic is sent to which destinations in the network.  Several
   different traffic flows are defined in [I-D.ietf-roll-terminology],
   including "point-to-point" (P2P), "multipoint-to-point" (MP2P), and
   "point-to-multipoint" (P2MP) flows:

   o  P2P: Point-to-Point.  This refers to traffic exchanged between
      two nodes (regardless of the number of hops between the two
      nodes).

   o  P2MP: Point-to-Multipoint traffic refers to traffic between one
      node and a set of nodes.  This is similar to the P2MP concept in
      Multicast or MPLS Traffic Engineering.
   o  MP2P: Multipoint-to-Point is used to describe a particular
      traffic pattern (e.g., MP2P flows collecting information from
      many nodes and flowing inwards towards a collecting sink).

   If one of these traffic patterns is predominant in a network,
   protocols (routing, transport, application) may be optimized for
   the specific traffic flow.  For example, in a network with a tree
   topology and MP2P traffic, collection tree protocols efficiently
   send data from the leaves of the tree to the root of the tree, via
   each node's parent.

   Bandwidth:

   The bandwidth of the network is the amount of data that can be sent
   per unit of time between two communication endpoints.  It is
   usually determined by the link with the minimum bandwidth on the
   path from the source to the destination of data packets.  The
   bandwidth in networks can range from a few kilobytes per second
   (such as on some 802.15.4 link layers) to many gigabytes per second
   (e.g., on fiber optics).

   For management purposes, the management protocol typically needs to
   send information between the network management station and the
   clients, for monitoring or control purposes.  If the available
   bandwidth is insufficient for the management protocol, packets will
   be buffered and eventually dropped, making management with such a
   protocol impossible.

   Networks without bandwidth limitation (e.g., Ethernet) are in scope
   of this document if they are used with constrained devices (see the
   use case "Building Automation" in Section 3.5).

   Loss rate:

   The loss rate (or bit error rate) is the number of bit errors
   divided by the total number of bits transmitted.  For wired
   networks, loss rates are typically extremely low, e.g., around
   10^-12 or 10^-13 for the latest 10-Gbit Ethernet.
   For wireless networks, such as 802.15.4,
   the bit error rate can be as high as 10^-1 to 10^0 in case of
   interference.  Even when using a reliable transport protocol,
   management operations can fail if the loss rate is too high, unless
   they are specifically designed to cope with these situations.

   Note: The discussion of the management requirements of MANETs is
   currently not in the focus of this document.  The use case in
   Section 3.12 has been provided to make clear how a MANET-based
   application differs from others.

1.4.  Constrained Device Deployment Options

   We differentiate the following deployment options for constrained
   devices:

   o  a network of constrained devices that communicate with each
      other,

   o  constrained devices that are connected directly to the Internet
      or an IP network,

   o  a network of constrained devices that communicate with a gateway
      or proxy with more communication capabilities, possibly acting
      as a representative of the devices to entities in the non-
      constrained network,

   o  constrained devices that are connected to the Internet or an IP
      network via a gateway/proxy,

   o  a hierarchy of constrained devices, e.g., a network of C0
      devices connected to one or more C1 devices, connected to one or
      more C2 devices, connected to one or more gateways, connected to
      some application servers or an NMS system,

   o  the possibility of device grouping (possibly in a dynamic
      manner) such that the grouped devices can act as one logical
      device at the edge of the network, with one device in this group
      acting as the managing entity.

1.5.  Management Topology Options

   We differentiate the following options for the management of
   networks of constrained devices:

   o  A network of constrained devices managed by one central manager.
      A logically centralized management might be implemented in a
      hierarchical fashion for scalability and robustness reasons.
      The
      manager and the management application logic might have a
      gateway/proxy in between or might be on different nodes in
      different networks, e.g., with the management application
      running on a cloud server.

   o  Distributed management, where a constrained network is managed
      by more than one manager.  Each manager controls a subnetwork
      and may communicate directly with other manager stations in a
      cooperative fashion.  The distributed management may be weakly
      distributed, where functions are broken down and assigned to
      many managers dynamically, or strongly distributed, where almost
      all managed things have embedded management functionality and
      explicit management disappears.  The latter usually comes at the
      price that the strongly distributed management logic now itself
      needs to be managed.

   o  Hierarchical management, where a hierarchy of constrained
      networks is managed by managers at the corresponding hierarchy
      levels.  That is, each manager is responsible for managing the
      nodes in its sub-network.  It passes information from its sub-
      network to its higher-level manager and disseminates management
      functions received from the higher-level manager to its sub-
      network.  Hierarchical management is essentially a scalability
      mechanism; logically, the decision-making may still be
      centralized.

1.6.  Managing the Constrainedness of a Device or Network

   The capabilities of a constrained device or network, and the
   constrainedness thereof, have an impact on the requirements for the
   management of such a network or devices.

   A constrained device:

   o  might only support an unreliable radio with lossy links, i.e.,
      the client and server of a management protocol need to
      gracefully ignore incomplete commands or repeat commands as
      necessary.

   o  might only be able to go online from time to time, i.e.,
      a command might need to be repeated after a longer timeout, or
      the timeout value with which one endpoint waits for a response
      needs to be sufficiently high.

   o  might only be able to support a limited operating time (e.g.,
      based on the available battery), i.e., the devices need to
      economize their energy usage with suitable mechanisms, and the
      managing entity needs to monitor and control the energy status
      of the constrained devices it manages.

   o  might only be able to support one simple communication protocol,
      i.e., it needs to be possible to scale the management protocol
      down from constrained (C2) to very constrained (C0) devices,
      with a modular implementation and a very basic version with just
      a few simple commands.

   o  might only be able to support limited or no user and/or
      transport security, i.e., the management system needs to support
      a less costly and simple but sufficiently secure authentication
      mechanism.

   o  might not be able to support compression and decompression of
      exchanged data due to limited CPU power, i.e., an intermediary
      entity that is capable of data compression should be able to
      communicate both with devices that support data compression
      (e.g., C2) and with devices that do not (e.g., C1 and C0).

   o  might only be able to support very simple encryption, i.e., it
      would be efficient if the devices used cryptographic algorithms
      that are supported in hardware.

   o  might only be able to communicate with one single managing
      entity and might not support the parallel access of many
      managing entities.

   o  might depend on a self-configuration feature, i.e., the managing
      entity might not know all devices in a network, and the device
      needs to be able to initiate connection setup for the device
      configuration.

   o  might depend on a self- or neighbor-monitoring feature, i.e.,
      the managing entity might not be able to monitor all devices in
      a network continuously.

   o  might only be able to communicate with its neighbors, i.e., the
      device should be able to get its configuration from a neighbor.

   o  might only be able to support parsing of data models of limited
      size, i.e., the device data models need to be compact,
      containing only the most necessary data, and if possible
      parsable as a stream.

   o  might only be able to support limited or no failure detection,
      i.e., the managing entity needs to handle gracefully the
      situation where a failure is not detected or is detected late,
      e.g., by asking repeatedly.

   o  might only be able to support the reporting of just one failure
      type or a limited set of failure types.

   o  might only be able to support a limited set of notifications,
      possibly only an "I-am-alive" message.

   o  might only be able to support a soft reset for failure recovery.

   o  might possibly generate a huge amount of redundant reporting
      data, i.e., the intermediary management entity should be able to
      filter and aggregate redundant data.

   A constrained network:

   o  might only support an unreliable radio with lossy links, i.e.,
      the client and server of a management protocol need to repeat
      commands as necessary or gracefully ignore incomplete commands.

   o  might need to be managed based on multicast communication, i.e.,
      the managing entity needs to be prepared to configure many
      devices at once based on the same data model.

   o  might have a very large topology supporting 10,000 or more nodes
      for some applications; as such, node naming is a specific issue
      for constrained networks.

   o  must be able to self-organize, i.e., given the large number of
      nodes and their potential placement in hostile locations and a
      frequently changing topology, manual configuration is typically
      not feasible.
      As such, the network must be able to reconfigure
      itself so that it can continue to operate properly and support
      reliable connectivity.

   o  needs a management solution that is energy-efficient, using as
      little wireless bandwidth as possible, since communication is
      highly energy-demanding.

   o  needs to support localization schemes to determine the location
      of devices, since the devices might be moving and location
      information is important for some applications.

   o  needs a management solution that is scalable, as the network may
      consist of thousands of nodes and may need to be extended
      continuously.

   o  needs to provide fault tolerance.  Faults in network operation,
      including hardware and software errors, can be detected by the
      transport protocol and other self-monitoring mechanisms, which
      can be used to provide fault tolerance.

   o  might require new management capabilities: for example, network
      coverage information and a constrained-device power-
      distribution map.

   o  might require a new management function for data management,
      since the type and amount of data collected in constrained
      networks differ from those of traditional networks.

   o  might also need energy-efficient key management algorithms for
      security.

2.  Problem Statement

   The terminology for the "Internet of Things" is still nascent, and
   depending on the network type or layer in focus, diverse
   technologies and terms are in use.  Common to all these
   considerations is that the "Things" or "Objects" are supposed to
   have physical or virtual identities using interfaces to
   communicate.  In this context, we need to differentiate between
   Constrained and Smart Devices identified by an IP address, as
   opposed to virtual entities such as Smart Objects, which can be
   identified as a resource or a virtual object by using a unique
   identifier.
Furthermore, smart devices usually have limited memory and CPU power and aim to be self-configuring and easy to deploy.

However, the small size of the network nodes requires a rethinking of protocol characteristics concerning power consumption, performance, memory, and CPU usage. As such, there is a demand for protocol simplification, energy-efficient communication, less CPU usage, and a small memory footprint.

On the application layer, the IETF is already developing protocols like the Constrained Application Protocol (CoAP) [I-D.ietf-core-coap] supporting constrained devices and networks, e.g., for smart energy applications or home automation environments. The deployment of such an environment involves many, in some scenarios up to millions of, small devices (e.g., smart meters), which produce a huge amount of data. This data needs to be collected, filtered, and pre-processed for further use in diverse services.

Considering the high number of nodes to deploy, one has to think about the manageability aspects of the smart devices and plan for easy deployment, configuration, and management of the networks of constrained devices as well as the devices themselves. Consequently, seamless monitoring and self-configuration of such network nodes becomes more and more imperative. Self-configuration and self-management is already a reality in the standards of some bodies, such as 3GPP. To introduce self-configuration of smart devices successfully, a device-initiated connection establishment is required.

A simple application layer protocol, such as CoAP, is essential to address the issue of efficient object-to-object communication and information exchange. Such an information exchange should be based on interoperable data models to enable the exchange and interpretation of diverse application and management related data.
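The difference between a verbose, self-describing encoding and a compact, fixed-layout encoding that a constrained device could parse as a stream can be illustrated with a short sketch. The field names, format layout, and values below are hypothetical examples, not part of any standardized data model:

```python
import json
import struct

# Hypothetical sensor report; the field names and values are
# illustrative assumptions only.
reading = {"id": 42, "temp_c": 21.5, "battery_pct": 87}

# Verbose, self-describing encoding.
verbose = json.dumps(reading).encode("utf-8")

# Compact fixed-layout encoding: 2-byte id, 4-byte float temperature,
# 1-byte battery level (network byte order), parsable as a stream.
compact = struct.pack("!HfB", reading["id"], reading["temp_c"],
                      reading["battery_pct"])

print(len(verbose), len(compact))  # the compact form is a few bytes
```

Interoperability then hinges on both ends agreeing on the layout and its meaning, which is the role an interoperable data model would play.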
In an ideal world, we would have only one network management protocol for monitoring, configuration, and exchanging management data, independently of the type of the network (e.g., Smart Grid, wireless access, or core network). Furthermore, it would be desirable to derive the basic data models for constrained devices from the core models used today to enable reuse of functionality and end-to-end information exchange. However, the current management protocols seem to be too heavyweight compared to the capabilities constrained devices have and are not directly applicable in a network of constrained devices. Furthermore, the data models addressing the requirements of such smart devices are yet to be designed.

The IETF so far has not developed any specific technologies for the management of constrained devices and the networks composed of constrained devices. IP-based sensors or constrained devices in such an environment, i.e., devices with very limited memory and CPU resources, today use, e.g., application-layer protocols to do simple resource management and monitoring. This might be sufficient for some basic cases; however, there is a need to reconsider the network management mechanisms based on the new, changed, as well as reduced requirements coming from smart devices and the networks of such constrained devices. It is questionable whether we can take the same comprehensive approach we use in an IP network for the management of constrained devices. Hence, it might become necessary to design the management of a network with constrained devices to be as simple and lightweight as possible.
As Section 1.6 highlights, there are diverse characteristics of constrained devices or networks, which stem from their constrainedness and therefore have an impact on the requirements for the management of such a network with constrained devices. The use cases discussed in Section 3 show that the requirements on constrained networks are manifold and need to be analyzed from different angles, e.g., concerning the design of the management architecture, the selection of the appropriate protocol features, as well as the specific issues which are new in the context of constrained devices. Examples of such issues are the careful management of scarce energy resources, the necessity for self-organization and self-management of such devices, but also the implementation considerations to enable the use of common communication technologies on constrained hardware in an efficient manner. For an exhaustive list of issues and requirements which need to be addressed for the management of a network with constrained devices, please see Section 1.6 and Section 4.

3. Use Cases

This section discusses some application scenarios where networks of constrained devices are expected to be deployed. For each application scenario, we briefly describe its characteristics, followed by a discussion of how network management can be provided, who is likely going to be responsible for it, and on which time scale management operations are likely to be carried out.

3.1. Environmental Monitoring

Environmental monitoring applications are characterized by the deployment of a number of sensors to monitor emissions, water quality, or even the movements and habits of wildlife. Other applications in this category include earthquake or tsunami early-warning systems. The sensors often span a large geographic area, they can be mobile, and they are often difficult to replace.
Furthermore, the sensors are usually not protected against tampering.

Management of environmental monitoring applications is largely concerned with monitoring whether the system is still functional and with the roll-out of new constrained devices in case the system loses too much of its structure. The constrained devices themselves need to be able to establish connectivity (auto-configuration), and they need to be able to deal with events such as losing neighbors or being moved to other locations.

Management responsibility typically rests with the organization running the environmental monitoring application. Since these monitoring applications must be designed to tolerate a number of failures, the time scale for detecting and recording failures is for some of these applications likely measured in hours, and repairs might easily take days. However, for certain environmental monitoring applications, much tighter time scales may exist and might be enforced by regulations (e.g., monitoring of nuclear radiation).

3.2. Medical Applications

Constrained devices can be seen as an enabling technology for advanced and possibly remote health monitoring and emergency notification systems, ranging from blood pressure and heart rate monitors to advanced devices capable of monitoring implanted technologies, such as pacemakers or advanced hearing aids. Medical sensors may not only be attached to human bodies; they might also exist in the infrastructure used by humans, such as bathrooms or kitchens. Medical applications will also be used to ensure that treatments are being applied properly, and they might guide people losing orientation. Fitness and wellness applications, such as connected scales or wearable heart monitors, encourage consumers to exercise and empower self-monitoring of key fitness indicators.
Different applications use Bluetooth, Wi-Fi, or ZigBee connections to access the patient's smartphone or home cellular connection to access the Internet.

Constrained devices that are part of medical applications are managed either by the users of those devices or by an organization providing medical (monitoring) services for physicians. In the first case, management must be automatic and/or easy to install and set up by average people. In the second case, it can be expected that devices will be controlled by specially trained people. In both cases, however, it is crucial to protect the privacy of the people to whom medical devices are attached. Even though the data collected by a heartbeat monitor might be protected, the pure fact that someone carries such a device may need protection. As such, certain medical appliances may not want to participate in discovery and self-configuration protocols in order to remain invisible.

Many medical devices are likely to be used (and relied upon) to provide data to physicians in critical situations, since the biggest market is likely elderly and handicapped people. As such, fault detection of the communication network or the constrained devices becomes a crucial function that must be carried out with high reliability and, depending on the medical appliance and its application, within seconds.

3.3. Industrial Applications

Industrial applications and smart manufacturing refer not only to production equipment, but also to a factory that carries out centralized control of energy, HVAC (heating, ventilation, and air conditioning), lighting, access control, etc. via a network. For the management of a factory, it is becoming essential to implement smart capabilities.
From an engineering standpoint, industrial applications are intelligent systems enabling rapid manufacturing of new products, dynamic response to product demand, and real-time optimization of manufacturing production and supply chain networks. Potential industrial applications, e.g., for smart factories and smart manufacturing, are:

o  Digital control systems with embedded, automated process controls, operator tools, as well as service information systems optimizing plant operations and safety.

o  Asset management using predictive maintenance tools, statistical evaluation, and measurements maximizing plant reliability.

o  Smart sensors detecting anomalies to avoid abnormal or catastrophic events.

o  Smart systems integrated within the industrial energy management system and externally with the smart grid, enabling real-time energy optimization.

Sensor networks are an essential technology used for smart manufacturing. Measurements, automated controls, plant optimization, health and safety management, and other functions are provided by a large number of networked sensors. Data interoperability and seamless exchange of product, process, and project data are enabled through interoperable data systems used by collaborating divisions or business systems. Intelligent automation and learning systems are vital to smart manufacturing but must be effectively integrated with the decision environment. Wireless sensor networks (WSNs) have been developed for machinery Condition-based Maintenance (CBM), as they offer significant cost savings and enable new functionalities. Inaccessible locations, rotating machinery, hazardous areas, and mobile assets can be reached with wireless sensors. Today, WSNs can provide wireless link reliability, real-time capabilities, and quality of service, and they enable industrial and related wireless sense-and-control applications.
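As an illustration of the kind of check a smart sensor in a CBM setting might run locally, the following sketch flags a reading that deviates strongly from recent history. The window size, threshold k, and sample values are assumptions made up for this example, not an industrial algorithm:

```python
from statistics import mean, stdev

def anomalous(history, reading, k=3.0):
    """Flag a reading more than k standard deviations away from the
    recent history (a deliberately simple illustrative heuristic)."""
    if len(history) < 2:
        return False  # not enough history to judge
    m, s = mean(history), stdev(history)
    return s > 0 and abs(reading - m) > k * s

vibration = [0.51, 0.49, 0.50, 0.52, 0.48]  # hypothetical amplitudes
print(anomalous(vibration, 0.50))  # within the normal band
print(anomalous(vibration, 2.40))  # far outside: likely a fault
```

A real deployment would report such events to the plant's asset management system rather than merely printing them.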
Management of industrial and factory applications is largely focused on monitoring whether the system is still functional, real-time continuous performance monitoring, and optimization as necessary. The factory network might be part of a campus network or connected to the Internet. The constrained devices in such a network need to be able to configure themselves (auto-configuration) and might need to deal with error conditions as much as possible locally. Access control has to be provided with multi-level administrative access and security. Support and diagnostics can be provided through remote monitoring access centralized outside of the factory.

Management responsibility is typically owned by the organization running the industrial application. Since the monitoring applications must handle a potentially large number of failures, the time scale for detecting and recording failures is for some of these applications likely measured in minutes. However, for certain industrial applications, much tighter time scales may exist, e.g., in real time, which might be enforced by the manufacturing process or the use of critical material.

3.4. Home Automation

Home automation includes the control of lighting, heating, ventilation, air conditioning, appliances, and entertainment devices to improve convenience, comfort, energy efficiency, and security. It can be seen as a residential extension of building automation.

Home automation networks need a certain amount of configuration (associating switches or sensors to actuators) that is either provided by electricians deploying home automation solutions or done by residents using the application user interface to configure (parts of) the home automation solution.
Similarly, failures may be reported via suitable interfaces to residents, or they might be recorded and made available to electricians in charge of the maintenance of the home automation infrastructure.

The management responsibility lies either with the residents, or it may be outsourced to electricians providing management of home automation solutions as a service. The time scale for failure detection and resolution is in many cases likely counted in hours to days.

3.5. Building Automation

Building automation comprises the distributed systems designed and deployed to monitor and control the mechanical, electrical, and electronic systems inside buildings with various destinations (e.g., public and private, industrial, institutional, or residential). Advanced Building Automation Systems (BAS) may be deployed concentrating the various functions of safety, environmental control, occupancy, and security. More and more, the deployment of the various functional systems is connected to the same communication infrastructure (possibly Internet Protocol based), which may involve wired or wireless communication networks inside the building.

Building automation requires the deployment of a large number (10-100,000) of sensors that monitor the status of devices and parameters inside the building, and of controllers with different specialized functionality for areas within the building or the totality of the building. Inter-node distances between neighboring nodes vary between 1 and 20 meters. Contrary to home automation, in building management all devices are known to a set of commissioning tools and a data storage, such that every connected device has a known origin. The management includes verifying the presence of the expected devices and detecting the presence of unwanted devices.
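Because every connected device has a known origin, the verification step described above amounts to comparing the commissioning records with what is actually discovered on the network. The device identifiers in this sketch are invented for illustration:

```python
# Identifiers recorded by the commissioning tools (hypothetical).
commissioned = {"lamp-0101", "hvac-0201", "fire-0301"}

# Identifiers discovered on the building network (hypothetical).
discovered = {"lamp-0101", "fire-0301", "cam-9999"}

missing = commissioned - discovered   # expected devices that are absent
unwanted = discovered - commissioned  # devices with no known origin

print(sorted(missing))   # ['hvac-0201']
print(sorted(unwanted))  # ['cam-9999']
```

Both results feed management actions: missing devices would trigger maintenance, while devices of unknown origin would raise a security alarm.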
Examples of functions performed by such controllers are regulating the quality, humidity, and temperature of the air inside the building, and lighting. Other systems may report the status of the machinery inside the building, like elevators, or inside the rooms, like projectors in meeting rooms. Security cameras and sensors may be deployed and operated on separate dedicated infrastructures connected to the common backbone. The deployment area of a BAS is typically inside one building (or part of it) or several buildings geographically grouped in a campus. A building network can be composed of subnets, where a subnet covers a floor, an area on the floor, or a given functionality (e.g., security cameras).

Some of the sensors in Building Automation Systems (for example, fire alarms or security systems) register, record, and transfer critical alarm information and therefore must be resilient to events like loss of power or security attacks. This leads to the requirement that some components and subsystems operate in constrained conditions and are separately certified. Also, in some environments, the malfunctioning of a control system (like temperature control) needs to be reported in the shortest possible time. Complex control systems can misbehave, and their critical status reporting and safety algorithms need to be basic and robust and perform even in critical conditions.

Building automation solutions are deployed in some cases in newly designed buildings; in other cases, they are deployed over existing infrastructures. In the first case, there is a broader range of possible solutions that can be planned for the infrastructure of the building. In the second case, the solution needs to be deployed over an existing structure, taking into account factors like existing wiring, distance limitations, and the propagation of radio signals over walls and floors.
As a result, some of the existing wireless solutions (e.g., IEEE 802.11 or IEEE 802.15) may be deployed. In mission-critical or security-sensitive environments, and in cases where link failures happen often, topologies that allow for reconfiguration of the network and connection continuity may be required. Some of the sensors deployed in building automation may be very simple constrained devices, for which class 0 or class 1 may be assumed.

For lighting applications, groups of lights must be defined and managed. Commands to a group of lights must arrive within 200 ms at all destinations. The installation and operation of a building network have different requirements. During installation, many stand-alone networks of a few to 100 nodes co-exist without a connection to the backbone. During this phase, the nodes are identified with a network identifier related to their physical location. Devices are accessed from an installation tool to connect them to the network in a secure fashion. During installation, the setting of parameters to common values to enable interoperability may occur (e.g., Trickle parameter values). During operation, the networks are connected to the backbone while maintaining the network identifier to physical location relation. Network parameters like address and name are stored in DNS. The names can assist in determining the physical location of the device.

3.6. Energy Management

The EMAN working group developed [I-D.ietf-eman-framework], which defines a framework for providing Energy Management for devices within or connected to communication networks.
This document observes that one of the challenges of energy management is that a power distribution network is responsible for the supply of energy to various devices and components, while a separate communication network is typically used to monitor and control the power distribution network. Devices that have energy management capability are defined as Energy Devices, and identified components within a device (Energy Device Components) can be monitored for parameters like Power, Energy, Demand, and Power Quality. If a device contains batteries, they can also be monitored and managed.

Energy devices differ in complexity and may include basic sensors or switches, specialized electrical meters, or power distribution units (PDUs), as well as subsystems inside network devices (routers, network switches) or home or industrial appliances. An Energy Management System is a combination of hardware and software used to administer a network with the primary purpose being Energy Management. The operators of such a system are either the utility providers or customers that aim to control and reduce energy consumption and the associated costs. The topology in use differs, and the deployment can cover areas from small surfaces (individual homes) to large geographical areas. The EMAN requirements document [I-D.ietf-eman-requirements] discusses the requirements for energy management concerning monitoring and control functions.

It is assumed that Energy Management will apply to a large range of devices of all classes and network topologies. Specific resource monitoring, like battery utilization and availability, may be specific to devices with lower physical resources (device classes C0 or C1).

Energy Management is especially relevant to the Smart Grid.
A Smart Grid is an electrical grid that uses data networks to gather and act on energy and power-related information in an automated fashion, with the goal to improve the efficiency, reliability, economics, and sustainability of the production and distribution of electricity. As such, a Smart Grid provides sustainable and reliable generation, transmission, distribution, storage, and consumption of electrical energy based on advanced energy and ICT solutions, and it enables, e.g., the following specific application areas: smart transmission systems, Demand Response/Load Management, Substation Automation, Advanced Distribution Management, Advanced Metering Infrastructure (AMI), Smart Metering, Smart Home and Building Automation, E-mobility, etc.

Smart Metering is a good example of an M2M application and can be realized as one of the vertical applications in an M2M environment. Different types of possibly wireless small meters altogether produce a huge amount of data, which is collected by a central entity and processed by an application server. The M2M infrastructure can be provided by a mobile network operator, as the meters in urban areas will most likely have a cellular or WiMAX radio.

The Smart Grid is built on a distributed and heterogeneous network and can use a combination of diverse networking technologies, such as wireless access technologies (WiMAX, cellular, etc.), wireline and Internet technologies (e.g., IP/MPLS, Ethernet, SDH/PDH over fiber optic, etc.), as well as low-power radio technologies enabling the networking of smart meters, home appliances, and constrained devices (e.g., BT-LE, ZigBee, Z-Wave, Wi-Fi, etc.). The operational effectiveness of the smart grid is highly dependent on a robust, two-way, secure, and reliable communications network with suitable availability.
The management of a distributed system like the smart grid requires end-to-end management of, and information exchange through, different types of networks. However, as of today, there is no integrated smart grid management approach and no common smart grid information model available. Specific smart grid applications or network islands use their own management mechanisms. For example, the management of smart meters depends very much on the AMI environment into which they have been integrated and the networking technologies they are using. In general, smart meters only seldom need reconfiguration, and they send a small amount of redundant data to a central entity. For a discussion on the management needs of an AMI network, see Section 3.11. The management needs for Smart Home and Building Automation are discussed in Section 3.4 and Section 3.5.

3.7. Transport Applications

Transport Application is a generic term for the integrated application of communications, control, and information processing in a transportation system. Transport telematics or vehicle telematics are used as terms for the group of technologies that support transportation systems. Transport applications running on such a transportation system cover all modes of transport and consider all elements of the transportation system, i.e., the vehicle, the infrastructure, and the driver or user, interacting together dynamically. The overall aim is to improve decision making, often in real time, by transport network controllers and other users, thereby improving the operation of the entire transport system. As such, transport applications can be seen as one of the important M2M service scenarios involving manifold small devices.
The definition encompasses a broad array of techniques and approaches that may be achieved through stand-alone technological applications or as enhancements to other transportation communication schemes. Examples of transport applications are inter- and intra-vehicle communication, smart traffic control, smart parking, electronic toll collection systems, logistics and fleet management, vehicle control, and safety and road assistance.

As a distributed system, transport applications require end-to-end management of different types of networks. It is likely that constrained devices in a network (e.g., a moving in-car network) have to be controlled by an application running on an application server in the network of a service provider. Such a highly distributed network including mobile devices on vehicles is assumed to include a wireless access network using diverse long-distance wireless technologies such as WiMAX, 3G/LTE, or satellite communication, e.g., based on an embedded hardware module. As a result, the management of constrained devices in the transport system might need to be planned top-down and might need to use data models imposed by and defined on the application layer. The assumed device classes in use are mainly C2 devices. In cases where an in-vehicle network is involved, C1 devices with limited capabilities and a short-distance constrained radio network, e.g., IEEE 802.15.4, might additionally be used.

Management responsibility typically rests within the organization running the transport application. The constrained devices in a moving transport network might be initially configured in a factory, and a reconfiguration might be needed only rarely. New devices might be integrated in an ad hoc manner based on self-management and self-configuration capabilities.
Monitoring and data exchange might need to be done via a gateway entity connected to the back-end transport infrastructure. The devices and entities in the transport infrastructure need to be monitored more frequently and may be able to communicate with a higher data rate. The connectivity of such entities does not necessarily need to be wireless. The time scale for detecting and recording failures in a moving transport network is likely measured in hours, and repairs might easily take days. It is likely that a self-healing feature would be used locally.

3.8. Infrastructure Monitoring

Infrastructure monitoring is concerned with the monitoring of infrastructures such as bridges, railway tracks, or (offshore) windmills. The primary goal is usually to detect any events or changes of the structural conditions that can impact the risk and safety of the infrastructure being monitored. A secondary goal is to schedule repair and maintenance activities in a cost-effective manner.

The infrastructure to monitor might be in a factory or spread over a wider area that is difficult to access. As such, the network in use might be based on a combination of fixed and wireless technologies that use robust networking equipment and support reliable communication. It is likely that constrained devices in such a network are mainly C2 devices and have to be controlled centrally by an application running on a server. In case such a distributed network is widely spread, the wireless devices might use diverse long-distance wireless technologies such as WiMAX or 3G/LTE, e.g., based on embedded hardware modules. In cases where an in-building network is involved, the network can be based on Ethernet or wireless technologies suitable for in-building usage.
The management of infrastructure monitoring applications is primarily concerned with monitoring the functioning of the system. Infrastructure monitoring devices are typically rolled out and installed by dedicated experts, and changes are rare since the infrastructure itself changes rarely. However, monitoring devices are often deployed in unsupervised environments, and hence special attention must be given to protecting the devices from being modified.

Management responsibility typically rests with the organization owning the infrastructure or responsible for its operation. The time scale for detecting and recording failures is likely measured in hours, and repairs might easily take days. However, certain events (e.g., natural disasters) may require that status information be obtained much more quickly and that replacements of failed sensors can be rolled out quickly (or redundant sensors be activated quickly). In case the devices are difficult to access, a self-healing feature on the device might become necessary.

3.9. Community Network Applications

Community networks are composed of constrained routers in a multi-hop mesh topology, communicating over a lossy, and often wireless, channel. While the routers are mostly non-mobile, the topology may be very dynamic because of fluctuations in link quality of the (wireless) channel caused by, e.g., obstacles or other nearby radio transmissions. Depending on the routers that are used in the community network, the resources of the routers (memory, CPU) may be more or less constrained: available resources may range from only a few kilobytes of RAM to several megabytes or more, and CPUs may be small and embedded, or more powerful general-purpose processors.
Examples of such community networks are the FunkFeuer network (Vienna, Austria), FreiFunk (Berlin, Germany), Seattle Wireless (Seattle, USA), and AWMN (Athens, Greece). These community networks are public and non-regulated, allowing their users to connect to each other and, through an uplink to an ISP, to the Internet. No fee, other than the initial purchase of a wireless router, is charged for these services. Applications of these community networks can be diverse, e.g., location-based services, free Internet access, file sharing between users, distributed chat services, social networking, video sharing, etc.

As an example of a community network, the FunkFeuer network comprises several hundred routers, many of which have several radio interfaces (with omnidirectional and some directed antennas). The routers of the network are small-sized wireless routers, such as the Linksys WRT54GL, available in 2011 for less than 50 Euros. These routers, with 16 MB of RAM and 264 MHz of CPU power, are mounted on the rooftops of the users. When new users want to connect to the network, they acquire a wireless router, install the appropriate firmware and routing protocol, and mount the router on the rooftop. IP addresses for the router are assigned manually from a list of addresses (because of the lack of autoconfiguration standards for mesh networks in the IETF).

While the routers are non-mobile, fluctuations in link quality require an ad hoc routing protocol that allows for quick convergence to reflect the effective topology of the network (such as NHDP [RFC6130] and OLSRv2 [I-D.ietf-manet-olsrv2], developed in the MANET WG).
Usually, no human interaction is required for these protocols, 1165 as all variable parameters required by the routing protocol are 1166 either negotiated in the control traffic exchange, or are only of 1167 local importance to each router (i.e. do not influence 1168 interoperability). However, external management and monitoring of an 1169 ad hoc routing protocol may be desirable to optimize parameters of 1170 the routing protocol. Such an optimization may lead to a more stable 1171 perceived topology and to a lower control traffic overhead, and 1172 therefore to a higher delivery success ratio of data packets, a lower 1173 end-to-end delay, and less unnecessary bandwidth and energy usage. 1175 Different use cases for the management of community networks are 1176 possible: 1178 o A single Network Management Station (NMS), e.g., a border gateway 1179 providing connectivity to the Internet, needs to manage or 1180 monitor routers in the community network, in order to 1181 investigate problems (monitoring) or to improve performance by 1182 changing parameters (managing). As the topology of the network is 1183 dynamic, constant connectivity of each router towards the 1184 management station cannot be guaranteed. Current network 1185 management protocols, such as SNMP and NETCONF, may be used (e.g., 1186 using interfaces such as the NHDP-MIB [RFC6779]). However, when 1187 routers in the community network are constrained, existing 1188 protocols may require too many resources in terms of memory and 1189 CPU; and more importantly, the bandwidth requirements may exceed 1190 the available channel capacity in wireless mesh networks. 1191 Moreover, management and monitoring may be infeasible if the 1192 connection between the NMS and the routers is frequently 1193 interrupted. 1195 o Distributed network monitoring, in which more than one 1196 management station monitors or manages other routers.
Because 1197 connectivity to a server cannot be guaranteed at all times, a 1198 distributed approach may provide a higher reliability, at the cost 1199 of increased complexity. Currently, no IETF standard exists for 1200 distributed monitoring and management. 1202 o Monitoring and management of a whole network or a group of 1203 routers. Monitoring the performance of a community network may 1204 require more information than what can be acquired from a single 1205 router using a network management protocol. Statistics, such as 1206 topology changes over time, data throughput along certain routing 1207 paths, congestion etc., are of interest for a group of routers (or 1208 the routing domain) as a whole. As of 2012, no IETF standard 1209 allows for monitoring or managing whole networks, instead of 1210 single routers. 1212 3.10. Mobile Applications 1214 M2M services are increasingly provided by mobile service providers as 1215 numerous devices, such as home appliances, utility meters, cars, video 1216 surveillance cameras, and health monitors, are connected with mobile 1217 broadband technologies. This diverse range of machines brings new 1218 network and service requirements and challenges. Different 1219 applications, e.g., in a home appliance or in-car network, use 1220 Bluetooth, Wi-Fi, or ZigBee and connect to a cellular module acting as 1221 a gateway between the constrained environment and the mobile cellular 1222 network. 1224 Such a gateway might provide different options for the connectivity 1225 of mobile networks and constrained devices, e.g.: 1227 o a smart phone with 3G/4G and WLAN radio might use BT-LE to connect 1228 to the devices in a home area network, 1230 o a femtocell might be combined with home gateway functionality 1231 acting as a low-power cellular base station connecting smart 1232 devices to the application server of a mobile service provider,
1234 o an embedded cellular module with LTE radio connecting the devices 1235 in the car network with the server running the telematics service, 1237 o an M2M gateway connected to the mobile operator network supporting 1238 diverse IoT connectivity technologies including ZigBee and CoAP 1239 over 6LoWPAN over IEEE 802.15.4. 1241 Common to all scenarios above is that they are embedded in a service 1242 and connected to a network provided by a mobile service provider. 1243 Usually there is a hierarchical deployment and management topology in 1244 place where different parts of the network are managed by different 1245 management entities and the number of devices to manage is high (e.g. 1246 many thousands). In general, the network comprises devices of 1247 manifold types and sizes, matching different device classes. As 1248 such, the managing entity needs to be prepared to manage devices with 1249 diverse capabilities using different communication or management 1250 protocols. In case the devices are directly connected to a gateway 1251 they most likely are managed by a management entity integrated with 1252 the gateway, which itself is part of the Network Management System 1253 (NMS) run by the mobile operator. Smart phones or embedded modules 1254 connected to a gateway might themselves be in charge of managing the 1255 devices at their level. The initial and subsequent configuration of 1256 such a device is mainly based on self-configuration and is triggered 1257 by the device itself. 1259 The challenges in the management of devices in a mobile application 1260 are manifold. Firstly, the issues caused by device mobility 1261 need to be taken into consideration. While the cellular devices are 1262 moving around or roaming between different regional networks, they 1263 should report their status to the corresponding management entities 1264 with regard to their proximity and management hierarchy.
Secondly, a 1265 variety of device troubleshooting information needs to be reported to 1266 the management system in order to provide accurate service to the 1267 customer. Last but not least, the NMS and the management 1268 protocol used need to be tailored to keep the cellular devices lightweight 1269 and as energy efficient as possible. 1271 The data models used in these scenarios are mostly derived from the 1272 models of the operator NMS and might be used to monitor the status of 1273 the devices and to exchange the data sent by or read from the 1274 devices. The gateway might be in charge of filtering and aggregating 1275 the data received from the device as the information sent by the 1276 device might be mostly redundant. 1278 3.11. Automated Metering Infrastructure (AMI) 1280 An AMI network enables an electric utility to retrieve frequent 1281 electric usage data from each electric meter installed at a 1282 customer's home or business. With an AMI network, a utility can also 1283 receive immediate notification of power outages when they occur, 1284 directly from the electric meters that are experiencing those 1285 outages. In addition, if the AMI network is designed to be open and 1286 extensible, it could serve as the backbone for communicating with 1287 other distribution automation devices besides meters, which could 1288 include transformers and reclosers. 1290 In this use case, each meter in the AMI network contains a 1291 constrained device. These devices are typically C2 devices. Each 1292 meter connects to a constrained mesh network with a low-bandwidth 1293 radio. These radios can be 50, 150, or 200 kbps at raw link speed, 1294 but actual network throughput may be significantly lower due to 1295 forward error correction, multihop delays, MAC delays, lossy links, 1296 and protocol overhead.
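The gap between raw link speed and actual throughput can be made concrete with a back-of-the-envelope estimate. The raw rates (50, 150, 200 kbps) come from the text above; every overhead factor in this sketch (FEC coding rate, MAC efficiency, protocol header overhead, per-hop capacity sharing) is an assumed value chosen purely for illustration:

```python
# Back-of-the-envelope goodput estimate for an AMI mesh path.
# All overhead factors are illustrative assumptions, not measured
# values from any particular AMI deployment.

def effective_goodput_kbps(raw_kbps, fec_rate=0.5, mac_efficiency=0.6,
                           protocol_overhead=0.25, hops=5):
    """Estimate end-to-end goodput across a multihop mesh path.

    fec_rate:          fraction of bits left for payload after forward
                       error correction (assumed)
    mac_efficiency:    share of airtime usable after MAC delays and
                       contention on a lossy channel (assumed)
    protocol_overhead: fraction of remaining bits spent on protocol
                       headers (assumed)
    hops:              on a shared channel, capacity is roughly divided
                       among the hops of the path (assumed)
    """
    per_hop = raw_kbps * fec_rate * mac_efficiency * (1 - protocol_overhead)
    return per_hop / hops

for raw in (50, 150, 200):
    print(f"{raw} kbps raw -> ~{effective_goodput_kbps(raw):.2f} kbps end-to-end")
```

Under these assumptions, even a 200 kbps raw link yields only a few kbps of end-to-end goodput, which is why the text stresses that actual throughput may be significantly lower than the raw link speed.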
1298 The constrained devices are used to connect the metering logic with 1299 the network, so that usage data and outage notifications can be sent 1300 back to the utility's headend systems over the network. These 1301 headend systems are located in a data center managed by the utility, 1302 and may include meter data collection systems, meter data management 1303 systems, and outage management systems. 1305 The meters are connected to a mesh network, and each meter can act 1306 both as a source of traffic and as a router for other meters' traffic. 1307 In a typical AMI application, smaller amounts of traffic (read 1308 requests, configuration) flow "downstream" from the headend to the 1309 mesh, and larger amounts of traffic flow "upstream" from the mesh to 1310 the headend. However, during a firmware update operation, larger 1311 amounts of traffic might flow downstream while smaller amounts flow 1312 upstream. Other applications that make use of the AMI network may 1313 have their own distinct traffic flows. 1315 The mesh network is anchored by a collection of higher-end devices, 1316 which contain a mesh radio that connects to the constrained network 1317 as well as a backhaul link that connects to a less-constrained 1318 network. The backhaul link could be cellular, WiMAX, or Ethernet, 1319 depending on the backhaul networking technology that the utility has 1320 chosen. These higher-end devices (termed "routers" in this use case) 1321 are typically installed on utility poles throughout the service 1322 territory. Router devices are typically less constrained than 1323 meters, and often contain the full routing table for all the 1324 endpoints routing through them. 1326 In this use case, the utility typically installs on the order of 1000 1327 meters per router. The collection of meters in a local 1328 network that route through a specific router is called, in this 1329 use case, a Local Meter Network (LMN).
When powered on, each meter is 1330 designed to discover the nearby LMNs, select the optimal LMN to join, 1331 and select the optimal meters in that LMN to route through when 1332 sending data to the headend. After joining the LMN, the meter is 1333 designed to continuously monitor and optimize its connection to the 1334 LMN, and it may change routes and LMNs as needed. 1336 Each LMN may be configured, e.g., to share an encryption key, providing 1337 confidentiality for all data traffic within the LMN. This key may be 1338 obtained by a meter only after an end-to-end authentication process 1339 based on certificates, ensuring that only authorized and 1340 authenticated meters are allowed to join the LMN, and by extension, 1341 the mesh network as a whole. 1343 After joining the LMN, each endpoint obtains a routable and possibly 1344 private IPv6 address that enables end-to-end communication between 1345 the headend systems and each meter. In this use case, the meters are 1346 always-on. However, due to lossy links and network optimization, not 1347 every meter will be immediately accessible, though eventually every 1348 meter will be able to exchange data with the headend. 1350 In a large AMI deployment, there may be 10 million meters supported 1351 by 10,000 routers, spread across a very large geographic area. 1352 Within a single LMN, the meters may range between 1 and approx. 20 1353 hops from the router. During the deployment process, these meters 1354 are installed and turned on in large batches, and those meters must 1355 be authenticated, given addresses, and provisioned with any 1356 configuration information necessary for their operation. During 1357 deployment and after deployment is finished, the network must be 1358 monitored continuously and failures must be handled. Configuration 1359 parameters may need to be changed on large numbers of devices, but 1360 most of the devices will be running the same configuration.
1361 Moreover, eventually, the firmware in those meters will need to be 1362 upgraded, and this must also be done in large batches because most of 1363 the devices will be running the same firmware image. 1365 Because there may be thousands of routers, this operational model 1366 (batch deployment, automatic provisioning, continuous monitoring, 1367 batch reconfiguration, batch firmware update) should apply to 1368 the routers as well as to the constrained devices. The scale is 1369 different (thousands instead of millions) but still large enough to 1370 make individual management impractical for routers as well. 1372 3.12. MANET Concept of Operations (CONOPS) in Military 1374 The use case on the Concept of Operations (CONOPS) focuses on the 1375 configuration and monitoring of networks that are currently being 1376 used in the military and, as such, it offers insights into the 1377 network management challenges that military agencies are facing. 1379 As technology advances, military networks have become large and 1380 consist of a variety of different types of equipment that run 1381 different protocols and tools, which increases the complexity of 1382 tactical networks. Moreover, the lack of open common interfaces and 1383 Application Programming Interfaces (APIs) is often a challenge to 1384 network management. Configurations are most likely performed 1385 manually. Some devices do not support IP networks. Integration and 1386 evaluation processes are no longer trivial for a large set of protocols 1387 and tools. In addition, the majority of the protocols and tools 1388 in use are vendor proprietary, which makes integration 1389 more difficult. The main reason that leads to this problem is that 1390 there is no clearly defined standard for the MANET Concept of 1391 Operations (CONOPS).
In the following, a set of scenarios of network 1392 operations is described, which might lead to the development of 1393 network management protocols and a framework that can potentially be 1394 used in military networks. 1396 Note: The term "node" is used in the IETF for either a host or a router. 1397 In the military, a "unit" or "mobile unit" (e.g., Humvees, tanks) 1398 contains multiple routers, hosts, and/or other non-IP- 1399 based communication devices. 1401 Scenario: Parking Lot Staging Area: 1403 The Parking Lot Staging Area is the most common network operation 1404 that is currently widely used in the military prior to deployment. MANET 1405 routers, which can be identical (such as the platoon leader's or 1406 rifleman's radios), are shipped to a remote location along with a Fixed 1407 Network Operations Center (NOC), where they are all connected over 1408 traditional wired or wireless networks. The Fixed NOC then performs 1409 mass-configuration and evaluation of the configuration processes. The 1410 same concept can be applied to mobile units. Once all units are 1411 successfully configured, they are ready to be deployed. 1413 +---------+ +----------+ 1414 | Fixed |<---+------->| router_1 | 1415 | NOC | | +----------+ 1416 +---------+ | 1417 | +----------+ 1418 +------->| router_2 | 1419 | +----------+ 1420 | 0 1421 | 0 1422 | 0 1423 | +----------+ 1424 +------->| router_N | 1425 +----------+ 1427 Figure 1: Parking Lot Staging Area 1429 Scenario: Monitoring with SatCom Reachback: 1431 Monitoring with SatCom Reachback, considered another 1432 common scenario in military network operations, is similar 1433 to the Parking Lot Staging Area, except that the Fixed NOC and MANET 1434 routers are connected through a Satellite Communications (SatCom) 1435 network. In this scenario, 1436 MANET routers are augmented with SatCom reachback capabilities while 1437 On-The-Move (OTM).
Vehicles carrying MANET routers support multiple 1438 types of wireless interfaces, including High Capacity Short Range 1439 Radio interfaces as well as Low Capacity OTM SatCom interfaces. The 1440 radio interfaces are the preferred interfaces for carrying data 1441 traffic due to their high capacity, but their range limits 1442 connectivity to a Fixed NOC. Hence, OTM SatCom interfaces 1443 offer a more persistent but lower-capacity reachback capability. This 1444 persistent SatCom reachback capability offers the NOC 1445 the ability to monitor and manage the MANET routers over the air. 1446 As in the Parking Lot Staging scenario, the same concept can 1447 be applied to mobile units. 1449 --- +--+ --- 1450 / /---|SC|---/ / 1451 --- +--+ --- 1452 +---------+ | 1453 | Fixed |<---------------------+ 1454 | NOC | +--------------| 1455 +---------+ | +-------------------+ 1456 | | | 1457 +----------+ | +----------+ 1458 | router_1 | +----------+ | router_N | 1459 +----------+ | | +----------+ 1460 * | | * * 1461 * +----------+ | * * 1462 *********| router_2 |*****|******* * 1463 +----------+ | * 1464 * | * 1465 * +----------+ * 1466 ********| router_3 |**** 1467 +----------+ 1469 --- SatCom links 1470 *** Radio links 1472 Figure 2: Monitoring with one-hop SatCom Reachback network 1474 Scenario: Hierarchical Management: 1476 Another reasonable scenario common to military operations in a MANET 1477 environment is the Hierarchical Management scenario. Vehicles carry 1478 a rather complex set of networking devices, including routers running 1479 MANET control protocols. In this hierarchical architecture, the 1480 MANET mobile unit has a complex internal architecture in which a 1481 local manager within the unit is responsible for local management. 1482 The local management includes management of the MANET router and 1483 control protocols, the firewall, servers, proxies, hosts and 1484 applications.
A standard management interface is 1485 required in this architecture. In addition to requiring 1486 standard management interfaces into the components comprising the 1487 MANET nodal architecture, the local manager is responsible for local 1488 monitoring and the generation of periodic reports back to the Fixed 1489 NOC. 1491 Interface 1492 | 1493 V 1494 +---------+ +-------------------------+ 1495 | Fixed | Interface | +---+ +---+ | 1496 | NOC |<---+------->| | R |--+--| F | | 1497 +---------+ | | +---+ | +---+ | 1498 | | | | +---+ | 1499 | | +---+ | +--| P | | 1500 | | | M |--+ | +---+ | 1501 | | +---+ | | 1502 | | | +---+ | 1503 | | +--| D | | 1504 | | | +---+ | 1505 | | | | 1506 | | | +---+ | 1507 | | +--| H | | 1508 | | | +---+ | 1509 | | unit_1 | 1510 | +-------------------------+ 1511 | 1512 | 1513 | +--------+ 1514 +------->| unit_2 | 1515 | +--------+ 1516 | 0 1517 | 0 1518 | 0 1519 | +--------+ 1520 +------->| unit_N | 1521 +--------+ 1523 Key: R-Router 1524 F-Firewall 1525 P-PEP (Performance Enhancing Proxy) 1526 D-Servers, e.g., DNS 1527 H-hosts 1528 M-Local Manager 1530 Figure 3: Hierarchical Management 1532 Scenario: Management over Lossy/Intermittent Links: 1534 In future military operations, standard management will be 1535 done over lossy and intermittent links, and ideally the Fixed NOC will 1536 become mobile. In this architecture, the nature and current quality 1537 of each link are distinct. However, a number of issues 1538 arise and need to be addressed: 1540 1. Common and specific configurations are undefined: 1542 A. When mass-configuring devices, the common set of configurations 1543 is undefined at this time. 1545 B. Similarly, when configuring a specific device, the set of 1546 device-specific configurations is unknown. 1548 2. Once the total number of units becomes quite large, scalability 1549 becomes an issue and needs to be addressed. 1551 3.
The states of the devices differ, and devices may be in various 1552 states of operation, e.g., ON/OFF. 1554 4. Pushing large data files over reliable transport, e.g., TCP, 1555 would be problematic. Would a new mechanism for transmitting 1556 large configurations over the air at low bandwidth be 1557 implemented? Which protocol would be used at the transport layer? 1559 5. How to validate the network configuration (and local configuration) 1560 is complex; even when to cut over is an open question. 1562 6. Security as a general issue needs to be addressed as it could be 1563 problematic in military operations. 1565 +---------+ +----------+ 1566 | Mobile |<----------->| router_1 | 1567 | NOC |?--+ +----------+ 1568 +---------+ | 1569 ^ | +----------+ 1570 | +------->| router_2 | 1571 | +----------+ 1572 | 0 1573 | 0 1574 | 0 1575 | +----------+ 1576 +---------------->| router_N | 1577 +----------+ 1579 Figure 4: Management over Lossy/intermittent Links 1581 4. Requirements on the Management of Networks with Constrained Devices 1583 This section describes the requirements, categorized by management 1584 areas listed in subsections. 1586 Note that the requirements in this section need to be seen as 1587 standalone requirements. A device might be able to provide selected 1588 requirements but might not be capable of providing all requirements at 1589 once. On the other hand, a device vendor might select a subset of the 1590 requirements to implement. As of today, this document does not 1591 recommend the realization of a profile of requirements. 1593 The following template is used for the definition of the requirements. 1595 Req-ID: An ID uniquely identifying the requirement by a three-digit number 1597 Title: The title of the requirement. 1599 Description: The rationale and description of the requirement. 1601 Source: The origin of the requirement and the matching use case or
1604 Requirement Type: Functional Requirement, Non-Functional 1605 Requirement, or Design Constraint. 1607 Device type: The device types by which this requirement can be 1608 supported: C0, C1 and/or C2. 1610 Priority: The priority of the requirement showing the importance: 1611 Mandatory (M), Optional (O), Conditional (C). 1613 4.1. Management Architecture/System 1615 Req-ID: 4.1.001 1617 Title: Support multiple device classes within a single network. 1619 Description: Larger networks are usually made up of devices 1620 belonging to different device classes (e.g., constrained mesh 1621 endpoints and less constrained routers) that work together. 1622 Hence, the management architecture must be applicable to networks 1623 that have a mix of different device classes. See Section 3 of 1624 [LWIG-TERMS] for the definition of Constrained Device Classes. 1626 Source: All use cases. 1628 Requirement Type: Non-Functional Requirement 1630 Device type: Managing and intermediary entities. 1632 Priority: Mandatory 1634 --- 1636 Req-ID: 4.1.002 1638 Title: Management scalability. 1640 Description: The management architecture must be able to scale with 1641 the number of devices involved and operate efficiently in any 1642 network size and topology. This implies that, e.g., the managing 1643 entity is able to handle huge amounts of device monitoring data and 1644 the management protocol is not sensitive to a decrease of the 1645 time between two client requests. To achieve good scalability, 1646 caching, in-network data aggregation, and 1647 hierarchical management models may be used. 1649 Source: General requirement for all use cases to enable large scale 1650 networks. 1652 Requirement Type: Design Constraint 1654 Device type: C0, C1, and C2 1656 Priority: Mandatory 1658 --- 1660 Req-ID: 4.1.003 1662 Title: Hierarchical management 1664 Description: Provide a means of hierarchical management, i.e.
1665 provide intermediary management entities on different levels, 1666 which can take over the responsibility for the management of a 1667 sub-hierarchy of the network of constrained devices. The 1668 intermediary management entity can, e.g., support management data 1669 aggregation to handle high-frequency monitoring data or 1670 provide a caching mechanism for the uplink and downlink 1671 communication. Hierarchical management contributes to management 1672 scalability. 1674 Source: Use cases where a huge number of devices are deployed with a 1675 hierarchical topology. 1677 Requirement Type: Non-Functional Requirement 1679 Device type: Managing and intermediary entities. 1681 Priority: Optional 1683 --- 1685 Req-ID: 4.1.004 1687 Title: Minimize state maintained on constrained devices. 1689 Description: The amount of state that needs to be maintained on 1690 constrained devices should be minimized. This is important in 1691 order to save memory (especially relevant for C0 and C1 devices) 1692 and in order to allow devices to restart, for example, to apply 1693 configuration changes or to recover from extended periods of 1694 inactivity. One way to achieve this is to adopt a RESTful 1695 architecture that minimizes the amount of state maintained by 1696 managed constrained devices and that makes resources of a device 1697 addressable via URIs. 1699 Source: Basic requirement which concerns all use cases. 1701 Requirement Type: Non-Functional Requirement 1703 Device type: C0, C1, and C2 1705 Priority: Mandatory 1707 --- 1709 Req-ID: 4.1.005 1711 Title: Automatic re-synchronization with eventual consistency. 1713 Description: To support large scale networks, where some constrained 1714 devices may be offline at any point in time, it is necessary to 1715 distribute configuration parameters in a way that allows temporary 1716 inconsistencies but eventually converges, after a sufficiently 1717 long period of time without further changes, towards global 1718 consistency.
1720 Source: Use cases with large scale networks with many devices. 1722 Requirement Type: Functional Requirement 1724 Device type: C0, C1, and C2 1726 Priority: Mandatory 1728 --- 1730 Req-ID: 4.1.006 1732 Title: Support for lossy links and unreachable devices. 1734 Description: Some constrained devices will only be able to support 1735 lossy and unreliable links characterized by a limited data rate, a 1736 high latency, and a high transmission error rate. Furthermore, 1737 constrained devices often duty-cycle their radio or the whole 1738 device in order to save energy. In both cases the management 1739 system must not assume that constrained devices are always 1740 reachable. The management protocol(s) must act gracefully if a 1741 constrained device is not reachable and provide a high degree of 1742 resilience. Intermediaries may be used that provide information 1743 for devices currently inactive or that take responsibility to re- 1744 synchronize devices when they become reachable again after an 1745 extended offline period. 1747 Source: Basic requirement for constrained networks with unreliable 1748 links and constrained devices which sleep to save energy. 1750 Requirement Type: Design Constraint 1752 Device type: C0, C1, and C2 1754 Priority: Mandatory 1756 --- 1758 Req-ID: 4.1.007 1760 Title: Network-wide configuration 1762 Description: Provide means by which the behavior of the network can 1763 be specified at a level of abstraction (network-wide 1764 configuration) higher than a set of configuration information 1765 specific to individual devices. It is useful to derive the device- 1766 specific configuration from the network-wide configuration. The 1767 identification of the relevant subset of the policies to be 1768 provisioned is done according to the capabilities of each device and 1769 can be obtained from a pre-configured data repository. Such a 1770 repository can be used to configure pre-defined device or protocol 1771 parameters for the whole network.
Furthermore, such a network- 1772 wide view can be used to monitor and manage a group of routers or 1773 a whole network. E.g., monitoring the performance of a network 1774 requires more information than can be acquired 1775 from a single router using a management protocol. 1777 Source: In general, all use cases that want to configure the 1778 network and its devices based on a network view in a top-down 1779 manner. 1781 Requirement Type: Non-Functional Requirement 1783 Device type: C0, C1, and C2 1785 Priority: Optional 1787 --- 1789 Req-ID: 4.1.008 1791 Title: Distributed Management 1793 Description: Provide a means of simple distributed management, where 1794 a constrained network can be managed or monitored by more than one 1795 manager. Since the connectivity to a server cannot be guaranteed 1796 at all times, a distributed approach may provide a higher 1797 reliability, at the cost of increased complexity. This 1798 requirement implies the handling of data consistency in case of 1799 concurrent read and write access to the device datastore. It 1800 might also happen that no management (configuration) server is 1801 accessible and the only reachable node is a peer device. In this 1802 case the device should be able to obtain its configuration from 1803 peer devices. 1805 Source: Use cases where the number of devices to manage is high. 1807 Requirement Type: Non-Functional Requirement 1809 Device type: C1 and C2 1811 Priority: Optional 1813 4.2. Management protocols and data model 1815 Req-ID: 4.2.001 1817 Title: Modular implementation of management protocols 1819 Description: Management protocols should allow modular 1820 implementations, i.e., it should be possible to implement only a 1821 basic set of protocol primitives on highly constrained devices 1822 while devices with additional resources may provide more support 1823 for additional protocol primitives.
It should be possible to 1824 discover the management protocol primitives supported by a device. 1826 Source: Basic requirement interesting for all use cases. 1828 Requirement Type: Non-Functional Requirement 1830 Device type: C0, C1, and C2 1832 Priority: Mandatory 1834 --- 1836 Req-ID: 4.2.002 1838 Title: Compact encoding of management data 1840 Description: The encoding of management data should be compact and 1841 space efficient, enabling small message sizes. 1843 Source: General requirement to save memory for the receiver buffer 1844 and on-air bandwidth. 1846 Requirement Type: Functional Requirement 1848 Device type: C0, C1, and C2 1850 Priority: Mandatory 1852 --- 1854 Req-ID: 4.2.003 1856 Title: Compression of management data or complete messages 1857 Description: Management data exchanges can be further optimized by 1858 applying data compression techniques or delta encoding techniques. 1859 Compression typically requires additional code size and some 1860 additional buffers and/or the maintenance of some additional state 1861 information. For C0 devices compression may not be feasible. As 1862 such, this requirement is marked as optional. 1864 Source: Use cases where it is beneficial to reduce transmission time 1865 and bandwidth, e.g., mobile applications which require saving on- 1866 air bandwidth. 1868 Requirement Type: Functional Requirement 1870 Device type: C1 and C2 1872 Priority: Optional 1874 --- 1876 Req-ID: 4.2.004 1878 Title: Mapping of management protocol interactions. 1880 Description: It is desirable to have a loss-less automated mapping 1881 between the management protocol used to manage constrained devices 1882 and the management protocols used to manage regular devices. In 1883 the ideal case, the same core management protocol can be used with 1884 certain restrictions taking into account the resource limitations 1885 of constrained devices. However, for very resource constrained 1886 devices, this goal might not be achievable.
Hence this 1887 requirement is marked optional for device class C2. 1889 Source: Use cases where high-frequency interaction with the 1890 management system of a non-constrained network is required. 1892 Requirement Type: Functional Requirement 1894 Device type: C2 1896 Priority: Optional 1898 --- 1900 Req-ID: 4.2.005 1901 Title: Consistency of data models with the underlying information 1902 model. 1904 Description: The data models used by the management protocol must be 1905 consistent with the information model used to define data models 1906 for non-constrained networks. This is essential to facilitate the 1907 integration of the management of constrained networks with the 1908 management of non-constrained networks. Using an underlying 1909 information model for future data model design furthermore enables 1910 top-down model design and model reuse as well as data 1911 interoperability (i.e., exchange of management information between 1912 the constrained and non-constrained networks). This is a strong 1913 requirement, despite the fact that the underlying information 1914 models are often not explicitly documented in the IETF. 1916 Source: General requirement to support data interoperability, 1917 consistency and model reuse. 1919 Requirement Type: Non-Functional Requirement 1921 Device type: C0, C1, and C2 1923 Priority: Mandatory 1925 --- 1927 Req-ID: 4.2.006 1929 Title: Loss-less mapping of management data models. 1931 Description: It is desirable to have a loss-less automated mapping 1932 between the management data models used to manage regular devices 1933 and the management data models used for managing constrained 1934 devices. In the ideal case, the same core data models can be used 1935 with certain restrictions taking into account the resource 1936 limitations of constrained devices. However, for very resource 1937 constrained devices, this goal might not be achievable. Hence 1938 this requirement is marked optional for device class C2.
1940 Source: Use cases where consistent data exchange with the management 1941 system of a non-constrained network is required. 1943 Requirement Type: Functional Requirement 1945 Device type: C2 1946 Priority: Optional 1948 --- 1950 Req-ID: 4.2.007 1952 Title: Protocol extensibility 1954 Description: Provide means of extensibility for the management 1955 protocol, i.e. by adding new protocol messages or mechanisms that 1956 can deal effectively with changing requirements on supported 1957 message and data types, without causing interoperability problems 1958 or having to replace/update large amounts of deployed devices. 1960 Source: Basic requirement useful for all use cases. 1962 Requirement Type: Functional Requirement 1964 Device type: C0, C1, and C2 1966 Priority: Mandatory 1968 4.3. Configuration management 1970 Req-ID: 4.3.001 1972 Title: Self-configuration capability 1974 Description: Automatic configuration and re-configuration of devices 1975 without manual intervention. Compared to the traditional 1976 management of devices, where the management application is the 1977 central entity configuring the devices, in the auto-configuration 1978 scenario the device is the active part and initiates the 1979 configuration process. Self-configuration can be initiated during 1980 the initial configuration or for subsequent configurations, where 1981 the configuration data needs to be refreshed. Self-configuration 1982 should also be supported during the initialization phase or in the 1983 event of failures, where prior knowledge of the network topology 1984 is not available or the topology of the network is uncertain. 1986 Source: In general all use cases requiring easy deployment and plug- 1987 and-play behavior as well as easy maintenance of many constrained 1988 devices. 1990 Requirement Type: Functional Requirement 1991 Device type: C0, C1, and C2 1993 Priority: Mandatory for C0 and C1, Optional for C2.
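As a minimal illustration of the self-configuration capability above (the device, rather than the management application, initiates the configuration process, both initially and when configuration data needs to be refreshed), the following Python sketch shows one possible device-side loop. The class name, the fetch_config callable, and the retry/refresh policy are illustrative assumptions, not part of this requirement.

```python
import time

# Illustrative sketch only (not part of the draft): a device-initiated
# configuration loop in the spirit of Req-ID 4.3.001.  The class name,
# the fetch_config callable, and the retry/refresh policy are all
# hypothetical; a real device would contact a configuration server.

class SelfConfiguringDevice:
    def __init__(self, fetch_config, refresh_after=3600):
        self.fetch_config = fetch_config    # device-initiated config request
        self.refresh_after = refresh_after  # seconds before a config refresh
        self.config = None
        self.configured_at = None

    def bootstrap(self, retries=3):
        """Initial configuration: the device, not the manager, asks for it."""
        for attempt in range(retries):
            try:
                self.config = self.fetch_config()
                self.configured_at = time.monotonic()
                return True
            except OSError:
                time.sleep(2 ** attempt)  # back off before retrying
        return False  # caller falls back to factory defaults (not shown)

    def maybe_refresh(self):
        """Subsequent configuration: refresh stale configuration data."""
        if self.configured_at is None:
            return self.bootstrap()
        if time.monotonic() - self.configured_at >= self.refresh_after:
            return self.bootstrap(retries=1)
        return True

# Example with a stubbed configuration source:
device = SelfConfiguringDevice(lambda: {"report-interval": 60})
assert device.bootstrap()
assert device.config == {"report-interval": 60}
```

In a deployment the stubbed callable would be replaced by an actual request to a configuration server, and a failed bootstrap would fall back to factory defaults, as the requirement's initialization-phase clause suggests.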
1995 --- 1997 Req-ID: 4.3.002 1999 Title: Capability Discovery 2001 Description: Enable the discovery of supported optional management 2002 capabilities of a device and their exposure via at least one 2003 protocol and/or data model. 2005 Source: Use cases where the device interaction with other devices or 2006 applications is a function of the level of support for its 2007 capabilities. 2009 Requirement Type: Functional Requirement 2011 Device type: C1 and C2 2013 Priority: Optional 2015 --- 2017 Req-ID: 4.3.003 2019 Title: Asynchronous Transaction Support 2021 Description: Provide configuration management with asynchronous 2022 transaction support. Configuration operations must support a 2023 transactional model, with asynchronous indications that the 2024 transaction was completed. 2026 Source: Use cases that require transaction-oriented processing 2027 because of reliability or distributed architecture functional 2028 requirements. 2030 Requirement Type: Functional Requirement 2032 Device type: C1 and C2 2034 Priority: Conditional 2036 --- 2037 Req-ID: 4.3.004 2039 Title: Network reconfiguration 2041 Description: Provide a means of iterative network reconfiguration in 2042 order to recover the network functionality from node and 2043 communication faults. The network reconfiguration can be failure- 2044 driven and self-initiated (automatic reconfiguration). The 2045 network reconfiguration can also be performed on the whole 2046 hierarchical structure of a network (network topology). 2048 Source: Practically all use cases, as network connectivity is a 2049 basic requirement. 2051 Requirement Type: Functional Requirement 2053 Device type: C0, C1, and C2 2055 Priority: Mandatory, Conditional if the network has a hierarchical 2056 topology. 2058 4.4.
Monitoring functionality 2060 Req-ID: 4.4.001 2062 Title: Device status monitoring 2064 Description: Provide a monitoring function to collect 2065 information about device status and expose it via at least one 2066 management interface. The device monitoring might make use of the 2067 hierarchical management through the intermediary entities and the 2068 data caching mechanism. The device monitoring might also make use 2069 of neighbor-monitoring (fault detection in the local network) to 2070 support fast fault detection and recovery, e.g. in a scenario 2071 where a managing entity is unreachable and a neighbor can take 2072 over the monitoring responsibility. 2074 Source: All use cases 2076 Requirement Type: Functional Requirement 2078 Device type: C0, C1, and C2 2080 Priority: Mandatory, Conditional for neighbor-monitoring. 2082 --- 2083 Req-ID: 4.4.002 2085 Title: Energy status monitoring 2087 Description: Provide a monitoring function to collect and expose 2088 information about device energy parameters and usage (e.g. battery 2089 level and communication power). 2091 Source: Use case Energy Management 2093 Requirement Type: Functional Requirement 2095 Device type: C0, C1, and C2 2097 Priority: Mandatory for energy reporting devices, Optional for the 2098 rest 2100 --- 2102 Req-ID: 4.4.003 2104 Title: Monitoring of current and estimated device availability 2106 Description: Provide a monitoring function to collect and expose 2107 information about current device availability (energy, memory, 2108 computing power, forwarding plane utilization, queue buffers, 2109 etc.) and an estimate of the remaining available resources. 2111 Source: All use cases. Note that monitoring energy resources (like 2112 battery status) may be required on all kinds of devices.
2114 Requirement Type: Functional Requirement 2116 Device type: C0, C1, and C2 2118 Priority: Optional 2120 --- 2122 Req-ID: 4.4.004 2124 Title: Network status monitoring 2126 Description: Provide a monitoring function to collect and expose 2127 information related to the status of a network or network segments 2128 connected to the interfaces of the device. 2130 Source: All use cases. 2132 Requirement Type: Functional Requirement 2134 Device type: C1 and C2 2136 Priority: Optional 2138 --- 2140 Req-ID: 4.4.005 2142 Title: Self-monitoring 2144 Description: Provide a self-monitoring (local fault detection) feature 2145 for fast fault detection and recovery. 2147 Source: Use cases where the devices cannot be monitored centrally in an 2148 appropriate manner, e.g. where self-healing is required. 2150 Requirement Type: Functional Requirement 2152 Device type: C1 and C2 2154 Priority: Mandatory for C2, Optional for C1 2156 --- 2158 Req-ID: 4.4.006 2160 Title: Performance Monitoring 2162 Description: The device will provide a monitoring function to 2163 collect and expose information about the basic TBD performance of 2164 the device. The performance management functionality might make 2165 use of the hierarchical management through the intermediary 2166 devices. 2168 Source: Use cases Building Automation and Transport Applications 2170 Requirement Type: Functional Requirement 2172 Device type: C1 and C2 2174 Priority: Optional 2176 --- 2177 Req-ID: 4.4.007 2179 Title: Fault detection monitoring 2181 Description: The device will provide fault detection monitoring. 2182 The system collects information about network states in order to 2183 identify whether faults have occurred. In some cases the 2184 detection of the faults might be based on the processing and 2185 analysis of the parameters retrieved from the network or other 2186 devices. In the case of C0 devices the monitoring might be limited to 2187 checking whether the device is alive or not.
2189 Source: Use cases Environmental Monitoring, Building Automation, 2190 Energy Management, Infrastructure Monitoring 2192 Requirement Type: Functional Requirement 2194 Device type: C0, C1 and C2 2196 Priority: Optional 2198 --- 2200 Req-ID: 4.4.008 2202 Title: Passive and Reactive Monitoring 2204 Description: The device will provide passive and reactive monitoring 2205 capabilities. The system or manager collects information about 2206 device components and network states (passive monitoring) and may 2207 perform postmortem analysis of collected data. When events of 2208 interest have occurred, the system or manager can react adaptively 2209 (reactive monitoring), e.g. reconfigure the network. Typically, 2210 actions (reactions) will be executed or sent as commands by the 2211 management applications. 2213 Source: Diverse use cases relevant for device status and network 2214 state monitoring 2216 Requirement Type: Functional Requirement 2218 Device type: C2 2220 Priority: Optional 2222 --- 2223 Req-ID: 4.4.009 2225 Title: Recovery 2227 Description: Provide local, central and hierarchical recovery 2228 mechanisms (recovery is in some cases achieved by recovering the 2229 whole network of constrained devices). 2231 Source: Use cases Industrial Applications, Home and Building 2232 Automation, and Mobile Applications that involve different forms of 2233 clustering or area managers. 2235 Requirement Type: Functional Requirement 2237 Device type: C2 2239 Priority: Optional 2241 --- 2243 Req-ID: 4.4.010 2245 Title: Network topology discovery 2247 Description: Provide a network topology discovery capability (e.g. 2248 use of topology extraction algorithms to retrieve the network 2249 state) and a monitoring function to collect and expose information 2250 about the network topology.
2252 Source: Use cases Community Network Applications and Mobile 2253 Applications 2255 Requirement Type: Functional Requirement 2257 Device type: C1 and C2 2259 Priority: Optional 2261 --- 2263 Req-ID: 4.4.011 2265 Title: Notifications 2267 Description: The device will provide the capability of sending 2268 notifications on critical events and faults. 2270 Source: All use cases. 2272 Requirement Type: Functional Requirement 2274 Device type: C0, C1, and C2 2276 Priority: Mandatory for C2, Optional for C1 2278 --- 2280 Req-ID: 4.4.012 2282 Title: Logging 2284 Description: The device will provide the capability of building, 2285 keeping, and allowing retrieval of logs of events (including but 2286 not limited to critical faults and alarms). 2288 Source: Use cases Industrial Applications, Building Automation, 2289 Infrastructure Monitoring 2291 Requirement Type: Functional Requirement 2293 Device type: C2 2295 Priority: Mandatory for some medical or industrial applications, 2296 Optional otherwise 2298 4.5. Self-management 2300 Req-ID: 4.5.001 2302 Title: Self-management - Self-healing 2304 Description: Enable event-driven and/or periodic self-management 2305 functionality in a device. The device should be able to react in 2306 case of a failure, e.g. by initiating a full or partial reset and 2307 initiating a self-configuration or management data update as 2308 necessary. A device might further be able to check for failures 2309 cyclically or on a schedule to trigger self-management as 2310 necessary. It is a matter of device design, and subject to 2311 discussion, how much self-management a C1 device can support. A 2312 minimal failure detection and self-management logic is assumed to 2313 be generally useful for the self-healing of a device. 2315 Source: The requirement generally relates to all use cases in this 2316 document. 2318 Requirement Type: Functional Requirement 2320 Device type: C1 and C2 2322 Priority: Optional 2324 4.6.
Security and Access Control 2326 Req-ID: 4.6.001 2328 Title: Authentication of management system and devices. 2330 Description: Systems having a management role must be properly 2331 authenticated to the device such that the device can exercise 2332 proper access control and in particular distinguish rightful 2333 management systems from rogue systems. On the other hand, managed 2334 devices must authenticate themselves to systems having a 2335 management role such that management systems can protect 2336 themselves from rogue devices. In certain application scenarios, 2337 it is possible that a large number of devices need to be 2338 (re)started at about the same time. Protocols and authentication 2339 systems should be designed such that a large number of devices 2340 (re)starting simultaneously does not negatively impact the device 2341 authentication process. 2343 Source: Basic security requirement for all use cases. 2345 Requirement Type: Functional Requirement 2347 Device type: C0, C1, and C2 2349 Priority: Mandatory, Optional for the (re)start of a large number of 2350 devices 2352 --- 2354 Req-ID: 4.6.002 2356 Title: Support suitable security bootstrapping mechanisms 2358 Description: Mechanisms should be supported that simplify the 2359 bootstrapping of devices, i.e. the discovery of newly deployed 2360 devices, in order to add them to access control lists. 2362 Source: Basic security requirement for all use cases. 2364 Requirement Type: Functional Requirement 2366 Device type: C0, C1, and C2 2368 Priority: Mandatory 2370 --- 2372 Req-ID: 4.6.003 2374 Title: Access control on management system and devices 2376 Description: Systems acting in a management role must provide an 2377 access control mechanism that allows the security administrator to 2378 restrict which devices can access the managing system (e.g., using 2379 an access control white list of known devices).
On the other hand, 2380 managed constrained devices must provide an access control 2381 mechanism that allows the security administrator to restrict how 2382 systems in a management role can access the device (e.g., no- 2383 access, read-only access, and read-write access). 2385 Source: Basic security requirement for use cases where access 2386 control is essential. 2388 Requirement Type: Functional Requirement 2390 Device type: C0, C1, and C2 2392 Priority: Mandatory 2394 --- 2396 Req-ID: 4.6.004 2398 Title: Select cryptographic algorithms that are efficient in both 2399 code space and execution time. 2401 Description: Cryptographic algorithms have a major impact in terms 2402 of both code size and overall execution time. It is therefore 2403 necessary to select mandatory-to-implement cryptographic 2404 algorithms (like some elliptic curve algorithm) that are 2405 reasonable to implement with the available code space and that 2406 have a small impact at runtime. Furthermore, some wireless 2407 technologies (e.g., IEEE 802.15.4) require the support of certain 2408 cryptographic algorithms. It might be useful to choose algorithms 2409 that are likely to be supported in wireless chipsets for certain 2410 wireless technologies. 2412 Source: Generic requirement to reduce the footprint and CPU usage of 2413 a constrained device. 2415 Requirement Type: Non-Functional Requirement 2417 Device type: C0, C1, and C2 2419 Priority: Mandatory, Optional for hardware-supported algorithms. 2421 4.7. Energy Management 2423 Req-ID: 4.7.001 2425 Title: Management of Energy Resources 2427 Description: Enable managing power resources in the network, e.g. 2428 reduce the sampling rate of nodes with a critical battery, reduce 2429 node transmission power, put nodes to sleep, put single interfaces 2430 to sleep, or reject a management job based on available energy and 2431 on criteria pre-defined by the management 2432 application, e.g. importance levels, etc. (e.g.
a task marked as essential can be executed 2433 even if the energy level is low). The device may further 2434 implement standard data models for energy management and expose them 2435 through a management protocol interface, e.g. EMAN MIB modules 2436 and extensions. It might be necessary to downscale EMAN MIBs for 2437 use in C1 and C2 devices. 2439 Source: Use case Energy Management 2441 Requirement Type: Functional Requirement 2443 Device type: C0, C1, and C2 2445 Priority: Mandatory for the use case Energy Management, Optional 2446 otherwise. 2448 --- 2450 Req-ID: 4.7.002 2452 Title: Support of energy-optimized communication protocols 2454 Description: Use of an optimized communication protocol to minimize 2455 energy usage for the device (radio) receiver/transmitter, on-air 2456 bandwidth (protocol efficiency), and a reduced amount of data 2457 communication between nodes (this implies data aggregation and 2458 filtering but also a compact format for the transferred data). 2460 Source: Use cases Energy Management and Mobile Applications. 2462 Requirement Type: Functional Requirement 2464 Device type: C2 2466 Priority: Optional 2468 --- 2470 Req-ID: 4.7.003 2472 Title: Support for layer 2 energy-aware protocols 2474 Description: The device will support layer 2 energy management 2475 protocols (e.g. Energy-Efficient Ethernet, IEEE 802.3az) and be 2476 able to report on these. 2478 Source: Use case Energy Management 2480 Requirement Type: Functional Requirement 2482 Device type: C0, C1, and C2 2484 Priority: Optional 2486 --- 2488 Req-ID: 4.7.004 2490 Title: Dying gasp 2492 Description: When energy resources drop below the red-line level, 2493 the device will send a dying gasp notification and perform, if 2494 still possible, a graceful shutdown, including conservation of 2495 critical device configuration and status information.
2497 Source: Use case Energy Management 2499 Requirement Type: Functional Requirement 2501 Device type: C0, C1, and C2 2502 Priority: Optional 2504 4.8. SW Distribution 2506 Req-ID: 4.8.001 2508 Title: Group-based provisioning 2510 Description: Support group-based provisioning, i.e. firmware update 2511 and configuration management, of a large set of constrained 2512 devices with eventual consistency and coordinated reload times. 2513 The device should accept group-based configuration management 2514 based on bulk commands, which aim at similar configurations of a 2515 large set of constrained devices of the same type in a given 2516 group. Activation of the configuration may be based on pre-loaded 2517 sets of default values. 2519 Source: All use cases 2521 Requirement Type: Functional Requirement 2523 Device type: C0, C1, and C2 2525 Priority: Optional 2527 4.9. Traffic management 2529 Req-ID: 4.9.001 2531 Title: Congestion avoidance 2533 Description: Provide the ability to avoid congestion by modifying 2534 the device's reporting rate for periodical data (which is usually 2535 redundant) based on the importance and reliability level of the 2536 management data. This functionality is usually controlled by the 2537 managing entity, where the managing entity marks the data as 2538 important or relevant for reliability. However, reducing a 2539 device's reporting rate can also be initiated by a device if it is 2540 able to detect congestion or has insufficient buffer memory. 2542 Source: Use cases with high reporting rate and traffic, e.g. AMI or 2543 M2M. 2545 Requirement Type: Design Constraint 2546 Device type: C1 and C2 2548 Priority: Optional 2550 --- 2552 Req-ID: 4.9.002 2554 Title: Redirect traffic 2556 Description: Provide the ability for network nodes to redirect 2557 traffic from overloaded intermediary nodes in a network to another 2558 path in order to prevent congestion on a central server and in the 2559 primary network.
2561 Source: Use cases with high reporting rate and traffic, e.g. AMI or 2562 M2M. 2564 Requirement Type: Design Constraint 2566 Device type: Intermediary entity in the network. 2568 Priority: Optional 2570 --- 2572 Req-ID: 4.9.003 2574 Title: Traffic delay schemes. 2576 Description: Provide the ability to apply delay schemes to incoming 2577 and outgoing links on an overloaded intermediary node as necessary 2578 in order to reduce the amount of traffic in the network. 2580 Source: Use cases with high reporting rate and traffic, e.g. AMI or 2581 M2M. 2583 Requirement Type: Design Constraint 2585 Device type: Intermediary entity in the network. 2587 Priority: Optional 2589 4.10. Transport Layer 2590 Req-ID: 4.10.001 2592 Title: Scalable transport layer 2594 Description: Enable the use of a scalable transport layer, i.e. one not 2595 sensitive to a decrease in the time between two client requests, 2596 which is useful for applications requiring frequent access to 2597 device data. 2599 Source: Applications with high-frequency access to device data. 2601 Requirement Type: Design Constraint 2603 Device type: C0, C1 and C2 2605 Priority: Conditional, in case such scalability is a prerequisite. 2607 --- 2609 Req-ID: 4.10.002 2611 Title: Reliable unicast transport. 2613 Description: Provide reliable unicast transport of messages. 2615 Source: Generally all applications benefit from the reliability of 2616 the message transport. 2618 Requirement Type: Functional Requirement 2620 Device type: C0, C1, and C2 2622 Priority: Mandatory 2624 --- 2626 Req-ID: 4.10.003 2628 Title: Best-effort multicast 2630 Description: Provide best-effort multicast of messages, which is 2631 generally useful when devices need to discover a service provided 2632 by a server or when many devices need to be configured by a managing 2633 entity at once based on the same data model.
2635 Source: Use cases where a device needs to discover services as well 2636 as use cases with a large number of devices to manage, which are 2637 hierarchically deployed, e.g. AMI or M2M. 2639 Requirement Type: Functional Requirement 2641 Device type: C0, C1, and C2 2643 Priority: Optional 2645 Req-ID: 4.10.004 2647 Title: Secure message transport. 2649 Description: Enable secure message transport providing 2650 authentication, data integrity, and confidentiality by using existing 2651 transport layer technologies with a small footprint such as TLS/ 2652 DTLS. 2654 Source: All use cases. 2656 Requirement Type: Non-Functional Requirement 2658 Device type: C1 and C2 2660 Priority: Mandatory 2662 4.11. Implementation Requirements 2664 Req-ID: 4.11.001 2666 Title: Avoid complex application layer transactions requiring large 2667 application layer messages. 2669 Description: Complex application layer transactions tend to require 2670 large memory buffers that are typically not available on C0 or C1 2671 devices and are available on C2 devices only by limiting functionality. 2672 Furthermore, the failure of a single large transaction requires 2673 repeating the whole transaction. On constrained devices, it is 2674 often more desirable to break a large transaction down into a sequence 2675 of smaller transactions, which require fewer resources and allow 2676 progress to be made in a sequence of smaller steps. 2678 Source: Basic requirement which concerns all use cases with memory 2679 constrained devices. 2681 Requirement Type: Design Constraint 2683 Device type: C0, C1, and C2 2685 Priority: Mandatory 2687 Req-ID: 4.11.002 2689 Title: Avoid reassembly of messages at multiple layers in the 2690 protocol stack. 2692 Description: Reassembly of messages at multiple layers in the 2693 protocol stack requires buffers at multiple layers, which leads to 2694 inefficient use of memory resources.
This can be avoided by 2695 making sure the application layer, the security layer, the 2696 transport layer, the IPv6 layer and any adaptation layers are 2697 aware of the limitations of each other such that unnecessary 2698 fragmentation and reassembly can be avoided. In addition, message 2699 size constraints must be announced to protocol peers such that 2700 they can adapt and avoid sending messages that cannot be processed 2701 due to resource constraints on the receiving device. 2703 Source: Basic requirement which concerns all use cases with memory 2704 constrained devices. 2706 Requirement Type: Design Constraint 2708 Device type: C0, C1, and C2 2710 Priority: Mandatory 2712 5. IANA Considerations 2714 This document does not introduce any new code-points or namespaces 2715 for registration with IANA. 2717 Note to RFC Editor: this section may be removed on publication as an 2718 RFC. 2720 6. Security Considerations 2722 This document discusses use cases and requirements for the management of networks 2723 with constrained devices. If specific requirements for security 2724 are identified, they will be described in future versions of this 2725 document. 2727 7. Contributors 2729 The following persons made significant contributions to and reviewed this 2730 document: 2732 o Ulrich Herberg (Fujitsu Laboratories of America) contributed 2733 Section 3.9 on Community Network Applications and contributed to 2734 Section 1.3 on Class of Networks in Focus. 2736 o Peter van der Stok contributed to Section 3.5 on Building 2737 Automation. 2739 o Zhen Cao contributed to Section 3.10 on Mobile Applications. 2741 o Gilman Tolle contributed Section 3.11 on Automated Metering 2742 Infrastructure. 2744 o James Nguyen and Ulrich Herberg contributed Section 3.12 on 2745 MANET Concept of Operations (CONOPS) in Military. 2747 8. Acknowledgments 2749 The editors would like to thank the contributors and the participants 2750 on the Coman mailing list for their valuable contributions and comments.
2752 9. References 2754 9.1. Normative References 2756 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 2757 Requirement Levels", BCP 14, RFC 2119, March 1997. 2759 9.2. Informative References 2761 [RFC6632] Ersue, M. and B. Claise, "An Overview of the IETF Network 2762 Management Standards", RFC 6632, June 2012. 2764 [RFC6130] Clausen, T., Dearlove, C., and J. Dean, "Mobile Ad Hoc 2765 Network (MANET) Neighborhood Discovery Protocol (NHDP)", 2766 RFC 6130, April 2011. 2768 [RFC6779] Herberg, U., Cole, R., and I. Chakeres, "Definition of 2769 Managed Objects for the Neighborhood Discovery Protocol", 2770 RFC 6779, October 2012. 2772 [I-D.ietf-manet-olsrv2] 2773 Clausen, T., Dearlove, C., Jacquet, P., and U. Herberg, 2774 "The Optimized Link State Routing Protocol version 2", 2775 draft-ietf-manet-olsrv2-17 (work in progress), 2776 October 2012. 2778 [I-D.ietf-lwig-guidance] 2779 Bormann, C., "Guidance for Light-Weight Implementations of 2780 the Internet Protocol Suite", draft-ietf-lwig-guidance-02 2781 (work in progress), August 2012. 2783 [I-D.ietf-core-coap] 2784 Shelby, Z., Hartke, K., Bormann, C., and B. Frank, 2785 "Constrained Application Protocol (CoAP)", 2786 draft-ietf-core-coap-13 (work in progress), December 2012. 2788 [I-D.ietf-eman-framework] 2789 Claise, B., Parello, J., Schoening, B., Quittek, J., and 2790 B. Nordman, "Energy Management Framework", 2791 draft-ietf-eman-framework-06 (work in progress), 2792 October 2012. 2794 [I-D.ietf-eman-requirements] 2795 Quittek, J., Chandramouli, M., Winter, R., Dietz, T., and 2796 B. Claise, "Requirements for Energy Management", 2797 draft-ietf-eman-requirements-11 (work in progress), 2798 January 2013. 2800 [I-D.ietf-roll-terminology] 2801 Vasseur, J., "Terminology in Low power And Lossy 2802 Networks", draft-ietf-roll-terminology-10 (work in 2803 progress), January 2013. 2805 [M2MDEVCLASS] 2806 Open Mobile Alliance, "OMA M2M Device Classification 2807 v1.0", October 2012, . 
2811 [EU-IOT-A] 2812 EU Commission Seventh Framework Programme, "EU FP7 Project 2813 Internet-of-Things Architecture", . 2815 [EU-SENSEI] 2816 EU Commission Seventh Framework Programme, "EU Project 2817 SENSEI", . 2819 [EU-FI-WARE] 2820 EU Commission Future Internet Public Private Partnership 2821 (FI-PPP), "EU Project Future Internet-Core Platform", 2822 . 2824 [EU-IOT-BUTLER] 2825 EU Commission Seventh Framework Programme, "EU FP7 Project 2826 Butler Smartlife", . 2828 [LWIG-TERMS] 2829 Bormann, C., "Terminology for Constrained Node Networks", 2830 draft-bormann-lwig-terms (work in progress), 2831 November 2012. 2833 Appendix A. Related Development in other Bodies 2835 Note that over time the summary of the related work in other bodies 2836 might become outdated. 2838 A.1. ETSI TC M2M 2840 ETSI Technical Committee Machine-to-Machine (ETSI TC M2M) aims to 2841 provide an end-to-end view of M2M standardization, which enables the 2842 integration of multiple vertical M2M applications. The main goal is 2843 to overcome the current M2M market fragmentation and to reuse 2844 existing mechanisms from telecom standards, such as those from OMA or 3GPP. 2846 ETSI Release 1 is functionally frozen. The main focus is on use 2847 cases for Smart Metering (Technical Report (TR) 102 691) but it also 2848 includes eHealth use cases (TR 102 732) and some others. The Service 2849 requirements (Technical Standard (TS) 102 689) derived from the use 2850 cases, and the functional architecture specification (TS 102 690), 2851 will together define the M2M platform. The architecture consists of 2852 Service Capabilities (SC), which are basic functional building blocks 2853 for building the M2M platform. 2855 Smart Metering is seen as an important showcase for M2M. It is 2856 believed that the Service Enablers that were defined based on the 2857 work done for the Smart Metering and eHealth segments will also allow the 2858 building of other services like vending machines, alarm systems, etc.
2860 The functional architecture includes the following management-related 2861 definitions: 2863 o Network Management Functions: consists of all functions required 2864 to manage the Access, Transport and Core networks; these include 2865 Provisioning, Supervision, Fault Management, etc. 2867 o M2M Management Functions: consists of functions required to manage 2868 generic functionalities of M2M Applications and M2M Service 2869 Capabilities in the Network and Applications Domain. The 2870 management of the M2M Devices and Gateways may use specific M2M 2871 Service Capabilities. 2873 The Release 2 work of ETSI TC M2M started at the beginning of 2012. 2874 The following is a list of networking- and management-related topics 2875 under work: 2877 o Interworking with 3GPP networks. This is a new work item, and no 2878 discussion has been held on technical details. The intent is to 2879 define which ETSI TC M2M functions are applicable when a 3GPP network is 2880 used as transport. It is possible that this work would also cover 2881 details on how to use 3GPP interfaces, e.g. those defined in the 2882 SIMTC work, but also for charging and policy control. 2884 o Creating a Semantic Model or Data Abstraction layer for vertical 2885 industries and interworking. This would provide some high-level 2886 information description that would be usable for interworking with 2887 local networks (e.g. ZigBee), and also for verticals, and it 2888 would allow the ETSI Service Enablement layer to also understand 2889 the data, instead of being just a bit storage and bit pipe. All 2890 technical details are still under discussion, but it has been 2891 agreed that a function for this exists in the architecture at 2892 least for interworking. 2894 A.2.
OASIS 2896 Developments in OASIS related to the management of constrained networks 2897 are the following: 2899 o The Energy Interoperation TC works to define interaction between 2900 Smart Grids and their end nodes, including Smart Buildings, 2901 Enterprises, Industry, Homes, and Vehicles. The TC develops data 2902 and communication models that enable the interoperable and 2903 standard exchange of signals for dynamic pricing, reliability, and 2904 emergencies. The TC's agenda also extends to the communication of 2905 market participation data (such as bids), load predictability, and 2906 generation information. The first version of the Energy 2907 Interoperation specification is in final review. 2909 o OASIS Open Data Protocol (OData) aims to simplify the querying and 2910 sharing of data across disparate applications and multiple 2911 stakeholders for re-use in the enterprise, Cloud, and mobile 2912 devices. As a REST-based protocol, OData builds on HTTP, AtomPub, 2913 and JSON, using URIs to address and access data feed resources. It 2914 enables information to be accessed from a variety of sources 2915 including (but not limited to) relational databases, file systems, 2916 content management systems, and traditional Web sites. 2918 o Open Building Information Exchange (oBIX) aims to enable the 2919 mechanical and electrical control systems in buildings to 2920 communicate with enterprise applications, and to provide a 2921 platform for developing new classes of applications that integrate 2922 control systems with other enterprise functions. Enterprise 2923 functions include processes such as Human Resources, Finance, 2924 Customer Relationship Management (CRM), and Manufacturing. 2926 A.3. OMA 2928 OMA is currently working on the Lightweight M2M Enabler, OMA Device 2929 Management (OMA DM) Next Generation, and a white paper on M2M Device 2930 Classification.
2932 The Lightweight M2M Enabler covers both M2M device management and
2933 service management for constrained devices. For less constrained
2934 devices, the OMA DM Next Generation Enabler may be more
2935 appropriate. OMA DM is structured around Management Objects (MOs),
2936 each specified for a specific purpose. There is also ongoing work
2937 on various other MOs, such as the Gateway Management Object (GwMO).
2938 A draft of the "Lightweight M2M Requirements" is available.

2940 OMA Lightweight M2M and OMA DM Next Generation are important to M2M
2941 device management, provisioning, and service management, both at the
2942 protocol level and in the management objects. The OMA Lightweight
2943 M2M work seems to have grown beyond its original scope of targeting
2944 only very simple devices, i.e. devices that could not handle all the
2945 protocols that ETSI M2M requires.

2947 The white paper on M2M Device Classification [M2MDEVCLASS]
2948 provides an M2M device classification framework based on the
2949 horizontal attributes (e.g., wide or local area communication
2950 interface, IP stack, I/O capabilities) of interest to communication
2951 service providers and M2M service providers, independent of vertical
2952 markets such as smart grid, connected cars, e-health, etc. The
2953 white paper can be used as a tool to analyze the applicability of
2954 existing requirements and specifications developed by OMA and other
2955 cooperating standards development organizations.

2957 A.4.  IPSO Alliance

2959 The IPSO Alliance developed a profile for Device Functions supporting
2960 devices such as sensors with a limited user interface, where even
2961 basic parameters cannot be configured manually.
2962 This is a challenge especially for consumer devices that are managed
2963 by non-professional users.
The configuration of a web service
2964 application running on a constrained device goes beyond the
2965 autoconfiguration of the IP stack and local information (e.g. the
2966 proxy address). Constrained devices additionally need configuration
2967 related to the service provider and the user account, such as the
2968 address/locator of and the username for a web server.

2970 IPSO discusses the use cases and requirements for the user-friendly
2971 configuration of such information on a constrained device, and
2972 specifies how the IPSO profile's Device Function Set can be used in
2973 the process. It furthermore defines a standard format for the basic
2974 application configuration information.

2976 Appendix B.  Related Research Projects

2978 o  The EU project IoT-A (Internet-of-Things Architecture) develops an
2979    architectural reference model together with the definition of an
2980    initial set of key building blocks. These enable the integration
2981    of IoT into the service layer of the Future Internet and realize
2982    a novel resolution infrastructure, as well as a network
2983    infrastructure that allows a seamless communication flow between
2984    IoT devices and services. The development includes a conceptual
2985    model of a smart object as well as a basic Internet of Things
2986    reference model defining the interaction and communication between
2987    IoT devices and relevant entities. The requirements document
2988    also includes network and information management requirements (see
2989    [EU-IOT-A]).

2991 o  The EU project SENSEI produced the document 'End to End
2992    Networking and Management' for Wireless Sensor and Actuator
2993    Networks. This report presents several results of the research
2994    carried out in SENSEI's tasks related to End-to-End Networking and
2995    Management. Particular analyses address the naming and
2996    addressing of resources, the management of resources,
2997    resource plug and play, resource-level mobility, and traffic
2998    modelling.
The detailed analysis of each of these topics is
2999 intended to identify possible gaps between their specific
3000 mechanisms and the functional requirements in the SENSEI reference
3001 architecture (see [EU-SENSEI]).

3003 o  The EU project FI-WARE is developing the Things Management GE
3004    (generic enabler), which uses a data model derived from the OMA DM
3005    NGSI data model. Using the abstraction level of things, which
3006    includes non-technical things such as rooms, places, and people,
3007    the Things Management GE aims to discover and look up IoT
3008    resources that can provide information about things or actuate on
3009    these things. The system aims to manage the dynamic associations
3010    between IoT resources and things in order to allow internal
3011    components as well as external applications to interact with the
3012    system using the thing abstraction as the core concept (see
       [EU-FI-WARE]).

3014 o  The EU project BUTLER Smart Life discusses different IoT
3015    management aspects and collects requirements for smart life use
3016    cases (e.g. smart home or smart city), mainly from a service
3017    management point of view (see [EU-IOT-BUTLER]).

3019 Appendix C.  Open Issues

3021 o  Section 4 on the management requirements, as the core section of
3022    the document, needs further discussion and consolidation.

3024 Appendix D.  Change Log

3026 D.1.  02-03

3028 o  Extended the terminology section and removed some of the
3029    terminology addressed in the new LWIG terminology draft.
3030    Referenced the LWIG terminology draft.

3032 o  Moved Section 1.3 on Constrained Device Classes to the new LWIG
3033    terminology draft.

3035 o  Extended the classes of networks, considering the different types
3036    of radio and communication technologies in use and their
       dimensions.

3038 o  Extended the Problem Statement in Section 2 following the
3039    requirements listed in Section 4.

3041 o  The following requirements, which belong together and can be
3042    realized with similar or the same kind of solutions, have been
       merged:
3044    *  Distributed Management and Peer Configuration,

3046    *  Device status monitoring and Neighbor-monitoring,

3048    *  Passive Monitoring and Reactive Monitoring,

3050    *  Event-driven self-management - Self-healing and Periodic self-
3051       management,

3053    *  Authentication of management systems and Authentication of
3054       managed devices,

3056    *  Access control on devices and Access control on management
3057       systems,

3059    *  Management of Energy Resources and Data models for energy
3060       management,

3062    *  Software distribution (group-based firmware update) and Group-
3063       based provisioning.

3065 o  Deleted the empty section on gaps in network management
3066    standards, as they will be covered in a separate draft.

3068 o  Added links to the external pages mentioned.

3070 o  Added text on the OMA M2M Device Classification in the appendix.

3072 D.2.  01-02

3074 o  Extended the terminology section.

3076 o  Added additional text for the use cases concerning deployment
3077    type, network topology in use, network size, network capabilities,
3078    radio technology, etc.

3080 o  Added examples of device classes in a use case.

3082 o  Added additional text provided by Cao Zhen (China Mobile) for
3083    Mobile Applications and by Peter van der Stok for Building
3084    Automation.

3086 o  Added the new use cases 'Advanced Metering Infrastructure' and
3087    'MANET Concept of Operations in Military'.

3089 o  Added the section 'Managing the Constrainedness of a Device or
3090    Network', discussing the needs of very constrained devices.

3092 o  Added a note that the requirements in Section 4 are to be seen as
3093    standalone requirements and that the current document does not
3094    recommend any profile of requirements.

3096 o  Added Section 4 on the detailed requirements for constrained
3097    management, matched to management tasks such as fault management,
3098    monitoring, configuration management, security and access control,
3099    energy management, etc.

3101 o  Fixed nits and added references.
3103 o  Added Appendix A on related developments in other bodies.

3105 o  Added Appendix B on the work in related research projects.

3107 D.3.  00-01

3109 o  Split the section 'Networks of Constrained Devices' into the
3110    sections 'Network Topology Options' and 'Management Topology
3111    Options'.

3113 o  Added the use cases 'Community Network Applications' and 'Mobile
3114    Applications'.

3116 o  Added a Contributors section.

3118 o  Extended the section on 'Medical Applications'.

3120 o  Fixed nits and added references.

3122 Authors' Addresses

3124 Mehmet Ersue (editor)
3125 Nokia Siemens Networks

3127 Email: mehmet.ersue@nsn.com

3129 Dan Romascanu (editor)
3130 Avaya

3132 Email: dromasca@avaya.com

3134 Juergen Schoenwaelder (editor)
3135 Jacobs University Bremen

3137 Email: j.schoenwaelder@jacobs-university.de