IoT Operations Working Group                                 F. Foukalas
Internet-Draft                                             A. Tziouvaras
Intended status: Standards Track                          March 30, 2021
Expires: September 22, 2021

            draft-distributed-ml-iot-edge-cmp-foukalas-00.txt

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 22, 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
Abstract

   The next generation Internet requires decentralized and distributed
   intelligence in order to offer new types of services that serve the
   users' interests.  Such services will be enabled by deploying
   intelligence over a high volume of IoT devices in the form of a
   distributed protocol.  Such a protocol orchestrates the machine
   learning (ML) application in order to train models on the
   aggregated data available from the IoT devices.  Training is not an
   easy task in such a distributed environment, where the number of
   connected IoT devices scales up and the needs for both
   interoperability and computing are high.  This draft addresses both
   issues by combining two emerging technologies known as edge AI and
   fog computing.  The protocol procedures aggregate the data
   collected by the IoT devices into a fog node and apply edge AI for
   data analysis at the edge of the infrastructure.  The analysis of
   the IoT requirements resulted in an end-to-end ML protocol
   specification, which is presented throughout this draft.

Table of Contents

   1. Introduction
   2. Background and terminology
   3. Edge computing architecture
   4. Protocol stages
      4.1. Initial configuration
      4.2. FL training
      4.3. Cloud update
   5. Security Considerations
   6. IANA Considerations
   7. Conclusions
   8. References
      8.1. Normative References
      8.2. Informative References
   9. Acknowledgments

1. Introduction

   There is an evident need to address several challenges in order to
   offer robust IoT services by integrating edge computing with IoT,
   an approach known as IoT edge computing.  The concept of IoT edge
   computing has not yet been specified in detail, although two recent
   drafts already describe some aspects of such an Internet
   architecture.  Such an architecture becomes considerably more
   useful when distributed machine learning is deployed in the future
   Internet, where edge artificial intelligence will play an important
   role.  Towards this end, this draft first provides the IoT edge
   computing architecture, which includes the elements necessary to
   deploy distributed machine learning.  Second, the three stages of
   such distributed intelligence are described as protocol procedures,
   covering initialization, learning and cloud updates.  Details are
   given for all the protocol procedures of distributed machine
   learning for IoT edge computing.

2. Background and terminology

   Below we list a number of terms related to the distributed machine
   learning solution:

   End devices: End devices [1] are IoT devices that collect data
   while also having computing and networking capabilities.  End
   devices can be any type of device that can connect to the edge
   gateway and carries sensors for data collection.

   Edge gateway: The edge gateway is a server located at the edge of
   the network [1].  It provides large computational and networking
   capabilities and coordinates the FL process.  The edge gateway is
   used to relieve traffic on the network backhaul, as the end devices
   connect to the edge instead of the cloud.

   Cloud: The cloud supports very large computational capabilities [1]
   and is geographically located far from the end devices.  It
   provides accessibility to the edge gateway and remains agnostic of
   the number and type of participating end devices.  As a result, the
   cloud does not have an active role in the FL training process.

   Federated learning (FL): FL is a distributed ML technique which
   utilizes a large number of end devices that train their ML models
   locally without communicating with each other.  The locally trained
   models are dispatched to the edge gateway, which aggregates the
   collected models into one global model.  The global model is then
   broadcast to the end devices so that the next training round can
   begin.  During the FL process, the end devices do not share data or
   any other information.

   Constrained Application Protocol (CoAP): CoAP is a lightweight
   communication protocol that runs over UDP [RFC7252].  CoAP is ideal
   for devices with limited computational capabilities as it does not
   require a full protocol stack to operate.  CoAP supports the
   following message formats: confirmable (CON) messages,
   non-confirmable (NON) messages, acknowledgement (ACK) reply
   messages and reset (RST) reply messages.  CON messages are reliable
   message requests and are provided by marking a message as
   confirmable.  A confirmable message is retransmitted using a
   default timeout and exponential backoff between retransmissions,
   until the recipient sends an acknowledgement message (ACK) with the
   same Message ID.  When a recipient is not able to process a
   confirmable message, it replies with a reset message (RST) instead
   of an acknowledgement.  NON messages are message requests that do
   not require reliable transmission.  They are not acknowledged, but
   still carry a Message ID for duplicate detection.  When a recipient
   is not able to process a non-confirmable message, it may reply with
   a reset message (RST).
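   The CON retransmission behaviour described above can be illustrated
   with a short, self-contained sketch.  The timer constants are the
   defaults given in RFC 7252 (ACK_TIMEOUT, ACK_RANDOM_FACTOR,
   MAX_RETRANSMIT); the transmit and wait_for_ack callbacks are
   hypothetical placeholders for whatever CoAP stack an implementation
   actually uses, so the sketch shows only the timing logic, not a
   wire-level implementation.

     import random

     ACK_TIMEOUT = 2.0        # seconds, RFC 7252 default
     ACK_RANDOM_FACTOR = 1.5  # RFC 7252 default
     MAX_RETRANSMIT = 4       # RFC 7252 default

     def con_retransmission_schedule():
         # Timeouts a sender waits between successive transmissions of
         # a single CON message: a randomized initial timeout that is
         # doubled after every retransmission.
         timeout = random.uniform(ACK_TIMEOUT,
                                  ACK_TIMEOUT * ACK_RANDOM_FACTOR)
         schedule = []
         for _ in range(MAX_RETRANSMIT + 1):  # initial send + retries
             schedule.append(timeout)
             timeout *= 2.0
         return schedule

     def deliver_con(message_id, transmit, wait_for_ack):
         # Retransmit a CON message until an ACK (or RST) with the same
         # Message ID arrives or the retransmission budget runs out.
         # `transmit` and `wait_for_ack` are hypothetical callbacks.
         for timeout in con_retransmission_schedule():
             transmit(message_id)
             if wait_for_ack(message_id, timeout):
                 return True          # reliably delivered
         return False                 # give up after MAX_RETRANSMIT tries

     def deliver_non(message_id, transmit):
         # NON messages are sent once and never retransmitted by the
         # sender; the Message ID is kept for duplicate detection only.
         transmit(message_id)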
3. Edge computing architecture

   Figure 1 below depicts the IoT architecture we employ, where the
   three main entities are the end devices, the edge gateway and the
   cloud server.  Below we describe the functionalities of each module
   and how it interacts with the rest of the architecture:

   End devices: End devices can be classified as constrained or
   non-constrained according to the processing capabilities they
   provide.  Previous work in [2] classifies the end devices into the
   following categories:

   Class 0 (C0): This class contains sensor-like devices.  Although
   they may answer keep-alive signals and send basic indications, they
   most likely do not have the resources to securely communicate with
   the Internet directly (larger devices act as proxies, gateways, or
   servers) and cannot be secured or managed comprehensively in the
   traditional sense.

   Class 1 (C1): Such devices are quite constrained in code space and
   processing capabilities and cannot easily talk to other Internet
   nodes or employ a full protocol stack.  They are therefore
   considered ideal for the Constrained Application Protocol (CoAP)
   over UDP.

   Class 2 (C2): C2 devices are less constrained and capable of
   supporting most of the same protocol stacks as servers and laptop
   computers.

   Other (C3): Devices with capabilities significantly beyond those of
   Class 2 are left uncategorized (Others).  They may still be
   constrained by a limited energy supply, but can largely use
   existing protocols unchanged.

   In this architecture, cameras act as C1 devices and mobile phones
   as C2/Other devices.  Each device stores a local dataset
   independently of the others and does not have any access to the
   datasets of the rest of the devices.  End devices are also
   responsible for training their local ML model and for reporting the
   trained model to the edge gateway for the aggregation process.
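   How an edge gateway could turn the resources reported during
   resource discovery into one of the classes above is sketched below.
   The RAM/flash thresholds are only indicative (loosely based on the
   commonly used boundaries of RFC 7228); the exact cut-offs, like the
   classify() helper itself, are illustrative assumptions rather than
   part of the protocol.

     from enum import Enum

     class DeviceClass(Enum):
         C0 = "C0"   # sensor-like, needs a proxy or gateway
         C1 = "C1"   # constrained, suited to CoAP over UDP
         C2 = "C2"   # can run most full protocol stacks
         C3 = "C3"   # "Other": essentially unconstrained

     def classify(ram_kib, flash_kib):
         # Map memory resources reported by a device to a class label.
         # Thresholds are illustrative, loosely based on RFC 7228.
         if ram_kib < 10 or flash_kib < 100:
             return DeviceClass.C0
         if ram_kib < 50 or flash_kib < 250:
             return DeviceClass.C1
         if ram_kib < 1024 or flash_kib < 4096:  # assumed C2 boundary
             return DeviceClass.C2
         return DeviceClass.C3

     # A camera reporting 32 KiB of RAM and 256 KiB of flash would be
     # treated as a C1 device, a mobile phone as C3 ("Other").
     print(classify(32, 256))        # DeviceClass.C1
     print(classify(4096, 65536))    # DeviceClass.C3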
   Edge gateway: The edge gateway is responsible for collecting the
   locally trained models from the end devices and for aggregating
   these models into a global model.  Further, the edge gateway is
   responsible for dispatching the trained model to the cloud in order
   to make it available to developers.  In order to support the
   aforementioned services, the edge gateway employs the following
   controller interfaces:

   Southbound controller: The southbound interface is responsible for
   handling the communication between the edge gateway and the end
   devices [5].  The southbound controller also performs the resource
   discovery, resource authentication, device configuration and global
   model dispatch tasks.  The resource discovery process detects and
   identifies the devices that participate in the FL training and
   establishes a communication link between the edge and each device.
   The resource authentication process authenticates the end devices
   by matching each device's unique ID against a trusted ID list that
   is stored at the edge.  The device configuration task broadcasts
   the ML model hyperparameters to the participating end devices.
   Finally, the global model dispatch operation broadcasts the
   aggregated global model to the trusted connected devices.

   Central controller: The central controller is the core component of
   network artificial intelligence, which can be referred to as the
   "Network Brain" [4].  It carries out the FL aggregation process and
   is responsible for stopping the FL process when the model
   converges.  It also performs the data sharing, global model
   training, global model aggregation and device scheduling
   functionalities.

   Northbound interface: The northbound interface is provided by a
   gateway component to a remote network [5], e.g. a cloud, home or
   enterprise network.  The northbound interface is a data plane
   interface which manages the communication of the edge gateway with
   the cloud.  Under this premise, the northbound interface is
   responsible for the model sharing and model publish
   functionalities.  Model sharing is the function by which the edge
   is authenticated by the cloud as a trusted party and thus gains the
   right to upload the trained FL model to the cloud.  Model publish
   is the process of uploading the trained model to the cloud so as to
   make it available to developers.

   Cloud server: The cloud server may provide virtually unlimited
   storage and processing power [3].  The reliance of IoT on back-end
   cloud computing brings additional advantages such as flexibility
   and efficiency.  The cloud hosts the trained FL model, which can be
   used by developers for AR applications.

   FL model: The FL model should operate separately from the dataset
   used for the training process.  In this sense, the ML model
   architecture and the dataset type may change without affecting the
   overall FL training process.  This interoperability is ensured
   because we design the FL independently of the web protocol and
   thus, the end device-edge communication is not affected by any
   changes in the IoT architecture.  Further, the datasets of each
   device are stored locally and interact only with the local FL
   model, while the edge does not have any access to them.  As a
   result, the functionality of the FL training is affected neither by
   the dataset type or size, nor by the FL model architecture.

                     +----------------------------+
                     |        End devices         |
                     | * Data collection          |
                     | * Reporting                |
                     | * Local model training     |
                     +----------------------------+
                                    |
                                    |  FL training
                                    v
   +-----------------------------------------------------------------+
   |                          Edge gateway                           |
   |                                                                 |
   |  +-------------------+  +----------------+  +-----------------+ |
   |  | Southbound        |  | Central        |  | Northbound      | |
   |  | interface         |  | controller     |  | interface       | |
   |  |                   |  |                |  |                 | |
   |  | * Resource        |  | * Device       |  | * Model sharing | |
   |  |   discovery       |  |   scheduling   |  | * Model publish | |
   |  | * Resource        |  | * Global model |  +-----------------+ |
   |  |   authentication  |  |   aggregation  |                      |
   |  | * Device          |  +----------------+                      |
   |  |   configuration   |                                          |
   |  | * Global model    |                                          |
   |  |   dispatch        |                                          |
   |  +-------------------+                                          |
   +-----------------------------------------------------------------+
                                    |
                                    |  Model to cloud
                                    v
                     +----------------------------+
                     |        Cloud server        |
                     | * Store model              |
                     +----------------------------+

                     Figure 1: Protocol architecture
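   The split of responsibilities across the three gateway components
   can be summarized with a small interface sketch.  The class and
   method names below are illustrative only (they simply mirror the
   task lists above) and do not correspond to any standardized API.

     from abc import ABC, abstractmethod

     class SouthboundController(ABC):
         # Edge gateway interface towards the end devices.
         @abstractmethod
         def discover_resources(self): ...
         @abstractmethod
         def authenticate_device(self, device_id): ...
         @abstractmethod
         def configure_device(self, device_id, hyperparameters): ...
         @abstractmethod
         def dispatch_global_model(self, model): ...

     class CentralController(ABC):
         # Core FL logic ("Network Brain") on the edge gateway.
         @abstractmethod
         def schedule_devices(self, authenticated_ids): ...
         @abstractmethod
         def aggregate(self, local_models): ...
         @abstractmethod
         def converged(self): ...

     class NorthboundInterface(ABC):
         # Edge gateway interface towards the cloud.
         @abstractmethod
         def share_model(self, edge_id): ...        # edge-cloud auth
         @abstractmethod
         def publish_model(self, model, version): ...  # model upload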
4. Protocol stages

   In this section we describe the stages which are used by the edge
   computing protocol to perform the FL process.

4.1. Initial configuration

   Figure 2 below depicts the initial configuration stage of the edge
   IoT protocol using CoAP.  The initial configuration stage provides
   the necessary functionalities for establishing the IoT-edge gateway
   communication link and for identifying the end devices that will
   participate in the training process.  These functionalities are the
   following:

   1. Resource discovery: The end devices are discovered by the edge
   and employ CoAP to inform the edge gateway about their
   computational capabilities.  More specifically, the end devices
   send a NON message to the edge containing the resource type of the
   corresponding device, i.e. C0, C1, C2 or C3.  The NON message type
   is not confirmable and thus the edge informs the devices with an
   RST message only in case of a transmission error.  The edge then
   decides which device types may participate in the training process
   and sends back a NON message containing the resource discovery
   decision to the corresponding devices.

   2. Resource authentication: The end devices are authenticated by
   the edge as trusted parties and are allowed to participate in the
   training process.  On the contrary, any unauthenticated devices
   cannot participate in the training.  To this end, the previously
   discovered end devices send a NON message to the edge containing
   the ID information of the transmitting device.  The edge then
   informs each device if it failed to receive the corresponding ID by
   dispatching an RST message.  Once the edge collects all the device
   IDs, it performs the device authentication process, which
   designates which end devices will participate in the FL process.
   Finally, each device is informed about the edge decision by a NON
   message that contains the authentication outcome.  Only
   authenticated end devices are eligible to participate in the FL
   training.
   3. Device scheduling: The edge gateway selects the subset of the
   authenticated end devices that will participate in the training and
   dispatches the necessary messages to inform them about its
   decision.  Under this premise, it dispatches a NON message
   containing this information to each of the authenticated devices.
   The devices send back an RST response in case of transmission
   failure, causing the edge to retransmit the message.  In case of
   successful transmission of the original NON message, the eligible
   devices proceed to the device configuration phase.

   4. Device configuration: The edge gateway employs CoAP to broadcast
   the FL model hyperparameters to the end devices in order to
   properly configure their local models.  To this end, the end
   devices dispatch a NON message informing the edge about their
   computational capabilities.  The edge sends back an RST response in
   case of a transmission error, or no message in case of successful
   message delivery.  The edge then processes the obtained information
   and designates the model architecture and ML parameters that will
   be used for the FL process.  It then broadcasts the related
   decisions back to the end devices through a NON message and all the
   eligible devices enter the training phase.  An illustrative sketch
   of these four steps is given after Figure 2.

   After the initial configuration process completes, the edge IoT
   protocol continues to the FL training stage.

   +-------------+                         +--------------+
   | End devices |                         | Edge gateway |
   +-------------+                         +--------------+
          |  NON message {Resource type}          |
          |-------------------------------------->|
          |                                       |
          |                           +--------------------+
          |                           | Resource discovery |
          |                           +--------------------+
          |  NON message {Discovery}              |
          |<--------------------------------------|
          |  NON message {Device ID}              |
          |-------------------------------------->|
          |                                       |
          |                           +-------------------------+
          |                           | Resource authentication |
          |                           +-------------------------+
          |  NON message {Authentication}         |
          |<--------------------------------------|
          |                                       |
          |                           +-------------------+
          |                           | Device scheduling |
          |                           +-------------------+
          |  NON message {Scheduling info}        |
          |<--------------------------------------|
          |  NON message {Avl. resources}         |
          |-------------------------------------->|
          |                                       |
          |                           +------------------+
          |                           | FL configuration |
          |                           +------------------+
          |  NON message {Hyperparameters}        |
          |<--------------------------------------|
          |                                       |

             Figure 2: Protocol initial configuration stage
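   The sketch below mirrors the four steps of Figure 2 from the edge
   gateway's point of view.  All names in it (the trusted ID list, the
   message dictionaries, the hyperparameter choices) are illustrative
   assumptions that stand in for the actual payload formats, which
   this draft does not fix.

     TRUSTED_IDS = {"cam-01", "cam-02", "phone-07"}   # assumed list
     ELIGIBLE_CLASSES = {"C1", "C2", "C3"}   # classes allowed to train

     def resource_discovery(reports):
         # reports: {device_id: resource_class} taken from the
         # discovery NON messages; keep only eligible device types.
         return {dev for dev, cls in reports.items()
                 if cls in ELIGIBLE_CLASSES}

     def resource_authentication(discovered):
         # Keep only devices whose ID matches the trusted ID list.
         return {dev for dev in discovered if dev in TRUSTED_IDS}

     def device_scheduling(authenticated, max_devices=2):
         # Select how many authenticated devices train in this round.
         return sorted(authenticated)[:max_devices]

     def device_configuration(scheduled, capabilities):
         # Derive the hyperparameters broadcast in the final NON
         # message from the capabilities (here: RAM in KiB) reported.
         smallest = min(capabilities[dev] for dev in scheduled)
         model = "cnn-small" if smallest < 512 else "cnn-large"
         return {"model": model, "local_epochs": 1,
                 "batch_size": 16, "devices": scheduled}

     reports = {"cam-01": "C1", "phone-07": "C3", "sensor-99": "C0"}
     capabilities = {"cam-01": 256, "phone-07": 4096}
     scheduled = device_scheduling(
         resource_authentication(resource_discovery(reports)))
     print(device_configuration(scheduled, capabilities))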
4.2. FL training

   The FL training stage is the stage in which the actual FL takes
   place.  Figure 3 depicts the functionalities we employ in order to
   support the FL process.  These functionalities are the following:

   1. Local model training: In this scenario, the end devices that are
   eligible to participate in the FL training send a NON message to
   request the ML model from the edge.  The edge responds with an RST
   message if necessary, to trigger a retransmission of the original
   NON message.  The edge then dispatches the global model to the end
   devices, again using the NON message format.  The devices respond
   with an RST message in case the transmission resulted in errors,
   and the edge retransmits the NON message to the corresponding
   device.  Afterwards, each device proceeds to locally train the
   model using its local dataset.

   2. Device reporting: Once a device completes the local model
   training, it dispatches its model to the edge gateway through the
   device reporting process.  Due to the constrained nature of the
   participating devices, the end device-edge communication is
   implemented using the NON message format.  To this end, the devices
   dispatch their IDs and the locally trained models to the edge via
   NON messages, which are not followed by an ACK from the server
   side.  If the edge fails to obtain a local model, it notifies the
   corresponding end device with an RST reply, which triggers a
   retransmission of the original NON message to the edge.  After the
   edge obtains every local model, it conducts the global model
   aggregation process and produces one global model, which is
   broadcast back to the devices (a weighted-averaging sketch is given
   after Figure 3).  The FL training process is repeated until the
   predefined number of FL rounds is reached.

   After the FL training completes, the edge computing protocol enters
   the cloud update stage.

   +-------------+                         +--------------+
   | End devices |                         | Edge gateway |
   +-------------+                         +--------------+
          |  NON message {Model request}          |
          |-------------------------------------->|
          |                                       |
          |  NON message {Global model}           |
          |<--------------------------------------|
   +----------------------+                       |
   | Local model training |                       |
   +----------------------+                       |
          |  NON message {Local model}            |
          |-------------------------------------->|
          |                                       |
          |                           +--------------------------+
          |                           | Global model aggregation |
          |                           +--------------------------+
          |  NON message {Model request}          |
          |-------------------------------------->|
          |                                       |
          |  NON message {Global model}           |
          |<--------------------------------------|
   +----------------------+                       |
   | Local model training |                       |
   +----------------------+                       |
          |                                       |

                  Figure 3: Protocol training stage
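   The global model aggregation step can be illustrated with a short
   federated-averaging sketch.  The draft does not mandate a
   particular aggregation rule, so the sample-count weighting below
   (FedAvg style) and the numpy-based representation of the model
   weights are assumptions made purely for illustration.

     import numpy as np

     def aggregate_global_model(local_models, sample_counts):
         # local_models : one entry per reporting device, each entry a
         #                list of numpy arrays holding that device's
         #                model weights layer by layer.
         # sample_counts: local training samples per device, used to
         #                weight the average (FedAvg style).
         weights = np.asarray(sample_counts, dtype=float)
         weights /= weights.sum()
         global_model = []
         for layer in zip(*local_models):       # iterate layer-wise
             stacked = np.stack(layer)          # (n_devices, ...)
             avg = np.tensordot(weights, stacked, axes=1)
             global_model.append(avg)
         return global_model

     # Two devices reporting a toy two-layer model:
     dev_a = [np.ones((2, 2)), np.zeros(2)]
     dev_b = [np.zeros((2, 2)), np.ones(2)]
     result = aggregate_global_model([dev_a, dev_b], [300, 100])
     print(result)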
4.3. Cloud update

   Figure 4 below depicts the cloud update stage of the edge computing
   protocol, which is invoked after the FL training completes.  The
   cloud update consists of the following functionalities:

   1. Model sharing: The edge gateway informs the cloud of its
   intention to upload the trained FL model.  The cloud then
   authenticates the edge and decides whether it can be considered a
   trusted party.  When the model sharing process successfully
   completes, the edge is authenticated and can proceed to the model
   publish functionality.  Because no IoT devices participate in this
   communication, we use the more reliable CON message format instead
   of relying on NON messages.  To this end, the edge dispatches a CON
   message to the cloud that contains its ID, to inform it that the FL
   process has been completed.  The cloud in return responds with an
   ACK or RST reply that indicates whether the initial request was
   successfully delivered.  The cloud then performs the edge
   authorization procedure according to the received ID and sends a
   CON message to the edge that contains the authorization result.

   2. Model publish: In this scenario, the edge sends the trained
   model and the model version through a CON message to the cloud.
   The edge then waits for an ACK or RST reply depending on the
   success of the transmission.  If the model is transmitted without
   errors, the cloud responds with an ACK message.  On the contrary,
   transmission errors result in an RST reply from the cloud, which
   triggers a retransmission from the edge.  When the cloud
   successfully obtains the trained ML model, it stores it and makes
   it available to the users.

   +--------------+                        +-------+
   | Edge gateway |                        | Cloud |
   +--------------+                        +-------+
          |  CON message {Edge ID}                |
          |-------------------------------------->|
          |                                       |
          |  ACK / RST reply                      |
          |<--------------------------------------|
          |                                       |
          |                           +----------------+
          |                           | Authentication |
          |                           +----------------+
          |  CON message {Authorization}          |
          |<--------------------------------------|
          |                                       |
          |  ACK / RST reply                      |
          |-------------------------------------->|
          |                                       |
          |  CON message {Model, version}         |
          |-------------------------------------->|
          |                                       |
          |  ACK / RST reply                      |
          |<--------------------------------------|
          |                           +-------------+
          |                           | Model store |
          |                           +-------------+
          |                                       |

                Figure 4: Protocol cloud update stage

5. Security Considerations

   The FL training process is considered a difficult task, as the
   achievable accuracy of the model is affected by the characteristics
   of the local datasets.  Local datasets are the data collected by
   the end devices, which are stored locally on each device.  In order
   to ensure data privacy, we make sure that no data exchange takes
   place between the end devices or between the end devices and the
   edge gateway.  In this sense, the edge gateway aggregates the local
   models without utilizing any local dataset information and the data
   privacy of each end device is ensured.  Regarding data security,
   the end device-edge gateway communication can be encrypted using
   any existing encryption technique such as AES.  Such an encryption
   mechanism can be applied either for data sharing between the end
   devices and the edge or for encrypting the messages exchanged
   between those entities, similarly to [6].  The encryption mechanism
   can be applied directly to the transmitted CoAP messages, provided
   that a decryption process is deployed on the receiver side.
   Nonetheless, the implementation and deployment of such a technique
   is outside the scope of this work.
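   As an illustration of the last point, the following sketch encrypts
   a CoAP payload with AES-GCM before it is handed to the CoAP layer.
   It assumes the third-party Python "cryptography" package and a
   pre-shared key; key distribution and the choice of cipher are
   assumptions of the sketch, not requirements of this draft.

     import os
     from cryptography.hazmat.primitives.ciphers.aead import AESGCM

     key = AESGCM.generate_key(bit_length=128)   # pre-shared key
     aead = AESGCM(key)

     def encrypt_payload(plaintext, message_id):
         # Encrypt a CoAP payload; the Message ID is bound as
         # associated data so a ciphertext cannot be replayed under a
         # different message.
         nonce = os.urandom(12)
         aad = message_id.to_bytes(2, "big")
         return nonce + aead.encrypt(nonce, plaintext, aad)

     def decrypt_payload(blob, message_id):
         nonce, ciphertext = blob[:12], blob[12:]
         aad = message_id.to_bytes(2, "big")
         return aead.decrypt(nonce, ciphertext, aad)

     blob = encrypt_payload(b"local model weights", 0x1A2B)
     assert decrypt_payload(blob, 0x1A2B) == b"local model weights"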
6. IANA Considerations

   There are no IANA considerations related to this document.

7. Conclusions

   In this draft we present an FL protocol suitable for distributed ML
   in an IoT network.  We provide a functional architecture that
   consists of a number of end devices, an edge gateway and a cloud
   server.  In order to support the FL training process, we provide
   three distinct protocol stages that coordinate the distributed
   learning process.  To this end we consider the initial
   configuration, the FL training and the cloud update stages, each of
   which provides the necessary functionalities to the FL process.
   The FL training process is conducted by leveraging the CoAP
   communication protocol and takes place between the end devices and
   the edge server.  After the training finishes, the trained FL model
   is stored in the cloud and is made accessible to the users.

8. References

8.1. Normative References

   [1]  "IoT Edge Computing Challenges and Functions", IETF draft,
        https://tools.ietf.org/html/draft-hong-t2trg-iot-edge-computing-01,
        July 2020.

   [2]  Pisani, F., de Oliveira, F. M. C., Gama, E. S., Immich, R.,
        Bittencourt, L. F., and E. Borin, "Fog Computing on
        Constrained Devices: Paving the Way for the Future IoT",
        arXiv, https://arxiv.org/abs/2002.05300, 2020.

   [3]  "Distributed Fault Management for IoT Networks", IETF draft,
        https://tools.ietf.org/html/draft-hongcs-t2trg-dfm-00,
        December 2018.

   [4]  "IoT Edge Computing: Initiatives, Projects and Products",
        IETF draft,
        https://tools.ietf.org/html/draft-defoy-t2trg-iot-edge-computing-background-00,
        May 2020.

   [5]  IETF draft draft-hong-t2trg-iot-edge-computing,
        https://www.potaroo.net/ietf/idref/draft-hong-t2trg-iot-edge-computing/#ref-RFC6291.

   [6]  Rahman, M. A., Hossain, M. S., Islam, M. S., Alrajeh, N. A.,
        and G. Muhammad, "Secure and Provenance Enhanced Internet of
        Health Things Framework: A Blockchain Managed Federated
        Learning Approach", IEEE Access, vol. 8, pp. 205071-205087,
        November 2020.

8.2. Informative References

   [RFC7252]  Shelby, Z., Hartke, K., and C. Bormann, "The Constrained
              Application Protocol (CoAP)", RFC 7252, June 2014,
              https://tools.ietf.org/html/rfc7252.

9. Acknowledgments

   Copyright (c) 2021 IETF Trust and the persons identified as authors
   of the code.  All rights reserved.

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions
   are met: Redistributions of source code must retain the above
   copyright notice, this list of conditions and the following
   disclaimer.

   Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the
   distribution.

   Neither the name of Internet Society, IETF or IETF Trust, nor the
   names of specific contributors, may be used to endorse or promote
   products derived from this software without specific prior written
   permission.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
   FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
   OF THE POSSIBILITY OF SUCH DAMAGE.
Authors' Addresses

   Fotis Foukalas
   Cognitive Innovations
   Kifisias 125-127, 11524, Athens, Greece
   Email: fotis@cogninn.com

   Athanasios Tziouvaras
   Cognitive Innovations
   Kifisias 125-127, 11524, Athens, Greece
   Email: thanasis@cogninn.com