COINRG                                                          I. Kunze
Internet-Draft                                                 K. Wehrle
Intended status: Informational                               RWTH Aachen
Expires: 21 August 2021                                       D. Trossen
                                                                  Huawei
                                                          M.J. Montpetit
                                                               Concordia
                                                        17 February 2021

                   Use Cases for In-Network Computing
                      draft-irtf-coinrg-use-cases-00

Abstract

   Computing in the Network (COIN) comes with the prospect of deploying
   processing functionality on networking devices, such as switches and
   network interface cards.  While such functionality can be beneficial
   in several contexts, it has to be carefully placed into the context
   of the general Internet communication.
   This document discusses some use cases to demonstrate how real
   applications can benefit from COIN and to showcase essential
   requirements that have to be fulfilled by COIN applications.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 21 August 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Taxonomy
   4.  Industrial Use Cases
     4.1.  IIoT Network Scenario
     4.2.  In-Network Control / Time-sensitive applications
       4.2.1.  Description
       4.2.2.  Characterization
       4.2.3.  Existing Solutions
       4.2.4.  Opportunities and Research Questions for COIN
       4.2.5.  Requirements
     4.3.  Large Volume Applications
       4.3.1.  Description
       4.3.2.  Characterization
       4.3.3.  Existing Solutions
       4.3.4.  Opportunities and Research Questions for COIN
       4.3.5.  Requirements
     4.4.  Industrial Safety
       4.4.1.  Description
       4.4.2.  Characterization
       4.4.3.  Existing Solutions
       4.4.4.  Opportunities and Research Questions for COIN
       4.4.5.  Requirements
   5.  Immersive Experiences
     5.1.  Mobile Application Offloading
       5.1.1.  Description
       5.1.2.  Characterization
       5.1.3.  Existing Solutions
       5.1.4.  Opportunities and Research Questions for COIN
       5.1.5.  Requirements
     5.2.  Extended Reality (XR)
       5.2.1.  Description
       5.2.2.  Characterization
       5.2.3.  Existing Solutions
       5.2.4.  Opportunities and Research Questions for COIN
       5.2.5.  Requirements
     5.3.  Personalised and interactive performing arts
       5.3.1.  Description
       5.3.2.  Characterization
       5.3.3.  Existing solutions
       5.3.4.  Opportunities and Research Questions for COIN
       5.3.5.  Requirements
   6.  Infrastructure Services
     6.1.  Distributed AI
       6.1.1.  Description
       6.1.2.  Characterization
       6.1.3.  Existing Solutions
       6.1.4.  Opportunities and Research Questions for COIN
       6.1.5.  Requirements
     6.2.  Content Delivery Networks
       6.2.1.  Description
       6.2.2.  Characterization
       6.2.3.  Existing Solutions
       6.2.4.  Opportunities and Research Questions for COIN
       6.2.5.  Requirements
     6.3.  CFaaS
       6.3.1.  Description
       6.3.2.  Characterization
       6.3.3.  Existing Solutions
       6.3.4.  Opportunities and Research Questions for COIN
       6.3.5.  Requirements
   7.  Security Considerations
   8.  IANA Considerations
   9.  Conclusion
   10. List of Use Case Contributors
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

   The Internet is a best-effort network that offers limited guarantees
   regarding the timely and successful transmission of packets.  Data
   manipulation and protocol functionality are generally provided by
   the end-hosts, while the network is kept simple and only intended as
   a "store and forward" packet facility.  This design choice is
   suitable for a wide variety of applications and has helped in the
   rapid growth of the Internet.  However, there are several domains
   that, for example, demand strict performance guarantees which cannot
   be provided over regular best-effort networks, or that require
   tighter closed-loop integration to manage data flows.  In this
   context, flexibly distributing computation tasks across the network
   can help to achieve these guarantees and increase the overall
   performance.

   However, different domains and different applications have different
   requirements, and it remains unclear, and a topic of academic
   research, whether there can be a common solution to all COIN
   scenarios or whether solutions have to be tailored to each scenario.

   This document presents a series of applications and their
   requirements to illustrate the importance of COIN for realizing
   advanced applications.  Based on these, the draft aims to create a
   taxonomy of elementary COIN scenarios with the goal of guiding
   future research work.

2.  Terminology

   Programmable network devices (PNDs): Network devices, such as
   network interface cards and switches, that are programmable, e.g.,
   using P4 or other languages.
   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

3.  Taxonomy

   The use cases in this draft aim at outlining the specific
   capabilities that in-network computing may bring to their
   realization.  To attain this goal, we will use the following
   taxonomy to describe each of the use cases:

   1.  Description: explanation of the use case behavior

   2.  Characterization: explanation of the services that are being
       realized and the semantics of interactions in the use case

   3.  Existing solutions: if existing, outline of current methods of
       realizing the use case

   4.  Opportunities and research questions for COIN: outline of how
       PNDs may support or improve on the use case in terms of
       performance and other metrics, and essential questions that are
       suitable for guiding research

   5.  Requirements: description of the requirements for any solutions
       that may need to be developed along the opportunities outlined
       in item 4

4.  Industrial Use Cases

   The industrial domain is characterized by diverse sets of
   requirements which often cannot be provided over regular best-effort
   networks.  Consequently, there is a large number of specialized
   applications and protocols designed to give the required strict
   performance guarantees, e.g., regarding real-time capabilities.
   Time-Sensitive Networking [TSN], for example, enhances standard
   Ethernet to achieve these requirements on the link layer by
   statically reserving shares of the bandwidth.  In the Industrial
   Internet of Things (IIoT), however, more and more parts of the
   industrial production domain are interconnected.  This increases
   the complexity of the industrial networks, makes them more dynamic,
   and creates more diverse sets of requirements.
   In these scenarios, solutions on the link layer alone are not
   sufficient.

   The challenge is to develop concepts that can satisfy the dynamic
   performance requirements of modern industrial networks.  COIN
   presents a promising starting point because it allows computation
   tasks to be flexibly distributed across the network, which can help
   to manage dynamic changes.  As specifying general requirements for
   the industrial production domain is difficult due to the mentioned
   diversity, this document next characterizes and analyzes different
   scenarios to showcase potential requirements for the industrial
   production domain.

4.1.  IIoT Network Scenario

   Common components of the IIoT can be divided into three categories
   as illustrated in Figure 1.  Following
   [I-D.mcbride-edge-data-discovery-overview], EDGE DEVICES, such as
   sensors and actuators, constitute the boundary between the physical
   and digital world.  They communicate the current state of the
   physical world to the digital world by transmitting sensor data, or
   let the digital world interact with the physical world by executing
   actions after receiving (simple) control information.  The
   processing of the sensor data and the creation of the control
   information is done on COMPUTING DEVICES.  They range from low-
   powered controllers close to the EDGE DEVICES to more powerful edge
   or remote clouds at greater distances.  The connection between the
   EDGE and COMPUTING DEVICES is established by NETWORKING DEVICES.
   In the industrial domain, they range from standard devices, e.g.,
   typical Ethernet switches, which can interconnect all Ethernet-
   capable hosts, to proprietary equipment with proprietary protocols
   only supporting hosts of specific vendors.

   --------
   |Sensor| ---|                      ~~~~~~~~~~~~      ------------
   --------    |                     {  Internet  }-----|Remote Cloud|
      .        |   --------------     ~~~~~~~~~~~~      ------------
   --------    |---|Access Point|----------|                  |
   |Sensor| ---|   --------------          |                  |
   --------                             --------              |
      .                            |----|Switch|---------------
      .                            |    --------
      .                            |                   ------------
   ----------                      |-------------------| Controller |
   |Actuator| ---|                                     ------------
   ----------    |    --------                          ------------
      .          |----|Switch|--------------------------| Edge Cloud |
   ----------    |    --------                          ------------
   |Actuator| ---|
   ----------

   |----------|    |---------------------|    |--------------------|
   EDGE DEVICES      NETWORKING DEVICES         COMPUTING DEVICES

    Figure 1: Industrial networks show a high level of heterogeneity.

4.2.  In-Network Control / Time-sensitive applications

4.2.1.  Description

   The control of physical processes and components of a production
   line is essential for the growing automation of production and
   ideally allows for a consistent quality level.  Traditionally,
   control has been exercised by control software running on
   programmable logic controllers (PLCs) located directly next to the
   controlled process or component.  This approach is best-suited for
   settings with a simple model that is focused on a single or a few
   controlled components.

   Modern production lines and shop floors are characterized by an
   increasing number of involved devices and sensors, a growing level
   of dependency between the different components, and more complex
   control models.  Centralized control is desirable to manage the
   large amount of available information, which often has to be pre-
   processed or aggregated with other information before it can be
   used.  PLCs are not designed for this array of tasks, and
   computations could theoretically be moved to more powerful devices.
   However, these devices are no longer close to the controlled
   objects and thus induce additional latency.

4.2.2.  Characterization

   A control process consists of two main components as illustrated in
   Figure 2: a system under control and a controller.  In feedback
   control, the current state of the system is monitored, e.g., using
   sensors, and the controller influences the system based on the
   difference between the current and the reference state to keep it
   close to this reference state.

   reference
     state      ------------        --------    Output
   ----------> | Controller | ---> | System | ---------->
       ^        ------------        --------       |
       |                                           |
       |               observed state              |
       |                  ---------                |
       ------------------| Sensors | <--------------
                          ---------

             Figure 2: Simple feedback control model.

   Apart from the control model, the quality of the control primarily
   depends on the timely reception of the sensor feedback, because the
   controller can only react if it is notified of changes in the
   system state.  Depending on the dynamics of the controlled system,
   the control can be subject to tight latency constraints, often in
   the single-digit millisecond range.  While low latencies are
   essential, there is an even greater need for stable and
   deterministic levels of latency: controllers can generally cope
   with different levels of latency if they are designed for them, but
   they are significantly challenged by dynamically changing or
   unstable latencies.  The unpredictable latency of the Internet
   exemplifies this problem if off-premise cloud platforms are
   included.

4.2.3.  Existing Solutions

   Control functionality is traditionally executed on PLCs close to
   the machinery, and these are only rarely upgraded.  Further, the
   PLCs require vendor-specific implementations and are often hard to
   update, which makes such control processes inflexible and difficult
   to manage.  Moving computations to more freely programmable devices
   thus has the potential of significantly improving the flexibility.

4.2.4.  Opportunities and Research Questions for COIN

   Control models can, in general, become quite involved, but there is
   a variety of control algorithms that are composed of simple
   computations such as matrix multiplications.  These are supported
   by some PNDs, and it is thus possible to compose simplified
   approximations of the more complex algorithms and deploy them in
   the network.  While the simplified versions yield less accurate
   control, they allow for a quicker response and might be sufficient
   to operate a basic tight control loop, while the overall control
   can still be exercised from the cloud.

   Opportunities:

   *  Speed up control update rates by leveraging the privileged
      position in the network

   The problem, however, is that networking devices typically only
   allow for integer-precision computation, while floating-point
   precision is needed by most control algorithms.  Additionally,
   computational capabilities vary across the available PNDs.  Yet,
   early approaches like [RUETH] and [VESTIN] have already shown the
   general applicability of such ideas, but there are still a lot of
   open research questions, not limited to the following:

   Research Questions:

   *  How can one derive the simplified versions of the overall
      controller?

      -  How complex can they become?

      -  How can one take the limited computational precision of
         networking devices into account when making them?

   *  How does one distribute the simplified versions in the network?

   *  How does the overall controller interact with the simplified
      versions?

4.2.5.  Requirements

   Req 4.2.1: Control approaches MUST provide stable/deterministic
   latencies.

   Req 4.2.2: Control approaches SHOULD provide low latencies.

   Req 4.2.3: The interaction between the in-network control function
   and the overall controller SHOULD be explicit.
   Req 4.2.4: Actions of the control approaches SHOULD be explicit to
   the overall controller.

   Req 4.2.5: Actions of the control approaches MUST be overridable by
   the overall controller.

4.3.  Large Volume Applications

4.3.1.  Description

   In the IIoT, processes and machines can be monitored more
   effectively, resulting in more available information.  This data
   can be used to deploy machine learning (ML) techniques and
   consequently help to find previously unknown correlations between
   different components of the production, which in turn helps to
   improve the overall production system.  Newly gained knowledge can
   be shared between different sites of the same company or even
   between different companies [PENNEKAMP].

   Traditional company infrastructure is equipped neither for the
   management and storage of such large amounts of data nor for the
   computationally expensive training of ML approaches.  Similar to
   the considerations in Section 4.2, off-premise cloud platforms
   offer cost-effective solutions with a high degree of flexibility
   and scalability.  While the unpredictable latency of the Internet
   is only a secondary problem for this use case, moving all data to
   off-premise locations primarily poses infrastructural challenges.

4.3.2.  Characterization

   Processes in the industrial domain are monitored by distributed
   sensors, which range from simple binary sensors (e.g., light
   barriers) to sophisticated sensors measuring the system with
   varying degrees of resolution.  Sensors can further serve different
   purposes, as some might be used for time-critical process control
   while others are only used as redundant fallbacks.  Overall, there
   is a high level of heterogeneity, which makes managing the sensor
   output a challenging task.
   Depending on the deployed sensors and the complexity of the
   observed system, the resulting overall data volume can easily be in
   the range of several Gbit/s [GLEBKE].  Using off-premise clouds for
   managing the data requires uploading or streaming the growing
   volume of sensor data over the companies' Internet access, which is
   typically limited to a few hundred Mbit/s.  While large networking
   companies can simply upgrade their infrastructure, most industrial
   companies rely on traditional ISPs for their Internet access.
   Higher access speeds are hence tied to higher costs and, above all,
   subject to the ISPs' supply, and consequently not always available.
   A major challenge is thus to devise a methodology that is able to
   handle such amounts of data over limited access links.

   Another aspect is that business data leaving the premises and
   control of the company comes with security concerns, as sensitive
   information or valuable business secrets might be contained in it.
   Typical security measures such as encrypting the data make in-
   network computing techniques hard to apply, as they typically work
   on unencrypted data.  Adding security to in-network computing
   approaches, either by adding functionality for handling encrypted
   data or by devising general security measures, is thus a promising
   field for research, which we describe in more detail in Section 7.

4.3.3.  Existing Solutions

   Current approaches for handling such large amounts of information
   typically build upon stream processing frameworks such as Apache
   Flink.  While they allow for handling large volume applications,
   they are tied to powerful server machines, and upscaling the
   information density also requires a corresponding upscaling of the
   compute infrastructure.

4.3.4.  Opportunities and Research Questions for COIN

   There are at least two concepts that might be suitable for reducing
   the amount of transmitted data in a meaningful way using in-network
   computing:

   1.  filtering out redundant or unnecessary data

   2.  aggregating data by applying pre-processing steps within the
       network

   Both concepts require detailed knowledge about the monitoring
   infrastructure at the factories and the purpose of the transmitted
   data.

4.3.4.1.  Traffic Filtering

   Sensors are often set up redundantly, i.e., part of the collected
   data might also be redundant.  Moreover, they are often hard to
   configure or not configurable at all, which is why their resolution
   or sampling frequency is often higher than required.  Consequently,
   it is likely that more data is transmitted than is needed or
   desired.

   A trivial idea for reducing the amount of data is to filter out
   redundant or undesired data before it leaves the premises, using
   simple traffic filters that are deployed in the on-premise network.
   There are different approaches to how this can be tackled.  A first
   step would be to scale down the available sensor data to the data
   rate that is needed.  For example, if a sensor transmits with a
   frequency of 5 kHz, but the control entity only needs 1 kHz, only
   every fifth packet containing sensor data is let through.
   Alternatively, sensor data could be filtered down to a lower
   frequency while the sensor value is in an uninteresting range, but
   let through with higher resolution once the sensor value range
   becomes interesting.  It is important that end-hosts are informed
   about the filtering so that they can distinguish between data loss
   and data filtered out on purpose.
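   The filtering behavior described above can be sketched in a few
   lines.  The following Python model is purely illustrative and not
   part of this draft: the packet format, the "interesting" value
   range, and the marking scheme (counting intentionally dropped
   predecessors so end-hosts can distinguish filtering from loss) are
   all assumptions chosen for the example.

```python
# Illustrative model of an on-premise traffic filter: pass every Nth
# packet (e.g., 5 kHz -> 1 kHz), pass everything while the value is in
# an "interesting" range, and mark kept packets with the number of
# intentionally dropped predecessors.

from dataclasses import dataclass

@dataclass
class SensorPacket:
    seq: int      # sequence number set by the sensor
    value: float  # sensor reading

class DownsamplingFilter:
    def __init__(self, keep_every: int, lo: float, hi: float):
        self.keep_every = keep_every   # e.g., 5 for 5 kHz -> 1 kHz
        self.lo, self.hi = lo, hi      # "interesting" value range
        self.dropped_since_last = 0

    def process(self, pkt: SensorPacket):
        """Return (packet, dropped_count) if forwarded, else (None, None)."""
        interesting = self.lo <= pkt.value <= self.hi
        if interesting or pkt.seq % self.keep_every == 0:
            dropped, self.dropped_since_last = self.dropped_since_last, 0
            return pkt, dropped        # forward, signaling purposeful drops
        self.dropped_since_last += 1
        return None, None              # filtered out on purpose

f = DownsamplingFilter(keep_every=5, lo=90.0, hi=110.0)
out = [f.process(SensorPacket(seq=i, value=50.0)) for i in range(10)]
kept = [p for p, _ in out if p is not None]
# Packets with seq 0 and 5 pass; the four drops before seq 5 are
# signaled explicitly alongside the kept packet.
```

   A hardware realization on a PND would look quite different, but the
   sketch shows the essential point: the filter must carry enough
   state to tell receivers how much data was removed on purpose.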
   Opportunities:

   *  Semantic packet and stream filtering at line-rate

   *  Filtering based on packet header and payload, as well as multi-
      packet information

   Challenges/Research Questions:

   *  How can traffic filters be designed?

   *  How can traffic filters be coordinated and deployed?

   *  How can traffic filters be changed dynamically?

   *  How can traffic filtering be signaled to the end-hosts?

4.3.4.2.  In-Network (Pre-)Processing

   There are manifold computations that can be performed on the sensor
   data in the cloud.  Some of them are very complex or need the
   complete sensor data during the computation, but there are also
   simpler operations that can be done on subsets of the overall
   dataset or earlier on the communication path, as soon as all
   required data is available.  One example is finding the maximum of
   all sensor values, which can be computed either iteratively at each
   intermediate hop or at the first hop where all data is available.

   Using expert knowledge about the exact computation steps and the
   concrete transmission path of the sensor data, simple computation
   steps can be deployed in the on-premise network to reduce the
   overall data volume and potentially speed up the processing time in
   the cloud.

   Related work has already shown that in-network aggregation can help
   to improve the performance of distributed ML applications [SAPIO].
   Investigating the applicability of stream data processing
   techniques to programmable networking devices is also interesting,
   because sensor data is usually streamed.  In this context, the
   following opportunities and research questions are of interest:

   Opportunities:

   *  Semantic data aggregation

   *  Computation across multiple packets and leveraging packet
      payload

   Challenges/Research Questions:

   *  Which (pre-)processing steps can be deployed in the network?

      -  How complex can they become?
   *  How can applications incorporate the (pre-)processing steps?

   *  How can the programming of the techniques be streamlined?

4.3.5.  Requirements

   Req 4.3.1: Filters or preprocessors MUST conform to application-
   level syntax and semantics.

   Req 4.3.2: Filters or preprocessors MAY leverage packet header and
   payload information.

   Req 4.3.3: Filters or preprocessors SHOULD be reconfigurable at
   run-time.

4.4.  Industrial Safety

4.4.1.  Description

   Despite increasing automation in production processes, human
   workers are still often necessary.  Consequently, safety measures
   have a high priority to ensure that no human life is endangered.
   In traditional factories, the regions of contact between humans and
   machines are well-defined and interactions are simple.  Simple
   safety measures like emergency switches at the working positions
   are enough to provide a decent level of safety.

   Modern factories are characterized by increasingly dynamic and
   complex environments with new interaction scenarios between humans
   and robots.  Robots can either directly assist humans or perform
   tasks autonomously.  The intersection between the working areas of
   humans and robots grows, and it is harder for human workers to
   fully observe the complete environment.

   Additional safety measures are essential to prevent accidents and
   to support humans in observing the environment.  The increased
   availability of sensor data and the detailed monitoring of the
   factories can help to build additional safety measures if the
   corresponding data is collected early and at the correct position.

4.4.2.  Characterization

   Industrial safety measures are typically hardware solutions because
   they have to pass rigorous testing before they are certified and
   deployment-ready.  Standard measures include safety switches and
   light barriers.
   Additionally, the working area can be explicitly divided into
   'contact' and 'safe' areas, indicating where workers have to watch
   out for interactions with machinery.

   These measures are static solutions, potentially relying on
   specialized hardware, and are challenged by the increased dynamics
   of modern factories, where the factory configuration can be changed
   on demand.  Software solutions offer higher flexibility as they can
   dynamically take into account new information gathered by the
   sensor systems, but in most cases they cannot give guaranteed
   safety.  Yet, it is worthwhile to investigate whether such
   solutions can introduce additional safety measures.

4.4.3.  Existing Solutions

   Note: Will be added later.

4.4.4.  Opportunities and Research Questions for COIN

   Software-based solutions can take advantage of the large amount of
   available sensor data.  Different safety indicators within the
   production hall can be combined within the network so that
   programmable networking devices can give early responses if a
   potential safety breach is detected.  A rather simple possibility
   could be to track the positions of human workers and robots.
   Whenever a robot gets too close to a human in a non-working area,
   or if a human enters a defined safety zone, robots are stopped to
   prevent injuries.  More advanced concepts could also include image
   data or combine arbitrary sensor data.

   Opportunities:

   *  Early emergency reactions based on diverse sensor feedback

   Research Questions:

   *  Which additional safety measures can be provided?

      -  Do these measures actually improve safety?

   *  Which sensor information can be combined, and how?

4.4.5.  Requirements

   Req 4.4.1: COIN-based safety measures MUST NOT degrade existing
   safety measures.

   Req 4.4.2: COIN-based safety measures MAY enhance existing safety
   measures.

5.  Immersive Experiences

5.1.  Mobile Application Offloading

5.1.1.  Description

   This scenario can be exemplified by an immersive gaming
   application, where a single user plays a game using a VR headset.
   The headset hosts the functions that "display" frames to the user,
   as well as the functions for VR content processing and frame
   rendering, which combine the content with input data received from
   sensors in the VR headset.  Once this application is partitioned
   into micro-services and deployed in an app-centric execution
   environment, only the "display" micro-service is left in the
   headset, while the compute-intensive real-time VR content
   processing micro-services can be offloaded to a nearby resource-
   rich home PC for better execution (faster, and possibly generating
   a higher resolution).

5.1.2.  Characterization

   Partitioning an application into micro-services allows for denoting
   the application as a collection of functions for flexible
   composition and distributed execution.  For example, most functions
   of a mobile application can be categorized into one of three
   function groups: "receiving", "processing", and "displaying".

   Any device may realize one or more of the micro-services of an
   application and expose them to the execution environment.  When the
   micro-service sequence is executed on a single device, the outcome
   is what you see today as applications running on mobile devices.
   However, the execution of functions may be moved to other (e.g.,
   more suitable) devices which have exposed the corresponding micro-
   services to the environment.  The result of the latter is flexible
   mobile function offloading, for possible reduction of power
   consumption (e.g., offloading CPU-intensive processing functions to
   a remote server) or for improved end user experience (e.g., moving
   display functions to a nearby smart TV).
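   The function-group partitioning described above can be sketched as
   follows.  This is a hypothetical illustration, not an API from
   [APPCENTRES]: the device names, the registry structure, and the
   placement policy are assumptions chosen to show how a "processing"
   micro-service might be dynamically placed on a more suitable device
   while "receiving" and "displaying" stay on the headset.

```python
# Hypothetical sketch: an application as a chain of
# "receive" -> "process" -> "display" micro-services, each of which
# may be placed on any device that exposes it.

from typing import Callable, Dict

# Registry of devices that have exposed a given micro-service to the
# execution environment (illustrative names and behaviors).
deployments: Dict[str, Dict[str, Callable[[str], str]]] = {
    "receive": {"headset": lambda d: f"sensed({d})"},
    "process": {
        "headset": lambda d: f"lowres-render({d})",
        "home-pc": lambda d: f"hires-render({d})",
    },
    "display": {"headset": lambda d: f"shown({d})"},
}

def place(service: str, preferred: str) -> Callable[[str], str]:
    """Pick the preferred device if it exposes the service; otherwise
    fall back to any available instance."""
    instances = deployments[service]
    return instances.get(preferred, next(iter(instances.values())))

def run(data: str, offload_to: str = "headset") -> str:
    # The "process" step is the compute-intensive candidate for
    # offloading; receive and display remain pinned to the headset.
    pipeline = [place("receive", "headset"),
                place("process", offload_to),
                place("display", "headset")]
    for step in pipeline:
        data = step(data)
    return data

local = run("frame0")                 # everything on the headset
offloaded = run("frame0", "home-pc")  # processing offloaded to the PC
```

   The same chain runs unchanged in both cases; only the placement
   decision differs, which is the essence of the offloading scenario.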
666 Figure 3 shows one realization of the above scenario, where a 'DPR 667 app' is running on a mobile device (containing the partitioned 668 Display(D), Process(P) and Receive(R) micro-services) over an SDN 669 network. The packaged applications are made available through a 670 localized 'playstore server'. The application installation is 671 realized as a 'service deployment' process, combining the local app 672 installation with a distributed micro-service deployment (and 673 orchestration) on the most suitable AppCentres ('processing server'). 675 +----------+ Processing Server 676 Mobile | +------+ | 677 +---------+ | | P | | 678 | App | | +------+ | 679 | +-----+ | | +------+ | 680 | |D|P|R| | | | SR | | 681 | +-----+ | | +------+ | Internet 682 | +-----+ | +----------+ / 683 | | SR | | | / 684 | +-----+ | +----------+ +------+ 685 +---------+ /|SDN Switch|_____|Border| 686 +-------+ / +----------+ | SR | 687 | 5GAN |/ | +------+ 688 +-------+ | 689 +---------+ | 690 |+-------+| +----------+ 691 ||Display|| /|SDN Switch| 692 |+-------+| +-------+ / +----------+ 693 |+-------+| /|WIFI AP|/ 694 || D || / +-------+ +--+ 695 |+-------+|/ |SR| 696 |+-------+| /+--+ 697 || SR || +---------+ 698 |+-------+| |Playstore| 699 +---------+ | Server | 700 TV +---------+ 702 Figure 3: Application Function Offloading Example. 704 Such localized deployment could, for instance, be provided by a 705 visiting site, such as a hotel or a theme park. Once the 706 'processing' micro-service is terminated on the mobile device, the 707 'service routing' (SR) elements in the network route requests to the 708 previously deployed 'processing' micro-service running on the 709 'processing server' AppCentre over an existing SDN network. As an 710 extension to the above scenario, we can also envision that content 711 from one processing micro-service may be distributed to more than one 712 display micro-service, e.g., for multi/many-viewing scenarios. 714 5.1.3. 
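The service routing step above can be sketched as follows. This is a hedged illustration only, with hypothetical class and method names that are not taken from [APPCENTRES] or any SR implementation: requests are addressed to a service name rather than a host, and each SR element forwards them to the instance currently registered for that name.

```python
# Illustrative sketch of the 'service routing' (SR) behaviour shown in
# Figure 3.  Class and method names are hypothetical, not taken from
# [APPCENTRES] or any existing SR implementation.

class ServiceRouter:
    def __init__(self):
        # Maps a service name to the location of its current instance.
        self.table = {}

    def register(self, service, location):
        # Called when a micro-service instance is (re)deployed; a later
        # registration replaces the earlier one.
        self.table[service] = location

    def route(self, service, request):
        # Requests are addressed to a service name, not a host; the SR
        # element forwards them to the currently registered instance.
        if service not in self.table:
            raise LookupError("no instance registered for " + service)
        return (self.table[service], request)

sr = ServiceRouter()
sr.register("process", "mobile-device")
# Offloading: the 'processing' micro-service terminates on the mobile
# device and the deployment on the processing server registers instead.
sr.register("process", "processing-server")
destination, request = sr.route("process", "frame-request")
```

The design choice this illustrates is indirection by service name: clients keep sending to "process" and only the SR registration changes when the instance moves, which is why instance relocation is transparent to the headset.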
Existing Solutions 716 NOTE: material on solutions like ETSI MEC will be added here later 718 5.1.4. Opportunities and Research Questions for COIN 720 Opportunities: 722 * execution of app-level micro-services (service deployment in 723 [APPCENTRES]) 725 * supporting service-level routing of requests (service routing in 726 [APPCENTRES]) 728 * supporting the constraint-based selection of a specific service 729 instance over others (constraint-based routing in [APPCENTRES]) 731 Research Questions: 733 * How to combine service-level orchestration frameworks with app- 734 level packaging methods? 736 * How to reduce latencies involved in micro-service interactions 737 where service instance locations may change quickly? 739 * How to signal constraints used for routing in a scalable manner? 741 * How to provide constraint-based routing decisions at packet 742 forwarding speed? 744 * What in-network capabilities may support the execution of micro- 745 services? 747 5.1.5. Requirements 749 Req 5.1.1: Any app-centric execution environment MUST provide means 750 for routing of service requests between resources in the distributed 751 environment. 753 Req 5.1.2: Any app-centric execution environment MUST provide means 754 for dynamically choosing the best possible micro-service sequence 755 (i.e., chaining of micro-services) for a given application 756 experience. Means for discovering suitable micro-services SHOULD be 757 provided. 759 Req 5.1.3: Any app-centric execution environment MUST provide means 760 for pinning the execution of a specific micro-service to a specific 761 resource instance in the distributed environment. 763 Req 5.1.4: Any app-centric execution environment SHOULD provide means 764 for packaging micro-services for deployment in distributed networked 765 computing environments. The packaging MAY include any constraints 766 regarding the deployment of service instances in specific network 767 locations or compute resources. 
Such packaging SHOULD conform to 768 existing application deployment models, such as mobile application 769 packaging, TOSCA orchestration templates, tarballs, or combinations 770 thereof. 772 Req 5.1.5: Any app-centric execution environment MUST provide means 773 for real-time synchronization and consistency of distributed 774 application states. 776 5.2. Extended Reality (XR) 778 5.2.1. Description 780 Virtual Reality (VR) and Augmented Reality (AR), taken together as 781 Extended Reality (XR), are at the center of a number of advances in 782 interactive technologies. While initially associated with gaming and 783 entertainment, XR applications now include remote diagnosis, 784 maintenance, telemedicine, manufacturing and assembly, autonomous 785 systems, smart cities, and immersive classrooms. 787 5.2.2. Characterization 789 XR is one example of the Multisource-Multidestination Problem that 790 combines video, haptics, and tactile experiences in interactive or 791 networked multi-party and social interactions. Thus, XR is difficult 792 to deliver with a client-server cloud-based solution as it requires a 793 combination of stream synchronization, low delays and delay 794 variations, means to recover from losses, and optimized caching and 795 rendering as close as possible to the user at the network edge. Many 796 XR services that involve video holography and haptics require very 797 low delay or generate large amounts of data, both requiring a careful 798 look at data filtering and reduction, functional distribution and 799 partitioning. Hence, XR uses recent advances in in-network 800 programming, distributed networks, orchestration and resource 801 discovery to support its advanced immersive requirements. It is 802 important to note that the use of in-network computing for XR does 803 not imply a specific protocol but targets an architecture enabling 804 the deployment of the services. 
This includes computing in the nodes 805 from content source to destination. 807 5.2.3. Existing Solutions 809 Related XR or XR-enabling solutions using in-network computation or 810 related technologies include: 812 * Enabling Scalable Edge Video Analytics with Computing-In-Network 813 (Junchen Jiang of the University of Chicago): this work brings 814 periodic re-profiling to adapt the video pipeline to the dynamic 815 video content that is a characteristic of XR. The implication is 816 that real-time video analytics needs tight network-app 817 coupling. 819 * VR journalism, interactive VR movies and meetings in cyberspace 820 (many projects at PBS, the MIT interactive documentary lab, Huawei 821 research - references to be provided): typical VR is not made for 822 multiparty use, and these applications require a tight coupling of the 823 local and remote rendering and data capture, and combinations of 824 cloud (for more static information) and edge (for dynamic 825 content). 827 * Local rendering of holographic content using near-field 828 computation (heritage from advanced cockpit interactions - looking 829 for non-military papers): a lot has been said recently about the 830 large amounts of data necessary to transmit and use holographic 831 imagery in communications. Transmitting the near-field 832 information and rendering the image locally allows the 833 data rates to be reduced by one to two orders of magnitude. 835 * ICE-AR [ICE] project at UCLA (Jeff Burke): while this project is a 836 showcase of the NDN network architecture, it also uses a lot of 837 edge-cloud capabilities, for example for inter-server games and 838 advanced video applications. 840 5.2.4. Opportunities and Research Questions for COIN 842 Opportunities: 844 In-network computing for XR profits from the heritage of extensive 845 research in the past years on Information Centric Networking, Machine 846 Learning, network telemetry, imaging and IoT as well as distributed 847 security and in-network coding. 
The opportunities include: 849 * Reduced latency: the physical distance between the content cloud 850 and the users must be short enough to limit the propagation delay 851 to the 20 ms usually cited for XR applications; the use of local 852 CPU and IoT devices for region of interest (RoI) detection and 853 dynamic rendering may enable this. 855 * Video transmission: better transcoding and use of advanced 856 context-based compression algorithms, pre-fetching and pre-caching 857 and movement prediction not only in the cloud. 859 * Monitoring: telemetry is a major research topic for COIN and it 860 enables monitoring and distributing the XR services. 862 * Network access: move some networking functions from the kernel space 863 into the user space to enable the deployment of stream-specific 864 algorithms for congestion control and application-based load 865 balancing based on machine learning and user data patterns. 867 * Functional decomposition: functional decomposition, localization 868 and discovery of computing and storage resources in the network. 869 This is not only about finding the best resources but also about qualifying those 870 resources in terms of reliability, especially for mission-critical 871 services in XR (medicine for example). This could include 872 intelligence services. 874 Research Questions: 876 There is a need for more research on resource allocation problems at the 877 edge to enable interactive operation and quality of experience in VR. 878 These include multi-variate and heterogeneous goal optimization 879 problems requiring advanced analysis. Image rendering and video 880 processing in XR leverage different HW capabilities, combining 881 CPU and GPU. Research questions include: 883 * Are current programmable network entities sufficient to provide 884 the speed required to execute complex filtering 885 operations that include metadata analysis for complex and dynamic 886 scene rendering? 
888 * How can the interoperability of CPU/GPU be optimized to combine 889 low-level packet filtering with the higher-layer processors needed 890 for image processing and haptics? 892 * Can joint learning algorithms across both data center 893 and edge computers be used to create optimal functionality 894 allocation and semi-permanent datasets and 895 analytics for usage trending, resulting in better localization of 896 XR functions? 898 * Can COIN improve the dynamic distribution of control, forwarding 899 and storage resources and related usage models in XR? 901 5.2.5. Requirements 903 XR requirements include the need to provide real-time interactivity 904 for increasingly mobile immersive applications with 905 tactile and time-sensitive data and high bandwidth for high- 906 resolution images and local rendering for 3D images and holograms. 907 Since XR deals with personal information and potentially protected 908 content, XR must also provide a secure environment and ensure user 909 privacy. Additionally, the sheer amount of data needed for and 910 generated by the XR applications can benefit from recent trend-analysis 911 mechanisms, including machine learning, to find these trends and 912 reduce the size of the data sets. The requirements can be summarized 913 as: 915 Req 5.2.1: Allow joint collaboration. 917 Req 5.2.2: Provide multi-views. 919 Req 5.2.3: Include extra streams dynamically for data-intensive 920 services, manufacturing and industrial processes. 922 Req 5.2.4: Enable multistream, multidevice, multidestination 923 applications. 925 Req 5.2.5: Use new Internet Architectures at the edge for improved 926 performance and performance management. 928 Req 5.2.6: Integrate with holography, 3D displays and image rendering 929 processors. 931 Req 5.2.7: Allow the use of multicast distribution and processing as 932 well as peer-to-peer distribution in bandwidth- and capacity- 933 constrained environments. 
935 Req 5.2.8: Evaluate the integration of local and fog caching with cloud- 936 based pre-rendering. 938 Req 5.2.9: Evaluate ML-based congestion control to manage XR session 939 quality of service and to determine how to prioritize data. 941 Req 5.2.10: Consider higher-layer protocol optimization to reduce 942 latency, especially in data-intensive applications at the edge. 944 Req 5.2.11: Provide trust, including blockchains and smart contracts, 945 to enable secure community building across domains. 947 Req 5.2.12: Support nomadicity and mobility (link to mobile edge). 949 Req 5.2.13: Use 5G slicing to create independent session-driven 950 processing/rendering. 952 Req 5.2.14: Provide performance optimization by data reduction, 953 tunneling, session virtualization and loss protection. 955 Req 5.2.15: Use AI/ML for trend analysis and data reduction when 956 appropriate. 958 5.3. Personalised and interactive performing arts 960 5.3.1. Description 962 This use case covers live productions of the performing arts where 963 the performers and audience are in different physical locations. The 964 performance is conveyed to the audience through multiple networked 965 streams which may be tailored to the requirements of individual 966 audience members, and the performers receive live feedback from the 967 audience. 969 There are two main aspects: i) to emulate as closely as possible the 970 experience of live performances where the performers and audience are 971 co-located in the same physical space, such as a theatre; and ii) to 972 enhance traditional physical performances with features such as 973 personalisation of the experience according to the preferences or 974 needs of the audience members. 
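The stream tailoring mentioned above can be sketched as follows. This is an illustrative assumption only: the function, field, and preference names are hypothetical, and a real personalisation function would operate on media streams rather than the stream descriptors used here.

```python
# Hypothetical sketch of per-audience-member stream tailoring as
# described above; function, field, and preference names are
# illustrative, not from this draft.

def personalise(streams, preferences):
    # Select the video stream matching the chosen viewpoint; fall back
    # to a default viewpoint when the requested one is not produced.
    viewpoint = preferences.get("viewpoint", "stage-front")
    selected = [s for s in streams if s["viewpoint"] == viewpoint]
    if not selected:
        selected = [s for s in streams if s["viewpoint"] == "stage-front"]
    # Augment with accessibility metadata (e.g., subtitles) and apply
    # filters such as removing flashing images.
    out = []
    for s in selected:
        s = dict(s)  # do not mutate the shared stream descriptor
        if preferences.get("subtitles"):
            s["subtitles"] = preferences["subtitles"]
        if preferences.get("no_flashing"):
            s["filters"] = s.get("filters", []) + ["remove-flashing"]
        out.append(s)
    return out

streams = [{"viewpoint": "stage-front"}, {"viewpoint": "behind-performers"}]
tailored = personalise(streams, {"viewpoint": "behind-performers",
                                 "subtitles": "en", "no_flashing": True})
```

Running such a per-member function in the network rather than at the performers' premises is what avoids transmitting one fully personalised stream per viewer from the source.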
976 Examples of personalisation include: 978 * viewpoint selection, such as choosing a specific seat in the 979 theatre or, more advanced, positioning of the audience member's 980 viewpoint outside of the traditional seating - amongst, above or 981 behind the performers (but within some limits which may be imposed 982 by the performers or the director for artistic reasons); 984 * augmentation of the performance with subtitles, audio-description, 985 actor-tagging, language translation, advertisements/product- 986 placement, or other enhancements/filters to make the performance 987 accessible to disabled audience members (removal of flashing 988 images for epileptic audience members, alternative colour schemes for colour-blind 989 audience members, etc.). 991 5.3.2. Characterization 993 There are several chained functional entities which are candidates 994 for in-network processing. 996 * Performer aggregation and editing functions 998 * Distribution and encoding functions 1000 * Personalisation functions 1002 - to select which of the existing streams should be forwarded to 1003 the audience member 1005 - to augment streams with additional metadata such as subtitles 1007 - to create new streams after processing existing ones: to 1008 interpolate between camera angles to create a new viewpoint or 1009 to render point clouds from the audience member's chosen 1010 perspective 1012 - to undertake remote rendering according to viewer position, 1013 e.g. creation of VR headset display streams according to 1014 audience head position - when this processing has been 1015 offloaded from the viewer's end-system to the in-network 1016 function due to limited processing power in the end-system, or 1017 to limited network bandwidth to receive all of the individual 1018 streams to be processed. 
1020 * Audience feedback sensor processing functions 1022 * Audience feedback aggregation functions 1024 These are candidates for in-network processing rather than being 1025 located in end-systems (at the performers' site, the audience 1026 members' premises or in a central cloud location) for several 1027 reasons: 1029 * personalisation of the performance to audience preferences and 1030 requirements makes it infeasible for this to be done at the 1031 performers' premises: it would require large amounts of processing 1032 power to process individual personalised streams as well as large 1033 amounts of network bandwidth to transmit personalised streams to 1034 each viewer. 1036 * rendering of VR headset content to follow viewer head movements 1037 has an upper bound on lag to maintain viewer QoE. 1039 * viewer devices may not have the processing power to undertake the 1040 personalisation, or the viewers' network may not have the capacity 1041 to receive all of the constituent streams to undertake the 1042 personalisation functions. 1044 * there are strict latency requirements for the live and interactive 1045 aspects that require that the deviation from the direct path between 1046 performers and audience be minimised, reducing the opportunity to 1047 leverage large-scale processing capabilities at centralised data 1048 centres. 1050 5.3.3. Existing solutions 1052 To be added. 1054 5.3.4. Opportunities and Research Questions for COIN 1056 Opportunities: 1058 * See Characterization. 1060 Research Questions: 1062 * Where should the aggregation, encoding and personalisation 1063 functions be located? Close to the performers or close to the 1064 audience members? 1066 * How far away from the direct network path from performer to 1067 audience can they be located, considering the latency implications 1068 of path-stretch and the availability of processing capacity? 
1070 * How to achieve network synchronisation across multiple streams to 1071 allow for merging, audio-video interpolation and other cross- 1072 stream processing functions that require time synchronisation for 1073 the integrity of the output? 1075 5.3.5. Requirements 1077 The chain of functions and propagation over the interconnecting 1078 network segments for performance capture, aggregation, distribution, 1079 personalisation, consumption, capture of audience response, feedback 1080 processing, aggregation, rendering should be achieved within an upper 1081 bound of latency (the tolerable amount is to be defined, but in the 1082 order of 100s of ms to mimic performers perceiving audience feedback, 1083 such as laughter or other emotional responses in a theatre setting). 1085 6. Infrastructure Services 1087 6.1. Distributed AI 1089 6.1.1. Description 1091 There is a growing range of use cases demanding the realization 1092 of AI capabilities among distributed endpoints. Such demand may be 1093 driven by the need to increase overall computational power for large- 1094 scale problems. Other solutions may desire the localization of 1095 reasoning logic, e.g., for deriving attributes that better preserve 1096 the privacy of the utilized raw input data. 1098 6.1.2. Characterization 1100 Examples of large-scale AI problems include biotechnology- and 1101 astronomy-related reasoning over massive amounts of observational 1102 input data. Examples of localizing input data for privacy reasons 1103 include radar-like applications for the development of topological 1104 mapping data based on (distributed) radio measurements at base 1105 stations (and possibly end devices), while the processing within 1106 radio access networks (RAN) already constitutes a distributed AI 1107 problem to a certain extent, albeit with little flexibility in 1108 distributing the execution of the AI logic. 1110 6.1.3. 
Existing Solutions 1112 Reasoning frameworks, such as TensorFlow, may be utilized for the 1113 realization of the (distributed) AI logic, building on remote service 1114 invocation through protocols such as gRPC [GRPC] or MPI [MPI] with 1115 the intention of providing an on-chip NPU (neural processing unit)- 1116 like abstraction to the AI framework. 1118 NOTE: material on solutions like ETSI MEC and 3GPP work will be added 1119 here later 1121 6.1.4. Opportunities and Research Questions for COIN 1123 Opportunities: 1125 * supporting service-level routing of requests (service routing in 1126 [APPCENTRES]) 1128 * supporting the constraint-based selection of a specific service 1129 instance over others (constraint-based routing in [APPCENTRES]) 1131 * collective communication between multiple instances of AI services 1133 Research Questions: 1135 * Similar to those for the use case in Section 5.1 1137 * What are the communication patterns that may be supported by 1138 collective communication solutions? 1140 * How to achieve scalable multicast delivery with rapidly changing 1141 receiver sets? 1143 * What in-network capabilities may support the collective 1144 communication patterns found? 1146 * How to provide a service routing capability that supports any 1147 invocation protocol (beyond HTTP)? 1149 6.1.5. Requirements 1151 Req 6.1.1: Any app-centric execution environment MUST provide means 1152 to specify the constraints for placing (AI) execution logic in 1153 certain logical execution points (and their associated physical 1154 locations). 1156 Req 6.1.2: Any app-centric execution environment MUST provide support 1157 for app/micro-service-specific invocation protocols. 1159 6.2. Content Delivery Networks 1161 6.2.1. 
Description 1163 Delivery of content to end users often relies on Content Delivery 1164 Networks (CDNs) storing said content closer to end users for latency- 1165 reduced delivery, with DNS-based indirection being utilized to serve 1166 the request on behalf of the origin server. 1168 6.2.2. Characterization 1170 From the perspective of this draft, a CDN can be interpreted as a 1171 (network service level) application with distributed logic for 1172 distributing content from the origin server to the CDN ingress and 1173 further to the CDN replication points which ultimately serve the 1174 user-facing content requests. 1176 6.2.3. Existing Solutions 1178 NOTE: material on solutions will be added here later 1180 Studies such as those in [FCDN] have shown that content distribution 1181 at the level of named content, utilizing efficient (e.g., Layer 2) 1182 multicast for replication towards edge CDN nodes, can significantly 1183 increase the overall network and server efficiency. It also reduces 1184 indirection latency for content retrieval and reduces the required 1185 edge storage capacity, benefiting from the increased network 1186 efficiency to renew edge content more quickly in response to changing 1187 demand. 1189 6.2.4. Opportunities and Research Questions for COIN 1191 Opportunities: 1193 * supporting service-level routing of requests (service routing in 1194 [APPCENTRES]) 1196 * supporting the constraint-based selection of a specific service 1197 instance over others (constraint-based routing in [APPCENTRES]) 1199 * supporting Layer 2 capabilities for multicast (compute 1200 interconnection and collective communication in [APPCENTRES]) 1202 Research Questions: in addition to those for Section 5.1, 1204 * How to utilize L2 multicast to improve on CDN designs? How to 1205 utilize in-network capabilities in those designs? 
1207 * What forwarding methods may support the required multicast 1208 capabilities (see [FCDN])? 1210 * How could storage be traded off against frequent, multicast-based 1211 replication (see [FCDN])? 1213 * What scalability limits exist for L2 multicast capabilities? How 1214 to overcome them? 1216 6.2.5. Requirements 1218 Req 6.2.1: Any app-centric execution environment SHOULD utilize Layer 1219 2 multicast transmission capabilities for responses to concurrent 1220 service requests. 1222 6.3. CFaaS 1224 6.3.1. Description 1226 App-centric execution environments, consisting of Layer 2 connected 1227 data centres, provide the opportunity for infrastructure providers to 1228 offer compute fabric as a service (CFaaS) type offerings to application 1229 providers. Those app providers utilize the compute fabric exposed by this 1230 CFaaS offering for the purposes defined through their applications. In other words, 1231 the compute resources can be utilized to execute the desired micro- 1232 services of which the application is composed, while utilizing the 1233 inter-connection between those compute resources to do so in a 1234 distributed manner. 1236 6.3.2. Characterization 1238 We foresee those CFaaS offerings to be tenant-specific, a tenant here 1239 being defined as the provider of at least one application. For this, we 1240 foresee an interaction between the CFaaS provider and the tenant to 1241 dynamically select the appropriate resources to define the demand 1242 side of the fabric. Conversely, we also foresee the supply side of 1243 the fabric to be highly dynamic, with resources being offered to the 1244 fabric through, e.g., user-provided resources (whose supply might 1245 depend on highly context-specific supply policies) or infrastructure 1246 resources of intermittent availability such as those provided through 1247 road-side infrastructure in vehicular scenarios. 
The resulting 1248 dynamic demand-supply matching establishes a dynamic nature of the 1249 compute fabric that in turn requires trust relationships to be built 1250 dynamically between the resource provider(s) and the CFaaS provider. 1251 This also requires the communication resources to be dynamically 1252 adjusted to suitably interconnect all resources into the (tenant- 1253 specific) fabric exposed as CFaaS. 1255 6.3.3. Existing Solutions 1257 NOTE: material on solutions will be added here later 1259 6.3.4. Opportunities and Research Questions for COIN 1261 Opportunities: 1263 * supporting service-level routing of requests (service routing in 1264 [APPCENTRES]) 1266 * supporting the constraint-based selection of a specific service 1267 instance over others (constraint-based routing in [APPCENTRES]) 1269 * supporting Layer 2 capabilities for multicast (compute 1270 interconnection and collective communication in [APPCENTRES]) 1272 Research Questions: similar to those for Section 5.1; in addition: 1274 * How to convey app-specific requirements for the creation of the L2 1275 fabric? 1277 * How to dynamically integrate resources, particularly when driven 1278 by app-level requirements and changing service-specific 1279 constraints? 1281 * How to utilize in-network capabilities to aid the availability and 1282 accountability of resources? 1284 6.3.5. Requirements 1286 Req 6.3.1: Any app-specific execution environment SHOULD expose means 1287 to specify the requirements for the tenant-specific compute fabric 1288 being utilized for the app execution. 1290 Req 6.3.2: Any app-specific execution environment SHOULD allow for 1291 dynamic integration of compute resources into the compute fabric 1292 being utilized for the app execution; those resources include, but 1293 are not limited to, end-user-provided resources. 
1295 Req 6.3.3: Any app-specific execution environment MUST provide means 1296 to optimize the inter-connection of compute resources, including 1297 those dynamically added and removed during the provisioning of the 1298 tenant-specific compute fabric. 1300 Req 6.3.4: Any app-specific execution environment MUST provide means 1301 for ensuring that the availability and usage of resources are accounted for. 1303 7. Security Considerations 1305 Note: This section will need consolidation once new use cases are 1306 added to the draft. Current in-network computing approaches 1307 typically work on unencrypted plain-text data because today's 1308 networking devices usually do not have crypto capabilities. As 1309 already mentioned in Section 4.3.2, this above all poses problems 1310 when business data, potentially containing business secrets, is 1311 streamed into remote computing facilities and consequently leaves the 1312 control of the company. Insecure on-premise communication within the 1313 company and on the shop floor is also a problem as machines could be 1314 compromised from the outside. It is thus crucial to deploy security and 1315 authentication functionality for on-premise and outgoing communication, 1316 although this might interfere with in-network computing approaches. 1317 Ways to implement and combine security measures with in-network 1318 computing are described in more detail in [I-D.fink-coin-sec-priv]. 1320 8. IANA Considerations 1322 N/A 1324 9. Conclusion 1326 There are several domains that can profit from COIN. 1328 Industrial scenarios have unique sets of requirements, mostly centered 1329 around tight latency constraints combined with high bandwidth requirements. 1331 NOTE: Further aspects will be added once more use cases are added to 1332 the draft. 1334 10. List of Use Case Contributors 1336 * Ike Kunze and Klaus Wehrle have contributed the industrial use 1337 cases (Section 4). 
1339 * Dirk Trossen has contributed the following use cases: Section 5.1, 1340 Section 6.1, Section 6.2, Section 6.3. 1342 * Marie-Jose Montpetit has contributed the XR use case 1343 (Section 5.2). 1345 * David Griffin and Miguel Rio have contributed the use case on 1346 performing arts (Section 5.3). 1348 11. References 1350 11.1. Normative References 1352 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1353 Requirement Levels", BCP 14, RFC 2119, 1354 DOI 10.17487/RFC2119, March 1997, 1355 . 1357 11.2. Informative References 1359 [APPCENTRES] 1360 Trossen, D., Sarathchandra, C., and M. Boniface, "In- 1361 Network Computing for App-Centric Micro-Services", Work in 1362 Progress, Internet-Draft, draft-sarathchandra-coin- 1363 appcentres-04, 26 January 2021, . 1367 [FCDN] Al-Naday, M., Reed, M.J., Riihijarvi, J., Trossen, D., 1368 Thomos, N., and M. Al-Khalidi, "A Flexible and Efficient 1369 CDN Infrastructure without DNS Redirection or Content 1370 Reflection", . 1372 [GLEBKE] Glebke, R., Henze, M., Wehrle, K., Niemietz, P., Trauth, 1373 D., Mattfeld MBA, P., and T. Bergs, "A Case for Integrated 1374 Data Processing in Large-Scale Cyber-Physical Systems", 1375 Proceedings of the 52nd Hawaii International Conference on 1376 System Sciences, DOI 10.24251/hicss.2019.871, 2019, 1377 . 1379 [GRPC] "High performance open source universal RPC framework", 1380 . 1382 [I-D.fink-coin-sec-priv] 1383 Fink, I. and K. Wehrle, "Enhancing Security and Privacy 1384 with In-Network Computing", Work in Progress, Internet- 1385 Draft, draft-fink-coin-sec-priv-01, 8 September 2020, 1386 . 1389 [I-D.mcbride-edge-data-discovery-overview] 1390 McBride, M., Kutscher, D., Schooler, E., Bernardos, C., 1391 Lopez, D., and X. Foy, "Edge Data Discovery for COIN", 1392 Work in Progress, Internet-Draft, draft-mcbride-edge-data- 1393 discovery-overview-05, 1 November 2020, 1394 . 
1397 [ICE] Burke, J., "ICN-Enabled Secure Edge Networking with 1398 Augmented Reality: ICE-AR.", ICE-AR Presentation at 1399 NDNCOM. , 2018, . 1403 [MPI] Vishnu, A., Siegel, C., and J. Daily, "Distributed 1404 TensorFlow with MPI", 1405 . 1407 [PENNEKAMP] 1408 Pennekamp, J., Henze, M., Schmidt, S., Niemietz, P., Fey, 1409 M., Trauth, D., Bergs, T., Brecher, C., and K. Wehrle, 1410 "Dataflow Challenges in an Internet of Production: A 1411 Security & Privacy Perspective", Proceedings of the ACM 1412 Workshop on Cyber-Physical Systems Security & Privacy - 1413 CPS-SPC'19, DOI 10.1145/3338499.3357357, 2019, 1414 . 1416 [RUETH] Rueth, J., Glebke, R., Wehrle, K., Causevic, V., and S. 1417 Hirche, "Towards In-Network Industrial Feedback Control", 1418 Proceedings of the 2018 Morning Workshop on In- 1419 Network Computing, DOI 10.1145/3229591.3229592, August 1420 2018, . 1422 [SAPIO] Sapio, A., "Scaling Distributed Machine Learning with In- 1423 Network Aggregation", 2019, 1424 . 1426 [TSN] "IEEE Time-Sensitive Networking (TSN) Task Group", 1427 . 1429 [VESTIN] Vestin, J., Kassler, A., and J. Akerberg, "FastReact: In- 1430 Network Control and Caching for Industrial Control 1431 Networks using Programmable Data Planes", 2018 IEEE 23rd 1432 International Conference on Emerging Technologies and 1433 Factory Automation (ETFA), DOI 10.1109/etfa.2018.8502456, 1434 September 2018, 1435 . 1437 Authors' Addresses 1439 Ike Kunze 1440 RWTH Aachen University 1441 Ahornstr. 55 1442 D-52074 Aachen 1443 Germany 1445 Email: kunze@comsys.rwth-aachen.de 1447 Klaus Wehrle 1448 RWTH Aachen University 1449 Ahornstr. 55 1450 D-52074 Aachen 1451 Germany 1453 Email: wehrle@comsys.rwth-aachen.de 1454 Dirk Trossen 1455 Huawei Technologies Duesseldorf GmbH 1456 Riesstr. 25C 1457 D-80992 Munich 1458 Germany 1460 Email: Dirk.Trossen@Huawei.com 1462 Marie-Jose Montpetit 1463 Concordia University 1464 Montreal 1465 Canada 1467 Email: marie@mjmontpetit.com