Internet Engineering Task Force                         E. Grossman, Ed.
Internet-Draft                                                     DOLBY
Intended status: Informational                                C. Gunther
Expires: October 5, 2017                                          HARMAN
                                                              P. Thubert
                                                            P. Wetterwald
                                                                   CISCO
                                                               J. Raymond
                                                            HYDRO-QUEBEC
                                                              J. Korhonen
                                                                 BROADCOM
                                                                Y. Kaneko
                                                                  Toshiba
                                                                   S. Das
                                          Applied Communication Sciences
                                                                   Y. Zha
                                                                   HUAWEI
                                                                 B. Varga
                                                                J. Farkas
                                                                 Ericsson
                                                                 F. Goetz
                                                               J. Schmitt
                                                                  Siemens
                                                            X. Vilajosana
                                                             Worldsensing
                                                              T. Mahmoodi
                                                    King's College London
                                                                S. Spirou
                                                         Intracom Telecom
                                                             P. Vizarreta
                                      Technical University of Munich, TUM
                                                            April 3, 2017

                   Deterministic Networking Use Cases
                      draft-ietf-detnet-use-cases-12

Abstract

   This draft documents requirements in several diverse industries to
   establish multi-hop paths for characterized flows with deterministic
   properties.  In this context "deterministic" implies that such
   streams provide guaranteed bandwidth and latency, can be established
   from either a Layer 2 or Layer 3 (IP) interface, and can co-exist on
   an IP network with best-effort traffic.

   Additional requirements include optional redundant paths, very high
   reliability paths, time synchronization, and clock distribution.

   Industries considered include wireless for industrial applications,
   professional audio, electrical utilities, building automation
   systems, radio/mobile access networks, automotive, and gaming.

   For each use case, this document identifies the application, the
   representative solutions used today, and the new capabilities that
   an IETF DetNet solution may enable.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on October 5, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Pro Audio and Video
     2.1.  Use Case Description
       2.1.1.  Uninterrupted Stream Playback
       2.1.2.  Synchronized Stream Playback
       2.1.3.  Sound Reinforcement
       2.1.4.  Deterministic Time to Establish Streaming
       2.1.5.  Secure Transmission
         2.1.5.1.  Safety
     2.2.  Pro Audio Today
     2.3.  Pro Audio Future
       2.3.1.  Layer 3 Interconnecting Layer 2 Islands
       2.3.2.  High Reliability Stream Paths
       2.3.3.  Integration of Reserved Streams into IT Networks
       2.3.4.  Use of Unused Reservations by Best-Effort Traffic
       2.3.5.  Traffic Segregation
         2.3.5.1.  Packet Forwarding Rules, VLANs and Subnets
         2.3.5.2.  Multicast Addressing (IPv4 and IPv6)
       2.3.6.  Latency Optimization by a Central Controller
       2.3.7.  Reduced Device Cost Due To Reduced Buffer Memory
     2.4.  Pro Audio Asks
   3.  Electrical Utilities
     3.1.  Use Case Description
       3.1.1.  Transmission Use Cases
         3.1.1.1.  Protection
         3.1.1.2.  Intra-Substation Process Bus Communications
         3.1.1.3.  Wide Area Monitoring and Control Systems
         3.1.1.4.  IEC 61850 WAN engineering guidelines requirement
                   classification
       3.1.2.  Generation Use Case
         3.1.2.1.  Control of the Generated Power
         3.1.2.2.  Control of the Generation Infrastructure
       3.1.3.  Distribution Use Case
         3.1.3.1.  Fault Location Isolation and Service Restoration
                   (FLISR)
     3.2.  Electrical Utilities Today
       3.2.1.  Security Current Practices and Limitations
     3.3.  Electrical Utilities Future
       3.3.1.  Migration to Packet-Switched Network
       3.3.2.  Telecommunications Trends
         3.3.2.1.  General Telecommunications Requirements
         3.3.2.2.  Specific Network topologies of Smart Grid
                   Applications
         3.3.2.3.  Precision Time Protocol
       3.3.3.  Security Trends in Utility Networks
     3.4.  Electrical Utilities Asks
   4.  Building Automation Systems
     4.1.  Use Case Description
     4.2.  Building Automation Systems Today
       4.2.1.  BAS Architecture
       4.2.2.  BAS Deployment Model
       4.2.3.  Use Cases for Field Networks
         4.2.3.1.  Environmental Monitoring
         4.2.3.2.  Fire Detection
         4.2.3.3.  Feedback Control
       4.2.4.  Security Considerations
     4.3.  BAS Future
     4.4.  BAS Asks
   5.  Wireless for Industrial
     5.1.  Use Case Description
       5.1.1.  Network Convergence using 6TiSCH
       5.1.2.  Common Protocol Development for 6TiSCH
     5.2.  Wireless Industrial Today
     5.3.  Wireless Industrial Future
       5.3.1.  Unified Wireless Network and Management
         5.3.1.1.  PCE and 6TiSCH ARQ Retries
       5.3.2.  Schedule Management by a PCE
         5.3.2.1.  PCE Commands and 6TiSCH CoAP Requests
         5.3.2.2.  6TiSCH IP Interface
       5.3.3.  6TiSCH Security Considerations
     5.4.  Wireless Industrial Asks
   6.  Cellular Radio
     6.1.  Use Case Description
       6.1.1.  Network Architecture
       6.1.2.  Delay Constraints
       6.1.3.  Time Synchronization Constraints
       6.1.4.  Transport Loss Constraints
       6.1.5.  Security Considerations
     6.2.  Cellular Radio Networks Today
       6.2.1.  Fronthaul
       6.2.2.  Midhaul and Backhaul
     6.3.  Cellular Radio Networks Future
     6.4.  Cellular Radio Networks Asks
   7.  Industrial M2M
     7.1.  Use Case Description
     7.2.  Industrial M2M Communication Today
       7.2.1.  Transport Parameters
       7.2.2.  Stream Creation and Destruction
     7.3.  Industrial M2M Future
     7.4.  Industrial M2M Asks
   8.  Use Case Common Themes
     8.1.  Unified, standards-based network
       8.1.1.  Extensions to Ethernet
       8.1.2.  Centrally Administered
       8.1.3.  Standardized Data Flow Information Models
       8.1.4.  L2 and L3 Integration
       8.1.5.  Guaranteed End-to-End Delivery
       8.1.6.  Replacement for Multiple Proprietary Deterministic
               Networks
       8.1.7.  Mix of Deterministic and Best-Effort Traffic
       8.1.8.  Unused Reserved BW to be Available to Best Effort
               Traffic
       8.1.9.  Lower Cost, Multi-Vendor Solutions
     8.2.  Scalable Size
     8.3.  Scalable Timing Parameters and Accuracy
       8.3.1.  Bounded Latency
       8.3.2.  Low Latency
       8.3.3.  Symmetrical Path Delays
     8.4.  High Reliability and Availability
     8.5.  Security
     8.6.  Deterministic Flows
   9.  Use Cases Explicitly Out of Scope for DetNet
     9.1.  DetNet Scope Limitations
     9.2.  Internet-based Applications
       9.2.1.  Use Case Description
         9.2.1.1.  Media Content Delivery
         9.2.1.2.  Online Gaming
         9.2.1.3.  Virtual Reality
       9.2.2.  Internet-Based Applications Today
       9.2.3.  Internet-Based Applications Future
       9.2.4.  Internet-Based Applications Asks
     9.3.  Pro Audio and Video - Digital Rights Management (DRM)
     9.4.  Pro Audio and Video - Link Aggregation
   10. Acknowledgments
     10.1.  Pro Audio
     10.2.  Utility Telecom
     10.3.  Building Automation Systems
     10.4.  Wireless for Industrial
     10.5.  Cellular Radio
     10.6.  Industrial M2M
     10.7.  Internet Applications and CoMP
     10.8.  Electrical Utilities
   11. Informative References
   Authors' Addresses

1.  Introduction

   This draft presents use cases from diverse industries which have in
   common a need for deterministic streams, but which also differ
   notably in their network topologies and specific desired behavior.
   Together, they provide broad industry context for DetNet and a
   yardstick against which proposed DetNet designs can be measured (to
   what extent does a proposed design satisfy these various use
   cases?).

   For DetNet, use cases explicitly do not define requirements; the
   DetNet WG will consider the use cases, decide which elements are in
   scope for DetNet, and the results will be incorporated into future
   drafts.
   Similarly, the DetNet use case draft explicitly does not suggest
   any specific design, architecture or protocols, which will be
   topics of future drafts.

   We present for each use case the answers to the following
   questions:

   o  What is the use case?

   o  How is it addressed today?

   o  How would you like it to be addressed in the future?

   o  What do you want the IETF to deliver?

   The level of detail in each use case should be sufficient to
   express the relevant elements of the use case, but not more.

   At the end we consider the use cases collectively, and examine the
   most significant goals they have in common.

2.  Pro Audio and Video

2.1.  Use Case Description

   The professional audio and video industry ("ProAV") includes:

   o  Music and film content creation

   o  Broadcast

   o  Cinema

   o  Live sound

   o  Public address, media and emergency systems at large venues
      (airports, stadiums, churches, theme parks).

   These industries have already transitioned audio and video signals
   from analog to digital.  However, the digital interconnect systems
   remain primarily point-to-point with a single (or small number of)
   signals per link, interconnected with purpose-built hardware.

   These industries are now transitioning to packet-based
   infrastructure to reduce cost, increase routing flexibility, and
   integrate with existing IT infrastructure.

   Today ProAV applications have no way to establish deterministic
   streams from a standards-based Layer 3 (IP) interface, which is a
   fundamental limitation to the use cases described here.  Today
   deterministic streams can be created within standards-based Layer 2
   LANs (e.g., using IEEE 802.1 AVB); however, these are not routable
   via IP and thus are not effective for distribution over wider areas
   (for example broadcast events that span wide geographical areas).

   It would be highly desirable if such streams could be routed over
   the open Internet; however, solutions with more limited scope (e.g.
   enterprise networks) would still provide a substantial improvement.

   The following sections describe specific ProAV use cases.

2.1.1.  Uninterrupted Stream Playback

   Transmitting audio and video streams for live playback is unlike
   common file transfer because uninterrupted stream playback in the
   presence of network errors cannot be achieved by re-trying the
   transmission; by the time the missing or corrupt packet has been
   identified it is too late to execute a re-try operation.  Buffering
   can be used to provide enough delay to allow time for one or more
   retries; however, this is not an effective solution in applications
   where large delays (latencies) are not acceptable (as discussed
   below).

   Streams with guaranteed bandwidth can eliminate congestion on the
   network as a cause of transmission errors that would lead to
   playback interruption.  Use of redundant paths can further mitigate
   transmission errors to provide greater stream reliability.

2.1.2.  Synchronized Stream Playback

   Latency in this context is the time between when a signal is
   initially sent over a stream and when it is received.  A common
   example in ProAV is time-synchronizing audio and video when they
   take separate paths through the playback system.  In this case the
   latency of both the audio and video streams must be bounded and
   consistent if the sound is to remain matched to the movement in the
   video.  A common tolerance for audio/video sync is one NTSC video
   frame (about 33ms), and to maintain the audience's perception of
   correct lip sync the latency needs to be consistent within some
   reasonable tolerance, for example 10%.

   A common architecture for synchronizing multiple streams that have
   different paths through the network (and thus potentially different
   latencies) is to enable measurement of the latency of each path,
   and have the data sinks (for example speakers) delay (buffer) all
   packets on all but the slowest path.  Each packet of each stream is
   assigned a presentation time which is based on the longest required
   delay.  This implies that all sinks must maintain a common time
   reference of sufficient accuracy, which can be achieved by any of
   various techniques.

   This type of architecture is commonly implemented using a central
   controller that determines path delays and arbitrates buffering
   delays.
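   As a concrete (hypothetical) illustration of this architecture, the
   following minimal sketch, in Python, shows how such a controller
   might derive per-sink buffering delays from measured path
   latencies; the names and values are invented for illustration and
   are not part of any ProAV standard.

      # Minimal sketch: sinks buffer the difference between the
      # worst-case path latency and their own, so that all sinks
      # present each packet at the same presentation time.

      def plan_buffering(path_latency_ms):
          """path_latency_ms: dict of sink name -> measured one-way
          latency in milliseconds."""
          worst = max(path_latency_ms.values())
          return {sink: worst - lat
                  for sink, lat in path_latency_ms.items()}

      latencies = {"speaker-L": 2.0, "speaker-R": 2.5,
                   "video-wall": 18.0}
      print(plan_buffering(latencies))
      # {'speaker-L': 16.0, 'speaker-R': 15.5, 'video-wall': 0.0}
      # presentation time = send time + worst-case latency (18.0 ms)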
2.1.3.  Sound Reinforcement

   Consider the latency (delay) from when a person speaks into a
   microphone to when their voice emerges from the speaker.  If this
   delay is longer than about 10-15 milliseconds it is noticeable and
   can make a sound reinforcement system unusable (see slide 6 of
   [SRP_LATENCY]).  (If you have ever tried to speak in the presence
   of a delayed echo of your voice, you will recognize this effect.)

   Note that the 15ms latency bound includes all parts of the signal
   path, not just the network, so the network latency must be
   significantly less than 15ms.

   In some cases local performers must perform in synchrony with a
   remote broadcast.  In such cases the latencies of the broadcast
   stream and the local performer must be adjusted to match each
   other, with a worst case of one video frame (33ms for NTSC video).

   In cases where audio phase is a consideration, for example beam-
   forming using multiple speakers, latency requirements can be in the
   10 microsecond range (1 audio sample at 96kHz).

2.1.4.  Deterministic Time to Establish Streaming

   Note: The WG has decided that guidelines for deterministic time to
   establish stream startup are not within the scope of DetNet.  If
   bounded timing of establishing or re-establishing streams is
   required in a given use case, it is up to the application/system to
   achieve this.  (The supporting text from this section has been
   removed as of draft 12.)

2.1.5.  Secure Transmission

2.1.5.1.  Safety

   Professional audio systems can include amplifiers that are capable
   of generating hundreds or thousands of watts of audio power which,
   if used incorrectly, can cause hearing damage to those in the
   vicinity.  Apart from the usual care required by the system
   operators to prevent such incidents, the network traffic that
   controls these devices must be secured (as with any sensitive
   application traffic).

2.2.  Pro Audio Today

   Some proprietary systems have been created which enable
   deterministic streams at Layer 3; however, they are "engineered
   networks" which require careful configuration to operate, often
   require that the system be over-provisioned, and implicitly assume
   that all devices on the network voluntarily play by the rules of
   that network.  Enabling these industries to successfully transition
   to an interoperable multi-vendor packet-based infrastructure
   requires effective open standards, and we believe that establishing
   relevant IETF standards is a crucial factor.
2.3.  Pro Audio Future

2.3.1.  Layer 3 Interconnecting Layer 2 Islands

   It would be valuable to enable IP to connect multiple Layer 2 LANs.

   As an example, ESPN recently constructed a state-of-the-art
   194,000 sq ft, $125 million broadcast studio called DC2.  The DC2
   network is capable of handling 46 Tbps of throughput with 60,000
   simultaneous signals.  Inside the facility are 1,100 miles of fiber
   feeding four audio control rooms (see [ESPN_DC2]).

   In designing DC2 they replaced as much point-to-point technology as
   they could with packet-based technology.  They constructed seven
   individual studios using Layer 2 LANs (using IEEE 802.1 AVB) that
   were entirely effective at routing audio within the LANs.  However,
   to interconnect these Layer 2 LAN islands they ended up using
   dedicated paths in a custom SDN (Software Defined Networking)
   router because no standards-based routing solution was available.

2.3.2.  High Reliability Stream Paths

   On-air and other live media streams are often backed up with
   redundant links that seamlessly act to deliver the content when the
   primary link fails for any reason.  In point-to-point systems this
   is provided by an additional point-to-point link; the analogous
   requirement in a packet-based system is to provide an alternate
   path through the network such that no individual link can bring
   down the system.

2.3.3.  Integration of Reserved Streams into IT Networks

   A commonly cited goal of moving to a packet-based media
   infrastructure is that costs can be reduced by using off-the-shelf,
   commodity network hardware.  In addition, economy of scale can be
   realized by combining media infrastructure with IT infrastructure.

   In keeping with these goals, stream reservation technology should
   be compatible with existing protocols, and not compromise use of
   the network for best-effort (non-time-sensitive) traffic.

2.3.4.  Use of Unused Reservations by Best-Effort Traffic

   In cases where stream bandwidth is reserved but not currently used
   (or is under-utilized) that bandwidth must be available to best-
   effort (i.e. non-time-sensitive) traffic.  For example, a single
   stream may be nailed up (reserved) for specific media content that
   needs to be presented at different times of the day, ensuring
   timely delivery of that content, yet in between those times the
   full bandwidth of the network can be utilized for best-effort tasks
   such as file transfers.

   This also addresses a concern of IT network administrators who are
   considering adding reserved-bandwidth traffic to their networks,
   namely that "users will reserve large quantities of bandwidth and
   then never un-reserve it even though they are not using it, and
   soon the network will have no bandwidth left".

2.3.5.  Traffic Segregation

   Note: It is still under WG discussion whether this topic will be
   addressed by DetNet.

   Sink devices may be low-cost devices with limited processing power.
   In order to not overwhelm the CPUs in these devices it is important
   to limit the amount of traffic that these devices must process.

   As an example, consider the use of individual seat speakers in a
   cinema.  These speakers must typically be low cost since the
   quantities in a single theater can reach hundreds of seats.
   Discovery protocols alone in a one-thousand-seat theater can
   generate enough broadcast traffic to overwhelm a low-powered CPU.
   Thus an installation like this will benefit greatly from some type
   of traffic segregation that can define groups of seats to reduce
   traffic within each group.  All seats in the theater must still be
   able to communicate with a central controller.

   There are many techniques that can be used to support this
   requirement including (but not limited to) the following examples.

2.3.5.1.  Packet Forwarding Rules, VLANs and Subnets

   Packet forwarding rules can be used to eliminate some extraneous
   streaming traffic from reaching potentially low-powered sink
   devices; however, there may be other types of broadcast traffic
   that should be eliminated by other means, for example VLANs or IP
   subnets.

2.3.5.2.  Multicast Addressing (IPv4 and IPv6)

   Multicast addressing is commonly used to keep bandwidth utilization
   of shared links to a minimum.

   Because of the MAC address forwarding nature of Layer 2 bridges it
   is important that a multicast MAC address is only associated with
   one stream.  This will prevent reservations from forwarding packets
   from one stream down a path that has no interested sinks simply
   because there is another stream on that same path that shares the
   same multicast MAC address.

   Since each multicast MAC address can represent 32 different IPv4
   multicast addresses, there must be a process put in place to make
   sure this does not occur.  Requiring the use of IPv6 addresses can
   achieve this; however, due to the continued prevalence of IPv4,
   solutions that are effective for IPv4 installations are also
   required.
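   The 32-to-1 ambiguity follows directly from the standard IPv4-to-
   MAC mapping, as the short Python sketch below illustrates (the
   example addresses are arbitrary):

      # An IPv4 multicast address maps to an Ethernet MAC address by
      # copying its low 23 bits into the 01:00:5e OUI prefix
      # (RFC 1112).  Since a group address has 28 significant bits,
      # 2^5 = 32 distinct IPv4 groups share each multicast MAC.

      import ipaddress

      def ipv4_multicast_mac(addr):
          low23 = int(ipaddress.IPv4Address(addr)) & 0x7FFFFF
          return "01:00:5e:%02x:%02x:%02x" % (
              low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

      # Two distinct groups that collide on the same MAC address:
      print(ipv4_multicast_mac("224.1.1.1"))  # 01:00:5e:01:01:01
      print(ipv4_multicast_mac("225.1.1.1"))  # 01:00:5e:01:01:01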
2.3.6.  Latency Optimization by a Central Controller

   A central network controller might also perform optimizations based
   on the individual path delays; for example, sinks that are closer
   to the source can inform the controller that they can accept
   greater latency since they will be buffering packets to match
   presentation times of farther-away sinks.  The controller might
   then move a stream reservation on a short path to a longer path in
   order to free up bandwidth for other critical streams on that short
   path.  See slides 3-5 of [SRP_LATENCY].

   Additional optimization can be achieved in cases where sinks have
   differing latency requirements; for example, in a live outdoor
   concert the speaker sinks have stricter latency requirements than
   the recording hardware sinks.  See slide 7 of [SRP_LATENCY].

2.3.7.  Reduced Device Cost Due To Reduced Buffer Memory

   Device cost can be reduced in a system with guaranteed reservations
   with a small bounded latency due to the reduced requirements for
   buffering (i.e. memory) on sink devices.  For example, a theme park
   might broadcast a live event across the globe via a Layer 3
   protocol; in such cases the size of the buffers required is
   proportional to the latency bounds and jitter caused by delivery,
   which depends on the worst-case segment of the end-to-end network
   path.  For example, on today's open Internet the latency is
   typically unacceptable for audio and video streaming without many
   seconds of buffering.  In such scenarios a single gateway device at
   the local network that receives the feed from the remote site would
   provide the expensive buffering required to mask the latency and
   jitter issues associated with long-distance delivery.  Sink devices
   in the local location would have no additional buffering
   requirements, and thus no additional costs, beyond those required
   for delivery of local content.  The sink device would be receiving
   the identical packets as those sent by the source and would be
   unaware that there were any latency or jitter issues along the
   path.

2.4.  Pro Audio Asks

   o  Layer 3 routing on top of AVB (and/or other high-QoS networks)

   o  Content delivery with bounded, lowest possible latency

   o  IntServ and DiffServ integration with AVB (where practical)

   o  Single network for A/V and IT traffic

   o  Standards-based, interoperable, multi-vendor

   o  IT department friendly

   o  Enterprise-wide networks (e.g. size of San Francisco but not the
      whole Internet (yet...))

3.  Electrical Utilities

3.1.  Use Case Description

   Many systems that an electrical utility deploys today rely on high
   availability and deterministic behavior of the underlying networks.
   Here we present use cases in Transmission, Generation and
   Distribution, including key timing and reliability metrics.  We
   also discuss security issues and industry trends which affect the
   architecture of next-generation utility networks.

3.1.1.  Transmission Use Cases

3.1.1.1.  Protection

   Protection means not only the protection of human operators but
   also the protection of the electrical equipment and the
   preservation of the stability and frequency of the grid.  If a
   fault occurs in the transmission or distribution of electricity,
   then severe damage can occur to human operators, electrical
   equipment and the grid itself, leading to blackouts.

   Communication links in conjunction with protection relays are used
   to selectively isolate faults on high voltage lines, transformers,
   reactors and other important electrical equipment.  The role of the
   teleprotection system is to selectively disconnect a faulty part by
   transferring command signals within the shortest possible time.

3.1.1.1.1.  Key Criteria

   The key criteria for measuring teleprotection performance are
   command transmission time, dependability and security.  These
   criteria are defined by IEC standard 60834 as follows:

   o  Transmission time (Speed): The time between the moment where
      state changes at the transmitter input and the moment of the
      corresponding change at the receiver output, including
      propagation delay.  Overall operating time for a teleprotection
      system includes the time for initiating the command at the
      transmitting end, the propagation delay over the network
      (including equipment) and the selection and decision time at the
      receiving end, including any additional delay due to a noisy
      environment.

   o  Dependability: The ability to issue and receive valid commands
      in the presence of interference and/or noise, by minimizing the
      probability of missing command (PMC).  Dependability targets are
      typically set for a specific bit error rate (BER) level.

   o  Security: The ability to prevent false tripping due to a noisy
      environment, by minimizing the probability of unwanted commands
      (PUC).  Security targets are also set for a specific bit error
      rate (BER) level.

   Additional elements of the teleprotection system that impact its
   performance include:

   o  Network bandwidth

   o  Failure recovery capacity (aka resiliency)
3.1.1.1.2.  Fault Detection and Clearance Timing

   Most power line equipment can tolerate short circuits or faults for
   up to approximately five power cycles before sustaining
   irreversible damage or affecting other segments in the network.
   This translates to a total fault clearance time of 100ms.  As a
   safety precaution, however, actual operation time of protection
   systems is limited to 70-80 percent of this period, including fault
   recognition time, command transmission time and line breaker
   switching time.

   Some system components, such as large electromechanical switches,
   require a particularly long time to operate and take up the
   majority of the total clearance time, leaving only a 10ms window
   for the telecommunications part of the protection scheme,
   independent of the distance to travel.  Given the sensitivity of
   the issue, new networks impose requirements that are even more
   stringent: IEC standard 61850 limits the transfer time for
   protection messages to 1/4 - 1/2 cycle or 4 - 8ms (for 60Hz lines)
   for the most critical messages.

3.1.1.1.3.  Symmetric Channel Delay

   Note: It is currently under WG discussion whether symmetric path
   delays are to be guaranteed by DetNet.

   Teleprotection channels which are differential must be synchronous,
   which means that any delays on the transmit and receive paths must
   match each other.  Teleprotection systems ideally support zero
   asymmetric delay; typical legacy relays can tolerate delay
   discrepancies of up to 750us.

   Some tools available for lowering delay variation below this
   threshold are:

   o  For legacy systems using Time Division Multiplexing (TDM),
      jitter buffers at the multiplexers on each end of the line can
      be used to offset delay variation by queuing sent and received
      packets.  The length of the queues must balance the need to
      regulate the rate of transmission with the need to limit overall
      delay, as larger buffers result in increased latency.

   o  For jitter-prone IP packet networks, traffic management tools
      can ensure that the teleprotection signals receive the highest
      transmission priority to minimize jitter.

   o  Standard packet-based synchronization technologies, such as the
      IEEE 1588-2008 Precision Time Protocol (PTP) and Synchronous
      Ethernet (Sync-E), can help keep networks stable by maintaining
      a highly accurate clock source on the various network devices.

3.1.1.1.4.  Teleprotection Network Requirements (IEC 61850)

   The following table captures the main network metrics as based on
   the IEC 61850 standard.

   +-----------------------------+-------------------------------------+
   | Teleprotection Requirement  | Attribute                           |
   +-----------------------------+-------------------------------------+
   | One way maximum delay       | 4-10 ms                             |
   | Asymmetric delay required   | Yes                                 |
   | Maximum jitter              | less than 250 us (750 us for legacy |
   |                             | IED)                                |
   | Topology                    | Point to point, point to            |
   |                             | multi-point                         |
   | Availability                | 99.9999                             |
   | Precise timing required     | Yes                                 |
   | Recovery time on node       | less than 50ms - hitless            |
   | failure                     |                                     |
   | Performance management      | Yes, Mandatory                      |
   | Redundancy                  | Yes                                 |
   | Packet loss                 | 0.1% to 1%                          |
   +-----------------------------+-------------------------------------+

               Table 1: Teleprotection network requirements
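   For illustration only, the sketch below (Python; all names are
   invented) shows how the Table 1 attributes might be expressed as a
   machine-checkable admission test for a candidate path:

      # Hypothetical check of measured path characteristics against
      # the Table 1 teleprotection requirements (loosest bounds used).

      TELEPROTECTION = {
          "max_one_way_delay_ms": 10.0,    # 4-10 ms
          "max_jitter_us": 250.0,          # 750 us for legacy IEDs
          "min_availability_pct": 99.9999,
          "max_packet_loss_pct": 1.0,      # 0.1% to 1%
      }

      def path_acceptable(delay_ms, jitter_us, avail_pct, loss_pct):
          r = TELEPROTECTION
          return (delay_ms <= r["max_one_way_delay_ms"] and
                  jitter_us <= r["max_jitter_us"] and
                  avail_pct >= r["min_availability_pct"] and
                  loss_pct <= r["max_packet_loss_pct"])

      print(path_acceptable(6.0, 200.0, 99.9999, 0.1))   # True
      print(path_acceptable(12.0, 200.0, 99.9999, 0.1))  # False: delay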
3.1.1.1.5.  Inter-Trip Protection Scheme

   "Inter-tripping" is the signal-controlled tripping of a circuit
   breaker to complete the isolation of a circuit or piece of
   apparatus in concert with the tripping of other circuit breakers.

   +--------------------------------+----------------------------------+
   | Inter-Trip Protection          | Attribute                        |
   | Requirement                    |                                  |
   +--------------------------------+----------------------------------+
   | One way maximum delay          | 5 ms                             |
   | Asymmetric delay required      | No                               |
   | Maximum jitter                 | Not critical                     |
   | Topology                       | Point to point, point to         |
   |                                | multi-point                      |
   | Bandwidth                      | 64 Kbps                          |
   | Availability                   | 99.9999                          |
   | Precise timing required        | Yes                              |
   | Recovery time on node failure  | less than 50ms - hitless         |
   | Performance management         | Yes, Mandatory                   |
   | Redundancy                     | Yes                              |
   | Packet loss                    | 0.1%                             |
   +--------------------------------+----------------------------------+

           Table 2: Inter-Trip protection network requirements

3.1.1.1.6.  Current Differential Protection Scheme

   Current differential protection is commonly used for line
   protection, and is typical for protecting parallel circuits.  At
   both ends of the line the current is measured by the differential
   relays, and both relays will trip the circuit breaker if the
   current going into the line does not equal the current coming out
   of the line.  This type of protection scheme assumes some form of
   communication between the relays at both ends of the line, to allow
   both relays to compare measured current values.  Line differential
   protection schemes assume a very low telecommunications delay
   between both relays, often as low as 5ms.  Moreover, as those
   systems are often not time-synchronized, they also assume symmetric
   telecommunications paths with constant delay, which allows
   comparing current measurement values taken at exactly the same
   time.

   +----------------------------------+--------------------------------+
   | Current Differential Protection  | Attribute                      |
   | Requirement                      |                                |
   +----------------------------------+--------------------------------+
   | One way maximum delay            | 5 ms                           |
   | Asymmetric delay required        | Yes                            |
   | Maximum jitter                   | less than 250 us (750us for    |
   |                                  | legacy IED)                    |
   | Topology                         | Point to point, point to       |
   |                                  | multi-point                    |
   | Bandwidth                        | 64 Kbps                        |
   | Availability                     | 99.9999                        |
   | Precise timing required          | Yes                            |
   | Recovery time on node failure    | less than 50ms - hitless       |
   | Performance management           | Yes, Mandatory                 |
   | Redundancy                       | Yes                            |
   | Packet loss                      | 0.1%                           |
   +----------------------------------+--------------------------------+

            Table 3: Current Differential Protection metrics
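   The comparison performed by the two relays can be pictured with the
   following toy sketch (Python; real relays compare time-aligned
   phasor measurements and apply restraint characteristics, not a
   simple magnitude check, and the threshold shown is invented):

      # Toy model of a line current differential check.  Both samples
      # must be taken at the same instant, which is why the scheme
      # assumes symmetric, constant-delay telecommunications paths.

      TRIP_THRESHOLD_A = 50.0   # hypothetical pickup current, amperes

      def differential_trip(i_in_a, i_out_a):
          """Trip if the current entering the line does not match the
          current leaving it (a fault is diverting current)."""
          return abs(i_in_a - i_out_a) > TRIP_THRESHOLD_A

      print(differential_trip(1000.0, 995.0))  # False: normal load
      print(differential_trip(1000.0, 700.0))  # True: trip breakers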
3.1.1.1.7.  Distance Protection Scheme

   The Distance (Impedance Relay) protection scheme is based on
   voltage and current measurements.  The network metrics are similar
   (but not identical) to those of Current Differential protection.

   +-------------------------------+-----------------------------------+
   | Distance Protection           | Attribute                         |
   | Requirement                   |                                   |
   +-------------------------------+-----------------------------------+
   | One way maximum delay         | 5 ms                              |
   | Asymmetric delay required     | No                                |
   | Maximum jitter                | Not critical                      |
   | Topology                      | Point to point, point to          |
   |                               | multi-point                       |
   | Bandwidth                     | 64 Kbps                           |
   | Availability                  | 99.9999                           |
   | Precise timing required       | Yes                               |
   | Recovery time on node failure | less than 50ms - hitless          |
   | Performance management        | Yes, Mandatory                    |
   | Redundancy                    | Yes                               |
   | Packet loss                   | 0.1%                              |
   +-------------------------------+-----------------------------------+

                 Table 4: Distance Protection requirements

3.1.1.1.8.  Inter-Substation Protection Signaling

   This use case describes the exchange of Sampled Value and/or GOOSE
   (Generic Object Oriented Substation Events) messages between
   Intelligent Electronic Devices (IED) in two substations for
   protection and tripping coordination.  The two IEDs are in a
   master-slave mode.

   The Current Transformer or Voltage Transformer (CT/VT) in one
   substation sends the sampled analog voltage or current value to the
   Merging Unit (MU) over hard wire.  The MU sends the time-
   synchronized 61850-9-2 sampled values to the slave IED.  The slave
   IED forwards the information to the master IED in the other
   substation.  The master IED makes the determination (for example
   based on sampled value differentials) to send a trip command to the
   originating IED.  Once the slave IED/Relay receives the GOOSE trip
   for breaker tripping, it opens the breaker.  It then sends a
   confirmation message back to the master.  All data exchanges
   between IEDs are either through Sampled Value and/or GOOSE
   messages.

   +----------------------------------+--------------------------------+
   | Inter-Substation Protection      | Attribute                      |
   | Requirement                      |                                |
   +----------------------------------+--------------------------------+
   | One way maximum delay            | 5 ms                           |
   | Asymmetric delay required        | No                             |
   | Maximum jitter                   | Not critical                   |
   | Topology                         | Point to point, point to       |
   |                                  | multi-point                    |
   | Bandwidth                        | 64 Kbps                        |
   | Availability                     | 99.9999                        |
   | Precise timing required          | Yes                            |
   | Recovery time on node failure    | less than 50ms - hitless       |
   | Performance management           | Yes, Mandatory                 |
   | Redundancy                       | Yes                            |
   | Packet loss                      | 1%                             |
   +----------------------------------+--------------------------------+

           Table 5: Inter-Substation Protection requirements
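   The sequence above can be summarized in code.  The sketch below
   (Python) is a simplified stand-in for the exchange, not the real
   IEC 61850 GOOSE/SV encodings; all names and the decision rule are
   invented for illustration:

      # Simplified master/slave IED exchange: the slave relays a
      # sampled value to the master, acts on the returned trip
      # decision, and confirms the action.

      def master_ied(sampled_value, reference=1000.0, margin=50.0):
          # e.g. decide on a sampled-value differential (see
          # Section 3.1.1.1.6)
          return ("TRIP" if abs(sampled_value - reference) > margin
                  else "OK")

      def slave_ied(sampled_value):
          decision = master_ied(sampled_value)  # forward to master
          if decision == "TRIP":
              print("breaker opened")           # GOOSE trip received
              return "CONFIRM"                  # confirm to master
          return "NO-OP"

      print(slave_ied(700.0))   # breaker opened, then CONFIRM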
3.1.1.2.  Intra-Substation Process Bus Communications

   This use case describes the data flow from the CT/VT to the IEDs in
   the substation via the MU.  The CT/VT in the substation sends the
   sampled value (analog voltage or current) to the MU over hard wire.
   The MU sends the time-synchronized 61850-9-2 sampled values to the
   IEDs in the substation in GOOSE message format.  The GPS Master
   Clock can send 1PPS or IRIG-B format to the MU through a serial
   port or IEEE 1588 protocol via a network.  Process bus
   communication using 61850 simplifies connectivity within the
   substation, removes the requirement for multiple serial
   connections, and eliminates the slow serial bus architectures that
   are typically used.  This also ensures increased flexibility and
   increased speed with the use of multicast messaging between
   multiple devices.

   +----------------------------------+--------------------------------+
   | Intra-Substation Protection      | Attribute                      |
   | Requirement                      |                                |
   +----------------------------------+--------------------------------+
   | One way maximum delay            | 5 ms                           |
   | Asymmetric delay required        | No                             |
   | Maximum jitter                   | Not critical                   |
   | Topology                         | Point to point, point to       |
   |                                  | multi-point                    |
   | Bandwidth                        | 64 Kbps                        |
   | Availability                     | 99.9999                        |
   | Precise timing required          | Yes                            |
   | Recovery time on node failure    | less than 50ms - hitless       |
   | Performance management           | Yes, Mandatory                 |
   | Redundancy                       | Yes - No                       |
   | Packet loss                      | 0.1%                           |
   +----------------------------------+--------------------------------+

           Table 6: Intra-Substation Protection requirements

3.1.1.3.  Wide Area Monitoring and Control Systems

   The application of synchrophasor measurement data from Phasor
   Measurement Units (PMU) to Wide Area Monitoring and Control Systems
   promises to provide important new capabilities for improving system
   stability.  Access to PMU data enables more timely situational
   awareness over larger portions of the grid than has historically
   been possible with normal SCADA (Supervisory Control and Data
   Acquisition) data.  Handling the volume and real-time nature of
   synchrophasor data presents unique challenges for existing
   application architectures.  A Wide Area Management System (WAMS)
   makes it possible for the condition of the bulk power system to be
   observed and understood in real time so that protective,
   preventative, or corrective action can be taken.  Because of the
   very high sampling rate of measurements and the strict requirement
   for time synchronization of the samples, WAMS has stringent
   telecommunications requirements in an IP network that are captured
   in the following table:

   +----------------------+--------------------------------------------+
   | WAMS Requirement     | Attribute                                  |
   +----------------------+--------------------------------------------+
   | One way maximum      | 50 ms                                      |
   | delay                |                                            |
   | Asymmetric delay     | No                                         |
   | required             |                                            |
   | Maximum jitter       | Not critical                               |
   | Topology             | Point to point, point to multi-point,      |
   |                      | multi-point to multi-point                 |
   | Bandwidth            | 100 Kbps                                   |
   | Availability         | 99.9999                                    |
   | Precise timing       | Yes                                        |
   | required             |                                            |
   | Recovery time on     | less than 50ms - hitless                   |
   | node failure         |                                            |
   | Performance          | Yes, Mandatory                             |
   | management           |                                            |
   | Redundancy           | Yes                                        |
   | Packet loss          | 1%                                         |
   | Consecutive Packet   | At least 1 packet per application cycle    |
   | Loss                 | must be received.                          |
   +----------------------+--------------------------------------------+

            Table 7: WAMS Special Communication Requirements
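   Note the last row: unlike a plain loss-rate bound, it constrains
   where losses may fall.  A hypothetical monitor for this rule could
   look like the following sketch (Python; the cycle length and
   timestamps are invented):

      # Check the Table 7 consecutive-loss rule: every application
      # cycle must contain at least one received PMU packet, even if
      # the overall loss rate stays within the 1% bound.

      def every_cycle_covered(arrival_times_s, cycle_s, n_cycles):
          got = [False] * n_cycles
          for t in arrival_times_s:
              i = int(t / cycle_s)
              if 0 <= i < n_cycles:
                  got[i] = True
          return all(got)

      # Three 100 ms cycles; each sees at least one packet -> OK.
      print(every_cycle_covered([0.01, 0.05, 0.11, 0.25], 0.1, 3))
      # True
      # Cycle 1 (0.1-0.2 s) lost entirely -> violation.
      print(every_cycle_covered([0.01, 0.05, 0.25], 0.1, 3))
      # False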
3.1.1.4.  IEC 61850 WAN engineering guidelines requirement
          classification

   The IEC (International Electrotechnical Commission) has recently
   published a Technical Report which offers guidelines on how to
   define and deploy Wide Area Networks for the interconnection of
   electric substations, generation plants and SCADA operation
   centers.  IEC 61850-90-12 provides a classification of WAN
   communication requirements into four classes.  Table 8 summarizes
   these requirements:

   +----------------+------------+------------+------------+-----------+
   | WAN            | Class WA   | Class WB   | Class WC   | Class WD  |
   | Requirement    |            |            |            |           |
   +----------------+------------+------------+------------+-----------+
   | Application    | EHV (Extra | HV (High   | MV (Medium | General   |
   | field          | High       | Voltage)   | Voltage)   | purpose   |
   |                | Voltage)   |            |            |           |
   | Latency        | 5 ms       | 10 ms      | 100 ms     | > 100 ms  |
   | Jitter         | 10 us      | 100 us     | 1 ms       | 10 ms     |
   | Latency        | 100 us     | 1 ms       | 10 ms      | 100 ms    |
   | asymmetry      |            |            |            |           |
   | Time Accuracy  | 1 us       | 10 us      | 100 us     | 10 to 100 |
   |                |            |            |            | ms        |
   | Bit Error rate | 10^-7 to   | 10^-5 to   | 10^-3      |           |
   |                | 10^-6      | 10^-4      |            |           |
   | Unavailability | 10^-7 to   | 10^-5 to   | 10^-3      |           |
   |                | 10^-6      | 10^-4      |            |           |
   | Recovery delay | Zero       | 50 ms      | 5 s        | 50 s      |
   | Cyber security | Extremely  | High       | Medium     | Medium    |
   |                | high       |            |            |           |
   +----------------+------------+------------+------------+-----------+

    Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC

3.1.2.  Generation Use Case

   Energy generation systems are complex infrastructures that require
   control of both the generated power and the generation
   infrastructure.

3.1.2.1.  Control of the Generated Power

   The electrical power generation frequency must be maintained within
   a very narrow band.  Deviations from the acceptable frequency range
   are detected and the required signals are sent to the power plants
   for frequency regulation.

   Automatic Generation Control (AGC) is a system for adjusting the
   power output of generators at different power plants, in response
   to changes in the load.

   +---------------------------------------------------+---------------+
   | FCAG (Frequency Control Automatic Generation)     | Attribute     |
   | Requirement                                       |               |
   +---------------------------------------------------+---------------+
   | One way maximum delay                             | 500 ms        |
   | Asymmetric delay required                         | No            |
   | Maximum jitter                                    | Not critical  |
   | Topology                                          | Point to      |
   |                                                   | point         |
   | Bandwidth                                         | 20 Kbps       |
   | Availability                                      | 99.999        |
   | Precise timing required                           | Yes           |
   | Recovery time on node failure                     | N/A           |
   | Performance management                            | Yes,          |
   |                                                   | Mandatory     |
   | Redundancy                                        | Yes           |
   | Packet loss                                       | 1%            |
   +---------------------------------------------------+---------------+

                 Table 9: FCAG Communication Requirements

3.1.2.2.  Control of the Generation Infrastructure

   The control of the generation infrastructure combines requirements
   from industrial automation systems and energy generation systems.
   In this section we present the use case of the control of the
   generation infrastructure of a wind turbine.

                      |
                      |
                      |  +-----------------+
                      |  |  +----+         |
                      |  |  |WTRM|  WGEN   |
                 WROT x==|===|    |        |
                      |  |  +----+  WCNV   |
                      |  |WNAC             |
                      |  +---+---WYAW---+--+
                      |      |          |
                      |      |          |  +----+
                      |WTRF  |          |  |WMET|
                      |      |          |  |    |
         Wind Turbine |      |          +--+-+
           Controller |      |             |
                 WTUR |      |             |
                 WREP |      |             |
                 WSLG |      |             |
                 WALG |      | WTOW        |

                Figure 1: Wind Turbine Control Network

   Figure 1 presents the subsystems that operate a wind turbine.
   These subsystems include:

   o  WROT (Rotor Control)

   o  WNAC (Nacelle Control) (nacelle: housing containing the
      generator)

   o  WTRM (Transmission Control)

   o  WGEN (Generator)

   o  WYAW (Yaw Controller) (of the tower head)

   o  WCNV (In-Turbine Power Converter)

   o  WMET (External Meteorological Station providing real-time
      information to the controllers of the tower)

   Traffic characteristics relevant for the network planning and
   dimensioning process in a wind turbine scenario are listed below.
   The values in this section are based mainly on the relevant
   references [Ahm14] and [Spe09].  Each logical node (Figure 1) is a
   part of the metering network and produces analog measurements and
   status information which must comply with their respective data
   rate constraints.

   +-----------+--------+--------+-------------+---------+-------------+
   | Subsystem | Sensor | Analog | Data Rate   | Status  | Data Rate   |
   |           | Count  | Sample | (bytes/sec) | Sample  | (bytes/sec) |
   |           |        | Count  |             | Count   |             |
   +-----------+--------+--------+-------------+---------+-------------+
   | WROT      | 14     | 9      | 642         | 5       | 10          |
   | WTRM      | 18     | 10     | 2828        | 8       | 16          |
   | WGEN      | 14     | 12     | 73764       | 2       | 4           |
   | WCNV      | 14     | 12     | 74060       | 2       | 4           |
   | WTRF      | 12     | 5      | 73740       | 2       | 4           |
   | WNAC      | 12     | 9      | 112         | 3       | 6           |
   | WYAW      | 7      | 8      | 220         | 4       | 8           |
   | WTOW      | 4      | 1      | 8           | 3       | 6           |
   | WMET      | 7      | 7      | 228         | -       | -           |
   +-----------+--------+--------+-------------+---------+-------------+

              Table 10: Wind Turbine Data Rate Constraints

   Quality of Service (QoS) constraints for different services are
   presented in Table 11.  These constraints are defined by the IEEE
   1646 standard [IEEE1646] and the IEC 61400 standard [IEC61400].

   +---------------------+---------+-------------+---------------------+
   | Service             | Latency | Reliability | Packet Loss Rate    |
   +---------------------+---------+-------------+---------------------+
   | Analogue measure    | 16 ms   | 99.99%      | < 10^-6             |
   | Status information  | 16 ms   | 99.99%      | < 10^-6             |
   | Protection traffic  | 4 ms    | 100.00%     | < 10^-9             |
   | Reporting and       | 1 s     | 99.99%      | < 10^-6             |
   | logging             |         |             |                     |
   | Video surveillance  | 1 s     | 99.00%      | No specific         |
   |                     |         |             | requirement         |
   | Internet connection | 60 min  | 99.00%      | No specific         |
   |                     |         |             | requirement         |
   | Control traffic     | 16 ms   | 100.00%     | < 10^-9             |
   | Data polling        | 16 ms   | 99.99%      | < 10^-6             |
   +---------------------+---------+-------------+---------------------+

       Table 11: Wind Turbine Reliability and Latency Constraints

3.1.2.2.1.  Intra-Domain Network Considerations

   A wind turbine is composed of a large set of subsystems including
   sensors and actuators which require time-critical operation.  The
   reliability and latency constraints of these different subsystems
   are shown in Table 11.  These subsystems are connected to an intra-
   domain network which is used to monitor and control the operation
   of the turbine and connect it to the SCADA subsystems.  The
   different components are interconnected using fiber optics,
   industrial buses, industrial Ethernet, EtherCAT, or a combination
   of them.  Industrial signaling and control protocols such as
   Modbus, Profibus, Profinet and EtherCAT are used directly on top of
   the Layer 2 transport or encapsulated over TCP/IP.
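   As a minimal illustration of the latter, the sketch below builds a
   Modbus/TCP "read holding registers" request directly on a TCP
   socket (Python; the host, unit id and register addresses are
   hypothetical, and real deployments would normally use a Modbus
   library with proper error handling):

      # Modbus/TCP framing: MBAP header (transaction id, protocol id
      # 0, length, unit id) followed by the PDU (function 0x03, start
      # address, register count).

      import socket, struct

      def read_holding_registers(host, start_addr, count, unit_id=1):
          pdu = struct.pack(">BHH", 0x03, start_addr, count)
          mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
          with socket.create_connection((host, 502), timeout=2.0) as s:
              s.sendall(mbap + pdu)
              resp = s.recv(260)
          byte_count = resp[8]   # MBAP is 7 bytes, then function code
          return struct.unpack(">%dH" % count, resp[9:9 + byte_count])

      # e.g. poll 4 registers from a (hypothetical) turbine
      # controller:
      # values = read_holding_registers("10.0.0.10", 0, 4)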
   The data collected from the sensors and condition monitoring
   systems is multiplexed onto fiber cables for transmission to the
   base of the tower, and to remote control centers.  The turbine
   controller continuously monitors the condition of the wind turbine
   and collects statistics on its operation.  This controller also
   manages a large number of switches, hydraulic pumps, valves, and
   motors within the wind turbine.

   There is usually a controller both at the bottom of the tower and
   in the nacelle.  The communication between these two controllers
   usually takes place using fiber optics instead of copper links.
   Sometimes, a third controller is installed in the hub of the rotor
   and manages the pitch of the blades.  That unit usually
   communicates with the nacelle unit using serial communications.

3.1.2.2.2.  Inter-Domain Network Considerations

   A remote control center belonging to a grid operator regulates the
   power output, enables remote actuation, and monitors the health of
   one or more wind parks in tandem.  It connects to the local control
   center in a wind park over the Internet (Figure 2) via firewalls at
   both ends.  The AS path between the remote control center and the
   wind park typically involves several ISPs at different tiers.  For
   example, a remote control center in Denmark can regulate a wind
   park in Greece over the normal public AS path between the two
   locations.

   The remote control center is part of the SCADA system, setting the
   desired power output to the wind park and reading back the result
   once the new power output level has been set.  Traffic between the
   remote control center and the wind park typically consists of
   protocols like IEC 60870-5-104 [IEC-60870-5-104], OPC XML-DA
   [OPCXML], Modbus [MODBUS], and SNMP [RFC3411].  Currently, traffic
   flows between the wind farm and the remote control center are best
   effort.  QoS requirements are not strict, so no SLAs or service
   provisioning mechanisms (e.g., VPN) are employed.  In case of
   events like equipment failure, tolerance for alarm delay is on the
   order of minutes, due to redundant systems already in place.

   +--------------+
   |              |
   |              |
   | Wind Park #1 +----+
   |              |    |      XXXXXX
   |              |    |    X        XXXXXXXX      +----------------+
   +--------------+    |  XXXX     X       XXXXX   |                |
                       +---+                  XXX  | Remote Control |
                          XXX    Internet    +----+     Center      |
                       +----+X              XXX    |                |
   +--------------+    |      XXXXXXX     XX       |                |
   |              |    |             XX XXXXXXX    +----------------+
   |              |    |              XXXXX
   | Wind Park #2 +----+
   |              |
   |              |
   +--------------+

             Figure 2: Wind Turbine Control via Internet

   We expect future use cases which require bounded latency, bounded
   jitter and extraordinarily low packet loss for inter-domain traffic
   flows due to the softwarization and virtualization of core wind
   farm equipment (e.g. switches, firewalls and SCADA server
   components).  These factors will create opportunities for service
   providers to install new services and dynamically manage them from
   remote locations.  For example, to enable fail-over of a local
   SCADA server, a SCADA server in another wind farm site (under the
   administrative control of the same operator) could be utilized
   temporarily (Figure 3).
In that case, local traffic would be forwarded to the remote SCADA
server, and existing intra-domain QoS and timing parameters would
have to be met for inter-domain traffic flows.

 +--------------+
 |              |
 |              |
 | Wind Park #1 +----+
 |              |    |        XXXXXX
 |              |    |      XX      XXXXXXXX     +----------------+
 +--------------+    |   XXXX             XXXXX  |                |
                     +---+  Operator         XXX | Remote Control |
                        XXX  Administered   +----+     Center     |
                     +----+X  WAN            XXX |                |
 +--------------+    |    XXXXXXX         XX     |                |
 |              |    |           XX  XXXXXXX     +----------------+
 |              |    |             XXXXX
 | Wind Park #2 +----+
 |              |
 |              |
 +--------------+

   Figure 3: Wind Turbine Control via Operator Administered WAN

3.1.3.  Distribution Use Case

3.1.3.1.  Fault Location, Isolation, and Service Restoration
          (FLISR)

Fault Location, Isolation, and Service Restoration (FLISR) refers
to the ability to automatically locate a fault, isolate it, and
restore service in the distribution network.  This will likely be
the first widespread application of distributed intelligence in
the grid.

Static power switch status (open/closed) in the network dictates
the power flow to secondary substations.  Reconfiguring the
network in the event of a fault is typically done manually on site
to energize/de-energize alternate paths.  Automating the operation
of substation switchgear allows the flow of power to be altered
automatically under fault conditions.

FLISR can be managed centrally from a Distribution Management
System (DMS) or executed locally through distributed control via
intelligent switches and fault sensors.

 +----------------------+--------------------------------------------+
 | FLISR Requirement    | Attribute                                  |
 +----------------------+--------------------------------------------+
 | One-way maximum      | 80 ms                                      |
 | delay                |                                            |
 | Asymmetric delay     | No                                         |
 | required             |                                            |
 | Maximum jitter       | 40 ms                                      |
 | Topology             | Point to point, point to multipoint,       |
 |                      | multipoint to multipoint                   |
 | Bandwidth            | 64 Kbps                                    |
 | Availability         | 99.9999%                                   |
 | Precise timing       | Yes                                        |
 | required             |                                            |
 | Recovery time on     | Depends on customer impact                 |
 | node failure         |                                            |
 | Performance          | Yes, mandatory                             |
 | management           |                                            |
 | Redundancy           | Yes                                        |
 | Packet loss          | 0.1%                                       |
 +----------------------+--------------------------------------------+

            Table 12: FLISR Communication Requirements
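The attributes in Table 12 lend themselves to a simple admission
check: before a FLISR flow is placed on a path, the measured or
computed path characteristics can be compared against the limits
above.  The following sketch is purely illustrative; the threshold
values come from Table 12, but the data structure and function
names are our own and are not part of any standard.

   # FLISR limits from Table 12 (illustrative data model only)
   FLISR_LIMITS = {
       "one_way_delay_ms": 80.0,
       "jitter_ms": 40.0,
       "bandwidth_kbps": 64,
       "availability": 0.999999,
       "packet_loss": 0.001,
   }

   def path_meets_flisr(path):
       # 'path' is a dict with the same keys as FLISR_LIMITS
       return (path["one_way_delay_ms"]
                   <= FLISR_LIMITS["one_way_delay_ms"]
               and path["jitter_ms"] <= FLISR_LIMITS["jitter_ms"]
               and path["bandwidth_kbps"]
                   >= FLISR_LIMITS["bandwidth_kbps"]
               and path["availability"]
                   >= FLISR_LIMITS["availability"]
               and path["packet_loss"]
                   <= FLISR_LIMITS["packet_loss"])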
3.2.  Electrical Utilities Today

Many utilities still rely on complex environments formed of
multiple application-specific proprietary networks, including TDM
networks.

In this kind of environment there is no mixing of OT and IT
applications on the same network, and information is siloed
between operational areas.

Specific calibration of the full chain is required, which is
costly.

This kind of environment prevents utility operations from
realizing the operational efficiency benefits, visibility, and
functional integration of operational information across grid
applications and data networks.

In addition, there are many security-related issues as discussed
in the following section.

3.2.1.  Security Current Practices and Limitations

Grid monitoring and control devices are already targets for cyber
attacks, and legacy telecommunications protocols have many
intrinsic network-related vulnerabilities.  For example, DNP3,
Modbus, PROFIBUS/PROFINET, and other protocols are designed around
a common paradigm of request and respond.  Each protocol is
designed for a master device such as an HMI (Human Machine
Interface) system to send commands to subordinate slave devices to
retrieve data (reading inputs) or control (writing to outputs).
Because many of these protocols lack authentication, encryption,
or other basic security measures, they are prone to network-based
attacks, allowing a malicious actor or attacker to utilize the
request-and-respond system as a mechanism for
command-and-control-like functionality.  Specific security
concerns common to most industrial control protocols, including
utility telecommunication protocols, include the following:

   o  Network or transport errors (e.g. malformed packets or
      excessive latency) can cause protocol failure.

   o  Protocol commands may be available that are capable of
      forcing slave devices into inoperable states, including
      powering off devices, forcing them into a listen-only state,
      or disabling alarms.

   o  Protocol commands may be available that are capable of
      restarting communications and otherwise interrupting
      processes.

   o  Protocol commands may be available that are capable of
      clearing, erasing, or resetting diagnostic information such
      as counters and diagnostic registers.

   o  Protocol commands may be available that are capable of
      requesting sensitive information about the controllers,
      their configurations, or other need-to-know information.

   o  Most protocols are application-layer protocols transported
      over TCP; therefore it is easy to transport commands over
      non-standard ports or inject commands into authorized
      traffic flows.

   o  Protocol commands may be available that are capable of
      broadcasting messages to many devices at once (i.e. a
      potential DoS).

   o  Protocol commands may be available to query the device
      network to obtain defined points and their values (i.e. a
      configuration scan).

   o  Protocol commands may be available that will list all
      available function codes (i.e. a function scan).

These inherent vulnerabilities, along with increasing connectivity
between IT and OT networks, make network-based attacks very
feasible.

Simple injection of malicious protocol commands provides control
over the target process.  Altering legitimate protocol traffic can
also alter information about a process and disrupt the legitimate
controls that are in place over that process.  A man-in-the-middle
attack could provide both control over a process and
misrepresentation of data back to operator consoles.

3.3.  Electrical Utilities Future

The business and technology trends that are sweeping the utility
industry will drastically transform the utility business from the
way it has been for many decades.  At the core of many of these
changes is a drive to modernize the electrical grid with an
integrated telecommunications infrastructure.  However,
interoperability concerns, legacy networks, disparate tools, and
stringent security requirements all add complexity to the grid
transformation.
Given the range and diversity of the requirements that should be
addressed by the next-generation telecommunications
infrastructure, utilities need to adopt a holistic architectural
approach to integrate the electrical grid with digital
telecommunications across the entire power delivery chain.

The key to modernizing grid telecommunications is to provide a
common, adaptable, multi-service network infrastructure for the
entire utility organization.  Such a network serves as the
platform for current capabilities while enabling future expansion
of the network to accommodate new applications and services.

To meet this diverse set of requirements, both today and in the
future, the next-generation utility telecommunications network
will be based on an open-standards-based IP architecture.  An
end-to-end IP architecture takes advantage of nearly three decades
of IP technology development, facilitating interoperability and
device management across disparate networks and devices, as has
already been demonstrated in many mission-critical and highly
secure networks.

IPv6 is seen as a future telecommunications technology for the
Smart Grid; the IEC (International Electrotechnical Commission)
and different National Committees have mandated a specific ad hoc
group (AHG8) to define the migration strategy to IPv6 for all the
IEC TC57 power automation standards.

We expect cloud-based SCADA systems to control and monitor the
critical and non-critical subsystems of generation systems, for
example wind farms.

3.3.1.  Migration to Packet-Switched Network

Throughout the world, utilities are increasingly planning for a
future based on smart grid applications requiring advanced
telecommunications systems.  Many of these applications utilize
packet connectivity for communicating information and control
signals across the utility's Wide Area Network (WAN), made
possible by technologies such as multiprotocol label switching
(MPLS).  The data that traverses the utility WAN includes:

   o  Grid monitoring, control, and protection data

   o  Non-control grid data (e.g. asset data for condition-based
      monitoring)

   o  Physical safety and security data (e.g. voice and video)

   o  Remote worker access to corporate applications (voice, maps,
      schematics, etc.)

   o  Field area network backhaul for smart metering and
      distribution grid management

   o  Enterprise traffic (email, collaboration tools, business
      applications)

WANs support this wide variety of traffic to and from substations,
the transmission and distribution grid, generation sites, between
control centers, and between work locations and data centers.  To
maintain this rapidly expanding set of applications, many
utilities are taking steps to evolve their present time-division
multiplexing (TDM) based and frame relay infrastructures to packet
systems.  Packet-based networks are designed to provide greater
functionality and higher levels of service for applications, while
continuing to deliver reliability and deterministic (real-time)
traffic support.

3.3.2.  Telecommunications Trends

These general telecommunications topics are in addition to the use
cases that have been addressed so far.
These include both current and future telecommunications-related
topics that should be factored into the network architecture and
design.

3.3.2.1.  General Telecommunications Requirements

   o  IP connectivity everywhere

   o  Monitoring services everywhere and from different remote
      centers

   o  Move services to a virtual data center

   o  Unify access to applications / information from the
      corporate network

   o  Unify services

   o  Unified Communications solutions

   o  Mix of fiber and microwave technologies - obsolescence of
      SONET/SDH or TDM

   o  Standardize grid telecommunications protocols on open
      standards to ensure interoperability

   o  Reliable telecommunications for transmission and
      distribution substations

   o  IEEE 1588 time synchronization client / server capabilities

   o  Integration of multicast design

   o  QoS requirements mapping

   o  Enable future network expansion

   o  Substation network resilience

   o  Fast convergence design

   o  Scalable headend design

   o  Define Service Level Agreements (SLAs) and enable SLA
      monitoring

   o  Integration of 3G/4G technologies and future technologies

   o  Ethernet connectivity for station bus architecture

   o  Ethernet connectivity for process bus architecture

   o  Protection, teleprotection and PMU (Phasor Measurement Unit)
      on IP

3.3.2.2.  Specific Network Topologies of Smart Grid Applications

Utilities often have very large private telecommunications
networks covering an entire territory or country.  The main
purpose of the network, until now, has been to support
transmission network monitoring, control, and automation, remote
control of generation sites, and providing FCAPS (Fault,
Configuration, Accounting, Performance, Security) services from
centralized network operation centers.

Going forward, one network will support operation and maintenance
of electrical networks (generation, transmission, and
distribution), voice and data services for tens of thousands of
employees and for exchange with neighboring interconnections, and
administrative services.  To meet those requirements, a utility
may deploy several physical networks leveraging different
technologies across the country: an optical network and a
microwave network, for instance.  Each protection and automation
system between two points has two telecommunications circuits, one
on each network.  Path diversity between two substations is key.
Regardless of the event type (hurricane, ice storm, etc.), one
path shall stay available so the system can still operate.

In the optical network, signals are transmitted over more than
tens of thousands of circuits using fiber optic links, microwave
and telephone cables.  This network is the nervous system of the
utility's power transmission operations.  The optical network
represents tens of thousands of km of cable deployed along the
power lines, with individual runs as long as 280 km.

3.3.2.3.  Precision Time Protocol

Some utilities do not use GPS clocks in generation substations.
One of the main reasons is that some of the generation plants are
30 to 50 meters deep underground and the GPS signal can be weak
and unreliable.  Instead, atomic clocks are used.  Clocks are
synchronized amongst each other.  Rubidium clocks provide the
clock signal and 1 ms timestamps for IRIG-B.

Some companies plan to transition to the Precision Time Protocol
(PTP, [IEEE1588]), distributing the synchronization signal over
the IP/MPLS network.  PTP provides a mechanism for synchronizing
the clocks of participating nodes to a high degree of accuracy and
precision.
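PTP estimates the offset between two clocks from the four
timestamps of its Sync / Delay_Req exchange, assuming symmetric
path delay.  The sketch below shows the standard arithmetic only;
the function and variable names are ours, and a real
implementation adds servo filtering and the asymmetry corrections
noted below.

   def ptp_offset_and_delay(t1, t2, t3, t4):
       # t1: master sends Sync       t2: slave receives Sync
       # t3: slave sends Delay_Req   t4: master receives Delay_Req
       # Assumes equal delay in both directions; any asymmetry
       # appears directly as offset error.
       offset = ((t2 - t1) - (t4 - t3)) / 2.0
       mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
       return offset, mean_path_delay

   # Example with nanosecond timestamps:
   # ptp_offset_and_delay(1000, 1500, 2000, 2480)
   #   -> offset = 10.0 ns, mean path delay = 490.0 ns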
PTP operates based on the following assumptions:

   It is assumed that the network eliminates cyclic forwarding of
   PTP messages within each communication path (e.g. by using a
   spanning tree protocol).

   PTP is tolerant of an occasional missed message, duplicated
   message, or message that arrived out of order.  However, PTP
   assumes that such impairments are relatively rare.

   PTP was designed assuming a multicast communication model;
   however, PTP also supports a unicast communication model as
   long as the behavior of the protocol is preserved.

   Like all message-based time transfer protocols, PTP time
   accuracy is degraded by delay asymmetry in the paths taken by
   event messages.  Asymmetry is not detectable by PTP; however,
   if such delays are known a priori, PTP can correct for
   asymmetry.

IEC 61850 will recommend the use of the IEEE 1588 PTP Utility
Profile (as defined in [IEC62439-3:2012] Annex B), which offers
support for redundant attachment of clocks to Parallel Redundancy
Protocol (PRP) and High-availability Seamless Redundancy (HSR)
networks.

3.3.3.  Security Trends in Utility Networks

Although advanced telecommunications networks can assist in
transforming the energy industry by playing a critical role in
maintaining high levels of reliability, performance, and
manageability, they also introduce the need for an integrated
security infrastructure.  Many of the technologies being deployed
to support smart grid projects such as smart meters and sensors
can increase the vulnerability of the grid to attack.  Top
security concerns for utilities migrating to an intelligent smart
grid telecommunications platform center on the following trends:

   o  Integration of distributed energy resources

   o  Proliferation of digital devices to enable management,
      automation, protection, and control

   o  Regulatory mandates to comply with standards for critical
      infrastructure protection

   o  Migration to new systems for outage management, distribution
      automation, condition-based maintenance, load forecasting,
      and smart metering

   o  Demand for new levels of customer service and energy
      management

This development of a diverse set of networks to support the
integration of microgrids, open-access energy competition, and the
use of network-controlled devices is driving the need for a
converged security infrastructure for all participants in the
smart grid, including utilities, energy service providers, large
commercial and industrial customers, as well as residential
customers.  Securing the assets of electric power delivery systems
(from the control center to the substation, to the feeders and
down to customer meters) requires an end-to-end security
infrastructure that protects the myriad of telecommunications
assets used to operate, monitor, and control power flow and
measurement.

"Cyber security" refers to all the security issues in automation
and telecommunications that affect any functions related to the
operation of the electric power systems.
Specifically, it involves the concepts of:

   o  Integrity: data cannot be altered undetectably

   o  Authenticity: the telecommunications parties involved must
      be validated as genuine

   o  Authorization: only requests and commands from authorized
      users can be accepted by the system

   o  Confidentiality: data must not be accessible to any
      unauthenticated user

When designing and deploying new smart grid devices and
telecommunications systems, it is imperative to understand the
various impacts of these new components under a variety of attack
situations on the power grid.  Consequences of a cyber attack on
the grid telecommunications network can be catastrophic.  This is
why security for the smart grid is not just an ad hoc feature or
product; it is a complete framework integrating both physical and
cyber security requirements and covering the entire smart grid
network from generation to distribution.  Security has therefore
become one of the main foundations of the utility telecom network
architecture and must be considered at every layer with a
defense-in-depth approach.  Migrating to IP-based protocols is key
to addressing these challenges for two reasons:

   o  IP enables a rich set of features and capabilities to
      enhance the security posture

   o  IP is based on open standards, which allows interoperability
      between different vendors and products, driving down the
      costs associated with implementing security solutions in OT
      networks.

Securing OT (Operational Technology) telecommunications over
packet-switched IP networks follows the same principles that are
foundational for securing the IT infrastructure, i.e.,
consideration must be given to enforcing electronic access control
for both person-to-machine and machine-to-machine communications,
and providing the appropriate levels of data privacy, device and
platform integrity, and threat detection and mitigation.

3.4.  Electrical Utilities Asks

   o  Mixed L2 and L3 topologies

   o  Deterministic behavior

   o  Bounded latency and jitter

   o  Tight feedback intervals

   o  High availability, low recovery time

   o  Redundancy, low packet loss

   o  Precise timing

   o  Centralized computing of deterministic paths

   o  Distributed configuration may also be useful

4.  Building Automation Systems

4.1.  Use Case Description

A Building Automation System (BAS) manages equipment and sensors
in a building for improving residents' comfort, reducing energy
consumption, and responding to failures and emergencies.  For
example, the BAS measures the temperature of a room using sensors
and then controls the HVAC (heating, ventilating, and air
conditioning) to maintain a set temperature and minimize energy
consumption.

A BAS primarily performs the following functions:

   o  Periodically measures states of devices, for example
      humidity and illuminance of rooms, open/close state of
      doors, fan speed, etc.

   o  Stores the measured data.

   o  Provides the measured data to BAS systems and operators.

   o  Generates alarms for abnormal state of devices.

   o  Controls devices (e.g. turn off room lights at 10:00 PM).

4.2.  Building Automation Systems Today

4.2.1.  BAS Architecture

A typical BAS architecture of today is shown in Figure 4.
 +----------------------------+
 |                            |
 |  BMS     HMI               |
 |   |       |                |
 |  +----------------------+  |
 |  |  Management Network  |  |
 |  +----------------------+  |
 |   |       |                |
 |  LC      LC                |
 |   |       |                |
 |  +----------------------+  |
 |  |    Field Network     |  |
 |  +----------------------+  |
 |   |   |     |   |          |
 |  Dev Dev   Dev Dev         |
 |                            |
 +----------------------------+

 BMS := Building Management Server
 HMI := Human Machine Interface
 LC  := Local Controller

            Figure 4: BAS architecture

There are typically two layers of network in a BAS.  The upper one
is called the Management Network and the lower one is called the
Field Network.  In management networks an IP-based communication
protocol is used, while in field networks non-IP-based
communication protocols ("field protocols") are mainly used.
Field networks have specific timing requirements, whereas
management networks can be best-effort.

A Human Machine Interface (HMI) is typically a desktop PC used by
operators to monitor and display device states, send device
control commands to Local Controllers (LCs), and configure
building schedules (for example "turn off all room lights in the
building at 10:00 PM").

A Building Management Server (BMS) performs the following
operations:

   o  Collect and store device states from LCs at regular
      intervals.

   o  Send control values to LCs according to a building schedule.

   o  Send an alarm signal to operators if it detects abnormal
      device states.

The BMS and HMI communicate with LCs via IP-based "management
protocols" (see standards [bacnetip], [knx]).

An LC is typically a Programmable Logic Controller (PLC) which is
connected to several tens or hundreds of devices using "field
protocols".  An LC performs the following kinds of operations:

   o  Measure device states and provide the information to the BMS
      or HMI.

   o  Send control values to devices, unilaterally or as part of a
      feedback control loop.

There are many field protocols used today; some are standards-
based and others are proprietary (see standards [lontalk],
[modbus], [profibus] and [flnet]).  The result is that BASs have
multiple MAC/PHY modules and interfaces.  This makes BASs more
expensive, slower to develop, and can result in "vendor lock-in"
with multiple types of management applications.
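A minimal sketch of the periodic collect-and-store operation of
the BMS described above.  The LC names, the poll interval, and the
read_states() helper (standing in for a management-protocol read
such as BACnet/IP [bacnetip]) are all hypothetical; only the
structure of the loop is of interest.

   import time

   POLL_INTERVAL_S = 1.0                          # assumed interval
   LCS = ["lc-floor1", "lc-floor2", "lc-floor3"]  # assumed LC names

   def read_states(lc):
       # Placeholder for a management-protocol read returning a
       # dict of point name -> value.
       raise NotImplementedError

   history = []   # a real BMS would use a time-series database

   while True:
       t0 = time.monotonic()
       for lc in LCS:
           history.append((time.time(), lc, read_states(lc)))
       # Sleep out the remainder of the polling interval.
       time.sleep(max(0.0, POLL_INTERVAL_S
                      - (time.monotonic() - t0)))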
4.2.2.  BAS Deployment Model

An example BAS for medium or large buildings is shown in Figure 5.
The physical layout spans multiple floors, and there is a
monitoring room where the BAS management entities are located.
Each floor will have one or more LCs depending upon the number of
devices connected to the field network.

 +--------------------------------------------------+
 | Floor 3                                          |
 |     +----LC~~~~+~~~~~+~~~~~+                     |
 |     |          |     |     |                     |
 |     |         Dev   Dev   Dev                    |
 |     |                                            |
 |---  | ------------------------------------------|
 |     | Floor 2                                    |
 |     +----LC~~~~+~~~~~+~~~~~+  Field Network      |
 |     |          |     |     |                     |
 |     |         Dev   Dev   Dev                    |
 |     |                                            |
 |---  | ------------------------------------------|
 |     | Floor 1                                    |
 |     +----LC~~~~+~~~~~+~~~~~+  +-----------------|
 |     |          |     |     |  | Monitoring Room |
 |     |         Dev   Dev   Dev |                 |
 |     |                         |  BMS     HMI    |
 |     |   Management Network    |   |       |     |
 |     +-------------------------------+-----+     |
 |                                                 |
 +--------------------------------------------------+

    Figure 5: BAS Deployment Model for Medium/Large Buildings

Each LC is connected to the monitoring room via the management
network, and the management functions are performed within the
building.  In most cases, fast Ethernet (e.g. 100BASE-T) is used
for the management network.  Since the management network is
non-realtime, use of Ethernet without quality of service is
sufficient for today's deployments.

In the field network a variety of physical interfaces such as
RS232C and RS485 are used, which have specific timing
requirements.  Thus if a field network is to be replaced with an
Ethernet or wireless network, such networks must support
time-critical deterministic flows.

In Figure 6, another deployment model is presented in which the
management system is hosted remotely.  This is becoming popular
for small office and residential buildings in which a standalone
monitoring system is not cost-effective.

                                        +---------------+
                                        | Remote Center |
                                        |               |
                                        |  BMS     HMI  |
 +------------------------------------+ |   |       |   |
 | Floor 2                            | |  +---+---+    |
 |     +----LC~~~~+~~~~~+ Field Network| |     |        |
 |     |          |     |             | |   Router      |
 |     |         Dev   Dev            | +-------|-------+
 |     |                              |         |
 |---  | -----------------------------|         |
 |     | Floor 1                      |         |
 |     +----LC~~~~+~~~~~+             |         |
 |     |          |     |             |         |
 |     |         Dev   Dev            |         |
 |     |                              |         |
 |     |   Management Network         |   WAN   |
 |     +------------------------Router----------+
 |                                    |
 +------------------------------------+

       Figure 6: Deployment Model for Small Buildings

Some interoperability is possible today in the management network,
but not in today's field networks due to their non-IP-based
design.

4.2.3.  Use Cases for Field Networks

Below are use cases for Environmental Monitoring, Fire Detection,
and Feedback Control, and their implications for field network
performance.

4.2.3.1.  Environmental Monitoring

The BMS polls each LC at a maximum measurement interval of 100 ms
(for example, to draw a historical chart with 1-second granularity
from a 10x sampling interval) and then performs the operations as
specified by the operator.  Each LC needs to measure each of its
several hundred sensors once per measurement interval.  Latency is
not critical in this scenario as long as all sensor values are
collected within the measurement interval.  Availability is
expected to be 99.999%.
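The measurement interval directly bounds the time available to
acquire each sensor value.  A back-of-the-envelope sketch,
assuming a hypothetical LC with 300 sensors:

   interval_ms = 100.0    # maximum measurement interval from above
   sensors_per_lc = 300   # "several hundred" sensors; assumed value

   per_sensor_budget_ms = interval_ms / sensors_per_lc
   print(per_sensor_budget_ms)   # ~0.33 ms to read each sensor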
4.2.3.2.  Fire Detection

On detection of a fire, the BMS must stop the HVAC, close the fire
shutters, turn on the fire sprinklers, send an alarm, etc.  There
are typically ~10s of sensors per LC that the BMS needs to manage.
In this scenario the measurement interval is 10-50 ms, the
communication delay is 10 ms, and the availability must be
99.9999%.

4.2.3.3.  Feedback Control

BAS systems utilize feedback control in various ways; the most
time-critical is control of DC motors, which require a short
feedback interval (1-5 ms) with low communication delay (10 ms)
and jitter (1 ms).  The feedback interval depends on the
characteristics of the device and a target quality-of-control
value.  There are typically ~10s of such devices per LC.

Communication delay is expected to be less than 10 ms, and jitter
less than 1 ms, while the availability must be 99.9999%.
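These numbers translate into a hard periodic deadline for the
network path between the LC and the device.  The sketch below
shows the shape of such a loop with deadline-miss detection; the
I/O functions are stubs, the control law is application-specific,
and the 5 ms period is just one point in the 1-5 ms range given
above.

   import time

   PERIOD_S = 0.005         # 5 ms feedback interval (assumed)
   DELAY_BUDGET_S = 0.010   # 10 ms communication delay bound

   def read_sensor():       # stub: network read from the device
       raise NotImplementedError

   def write_actuator(v):   # stub: network write to the device
       raise NotImplementedError

   def control_law(x):      # stub: application-specific controller
       return x

   next_deadline = time.monotonic() + PERIOD_S
   while True:
       start = time.monotonic()
       write_actuator(control_law(read_sensor()))
       if time.monotonic() - start > DELAY_BUDGET_S:
           pass   # deadline miss: degraded control; raise an alarm
       next_deadline += PERIOD_S
       time.sleep(max(0.0, next_deadline - time.monotonic()))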
4.2.4.  Security Considerations

When BAS field networks were developed it was assumed that the
field networks would always be physically isolated from external
networks, and therefore security was not a concern.  In today's
world many BASs are managed remotely and are thus connected to
shared IP networks, so security is definitely a concern, yet
security features are not available in the majority of BAS field
network deployments.

The management network, being an IP-based network, has the
protocols available to enable network security, but in practice
many BAS systems do not implement even the available security
features such as device authentication or encryption for data in
transit.

4.3.  BAS Future

In the future we expect more fine-grained environmental monitoring
and lower energy consumption, which will require more sensors and
devices, thus requiring larger and more complex building networks.

We expect building networks to be connected to or converged with
other networks (enterprise network, home network, and Internet).

Therefore better facilities for network management, control,
reliability and security are critical in order to improve resident
and operator convenience and comfort.  For example, the ability to
monitor and control building devices via the Internet would enable
control of room lights or HVAC from a resident's desktop PC or
phone application.

4.4.  BAS Asks

The community would like to see an interoperable protocol
specification that can satisfy the timing, security, availability
and QoS constraints described above, such that the resulting
converged network can replace the disparate field networks.
Ideally this connectivity could extend to the open Internet.

This would imply an architecture that can guarantee

   o  Low communication delays (from <10 ms to 100 ms in a network
      of several hundred devices)

   o  Low jitter (< 1 ms)

   o  Tight feedback intervals (1 ms - 10 ms)

   o  High network availability (up to 99.9999%)

   o  Availability of network data in disaster scenarios

   o  Authentication between management and field devices (both
      local and remote)

   o  Integrity and data origin authentication of communication
      data between field and management devices

   o  Confidentiality of data when communicated to a remote device

5.  Wireless for Industrial

5.1.  Use Case Description

Wireless networks are useful for industrial applications, for
example when portable, fast-moving or rotating objects are
involved, and for the resource-constrained devices found in the
Internet of Things (IoT).

Such network-connected sensors, actuators, control loops (etc.)
typically require that the underlying network support real-time
quality of service (QoS), as well as specific classes of other
network properties such as reliability, redundancy, and security.

These networks may also contain very large numbers of devices, for
example in factories, "big data" acquisition, and the IoT.  Given
the large numbers of devices installed, and the potential
pervasiveness of the IoT, this is a huge and very cost-sensitive
market.  For example, a 1% cost reduction in some areas could save
$100B.

5.1.1.  Network Convergence using 6TiSCH

Some wireless network technologies support real-time QoS, and are
thus useful for these kinds of networks, but others do not.  For
example WiFi is pervasive but does not provide guaranteed timing
or delivery of packets, and thus is not useful in this context.

In this use case we focus on one specific wireless network
technology which does provide the required deterministic QoS,
which is "IPv6 over the TSCH mode of IEEE 802.15.4e" (6TiSCH,
where TSCH stands for "Time-Slotted Channel Hopping", see
[I-D.ietf-6tisch-architecture], [IEEE802154], [IEEE802154e], and
[RFC7554]).

There are other deterministic wireless busses and networks
available today; however, they are incompatible with each other,
and incompatible with IP traffic (for example [ISA100],
[WirelessHART]).

Thus the primary goal of this use case is to apply 6TiSCH as a
converged IP- and standards-based wireless network for industrial
applications, i.e. to replace multiple proprietary and/or
incompatible wireless networking and wireless network management
standards.

5.1.2.  Common Protocol Development for 6TiSCH

Today there are a number of protocols required by 6TiSCH which are
still in development, and a second intent of this use case is to
highlight the ways in which these "missing" protocols share goals
in common with DetNet.  Thus it is possible that some of the
protocol technology developed for DetNet will also be applicable
to 6TiSCH.

These protocol goals are identified here, along with their
relationship to DetNet.  It is likely that ultimately the
resulting protocols will not be identical, but will share design
principles which contribute to the efficiency of enabling both
DetNet and 6TiSCH.

One such commonality is that, although at a different time scale,
in both TSN [IEEE802.1TSNTG] and TSCH a packet crossing the
network from node to node follows a precise schedule, like a train
that leaves intermediate stations at precise times along its path.
This kind of operation reduces collisions, saves energy, and
enables engineering the network for deterministic properties.

Another commonality is remote monitoring and scheduling management
of a TSCH network by a Path Computation Element (PCE) and Network
Management Entity (NME).  The PCE/NME manage timeslots and device
resources in a manner that minimizes the interaction with, and the
load placed on, resource-constrained devices.  For example, a tiny
IoT device may have just enough buffers to store one or a few IPv6
packets, and will have limited bandwidth between peers such that
it can maintain only a small amount of peer information, and will
not be able to store many packets waiting to be forwarded.
It is advantageous then for it to only be required to carry out
the specific behavior assigned to it by the PCE/NME (as opposed to
maintaining its own IP stack, for example).

Note: Current WG discussion indicates that some peer-to-peer
communication must be assumed, i.e. the PCE may communicate only
indirectly with any given device, enabling hierarchical
configuration of the system.

6TiSCH depends on [PCE] and [I-D.finn-detnet-architecture].

6TiSCH also depends on the fact that DetNet will maintain
consistency with [IEEE802.1TSNTG].

5.2.  Wireless Industrial Today

Today industrial wireless is accomplished using multiple
deterministic wireless networks which are incompatible with each
other and with IP traffic.

6TiSCH is not yet fully specified, so it cannot be used in today's
applications.

5.3.  Wireless Industrial Future

5.3.1.  Unified Wireless Network and Management

We expect DetNet and 6TiSCH together to enable converged transport
of deterministic and best-effort traffic flows between real-time
industrial devices and wide area networks via IP routing.  A high
level view of a basic such network is shown in Figure 7.

 ---+-------- ............ ------------
    |      External Network  |
    |              +-----+
 +-----+           | NME |
 |     | LLN Border|     |
 |     | router    +-----+
 +-----+
    o    o   o
  o    o   o   o   o
    o   o  LLN   o   o
      o    o    o
           o

            Figure 7: Basic 6TiSCH Network

Figure 8 shows a backbone router federating multiple synchronized
6TiSCH subnets into a single subnet connected to the external
network.

 ---+-------- ............ ------------
    |      External Network  |
    |                  +-----+
    |       +-----+    | NME |
 +-----+    |     |    |     |
 |     | Router    | PCE |  +-----+
 |     |        +--|     |
 +-----+           +-----+
    |                 |
    |  Subnet Backbone   |
 +--------------------+------------------+
 |                    |                  |
 +-----+           +-----+            +-----+
 |     | Backbone  |     | Backbone   |     | Backbone
o|     | router    |     | router     |     | router
 +-----+           +-----+            +-----+
 o   o    o      o   o       o       o    o   o
 o  o  o   o  o   o   o    o   o   o   o   o
 o    o   o     LLN      o     o   o      o
 o   o  o    o    o   o    o   o   o   o   o

          Figure 8: Extended 6TiSCH Network

The backbone router must ensure end-to-end deterministic behavior
between the LLN and the backbone.  We would like to see this
accomplished in conformance with the work done in
[I-D.finn-detnet-architecture] with respect to Layer-3 aspects of
deterministic networks that span multiple Layer-2 domains.

The PCE must compute a deterministic path end-to-end across the
TSCH network and IEEE802.1 TSN Ethernet backbone, and DetNet
protocols are expected to enable end-to-end deterministic
forwarding.

                     +-----+
                     | IoT |
                     | G/W |
                     +-----+
                        ^  <---- Elimination
                       | |
        Track branch   | |
            +-------+  +--------+  Subnet Backbone
            |                   |
         +--|--+             +--|--+
         |  |  | Backbone    |  |  | Backbone
      o  |  |  | router      |  |  | router
         +--/--+             +--|--+
      o    /   o     o---o----/     o
         o    o---o--/  o      o   o   o
      o   \  /    o       o   LLN    o
         o  v     <---- Replication
            o

           Figure 9: 6TiSCH Network with PRE

5.3.1.1.  PCE and 6TiSCH ARQ Retries

Note: The use of ARQ techniques in DetNet is currently considered
a possible design alternative.

6TiSCH uses the IEEE802.15.4 Automatic Repeat-reQuest (ARQ)
mechanism to provide higher reliability of packet delivery.
ARQ is related to packet replication and elimination because there
are two independent paths for packets to arrive at the
destination; if an expected packet does not arrive on one path,
the receiver checks for the packet on the second path.

Although to date this mechanism is only used by wireless networks,
this may be a technique that would be appropriate for DetNet, and
so aspects of the enabling protocol could be co-developed.

For example, in Figure 9, a Track is laid out from a field device
in a 6TiSCH network to an IoT gateway that is located on an
IEEE802.1 TSN backbone.

With PRE (Packet Replication and Elimination), the Replication
function in the field device sends a copy of each packet over two
different branches, and the PCE schedules each hop of both
branches so that the two copies arrive in due time at the gateway.
In case of a loss on one branch, hopefully the other copy of the
packet still arrives within the allocated time.  If two copies
make it to the IoT gateway, the Elimination function in the
gateway ignores the extra packet and presents only one copy to
upper layers.

At each 6TiSCH hop along the Track, the PCE may schedule more than
one timeSlot for a packet, so as to support Layer-2 retries (ARQ).

In current deployments, a TSCH Track does not necessarily support
PRE but is systematically multi-path.  This means that a Track is
scheduled so as to ensure that each hop has at least two
forwarding solutions, and the forwarding decision is to try the
preferred one and use the other in case of Layer-2 transmission
failure as detected by ARQ.
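A minimal sketch of the Elimination function described above:
packets carry a per-Track sequence number added by the Replication
function, the first copy to arrive is passed up, and any later
copy carrying an already-seen sequence number is dropped.  The
windowed set is our own simplification of how such state might be
kept.

   class EliminationFunction:
       # Assumes each packet carries (track_id, seq) added at the
       # Replication function; the window size is an assumed value.
       def __init__(self, window=64):
           self.window = window
           self.seen = {}     # track_id -> set of recent seq numbers

       def accept(self, track_id, seq):
           seen = self.seen.setdefault(track_id, set())
           if seq in seen:
               return False   # duplicate from the other branch: drop
           seen.add(seq)
           # Forget sequence numbers older than the window.
           for old in [s for s in seen if s <= seq - self.window]:
               seen.discard(old)
           return True        # first copy: present to upper layers

   # elim = EliminationFunction()
   # elim.accept("track-1", 7)  -> True  (first copy forwarded)
   # elim.accept("track-1", 7)  -> False (second copy eliminated)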
5.3.2.  Schedule Management by a PCE

A common feature of 6TiSCH and DetNet is the action of a PCE to
configure paths through the network.  Specifically, what is needed
is a protocol and data model that the PCE will use to get/set the
relevant configuration from/to the devices, as well as perform
operations on the devices.  We expect that this protocol will be
developed by DetNet with consideration for its reuse by 6TiSCH.
The remainder of this section provides a bit more context from the
6TiSCH side.

5.3.2.1.  PCE Commands and 6TiSCH CoAP Requests

The 6TiSCH device does not expect to place the request for
bandwidth between itself and another device in the network.
Rather, an operation control system invoked through a human
interface specifies the required traffic specification and the end
nodes (in terms of latency and reliability).  Based on this
information, the PCE must compute a path between the end nodes and
provision the network with per-flow state that describes the
per-hop operation for a given packet, the corresponding timeslots,
and the flow identification that enables recognizing that a
certain packet belongs to a certain path, etc.

For a static configuration that serves a certain purpose for a
long period of time, it is expected that a node will be
provisioned in one shot with a full schedule, which incorporates
the aggregation of its behavior for multiple paths.  6TiSCH
expects that the programming of the schedule will be done over
CoAP as discussed in [I-D.ietf-6tisch-coap].

6TiSCH expects that the PCE commands will be mapped back and forth
into CoAP by a gateway function at the edge of the 6TiSCH network.
For instance, it is possible that a mapping entity on the backbone
transforms a non-CoAP protocol such as PCEP into the RESTful
interfaces that the 6TiSCH devices support.  This architecture
will be refined to comply with DetNet
[I-D.finn-detnet-architecture] when the work is formalized.
Related information about 6TiSCH can be found in
[I-D.ietf-6tisch-6top-interface]; see also RPL [RFC6550].

A protocol may be used to update the state in the devices during
runtime, for example if it appears that a path through the network
has ceased to perform as expected, but in 6TiSCH that flow has not
been designed and no protocol has been selected.  We would like to
see DetNet define the appropriate end-to-end protocols to be used
in that case.  The implication is that these state updates take
place once the system is configured and running, i.e. they are not
limited to the initial communication of the configuration of the
system.

A "slotFrame" is the base object that a PCE would manipulate to
program a schedule into an LLN node
([I-D.ietf-6tisch-architecture]).

We would like to see the PCE read energy data from devices, and
compute paths that will implement policies on how energy in
devices is consumed, for instance to ensure that the spent energy
does not exceed the available energy over a period of time.  Note:
this statement implies that an extensible protocol for
communicating device info to the PCE and enabling the PCE to act
on it will be part of the DetNet architecture; however, for
subnets with specific protocols (e.g. CoAP) a gateway may be
required.

6TiSCH devices can discover their neighbors over the radio using a
mechanism such as beacons, but even though the neighbor
information is available in the 6TiSCH interface data model,
6TiSCH does not describe a protocol to proactively push the
neighborhood information to a PCE.  We would like to see DetNet
define such a protocol; one possible design alternative is that it
could operate over CoAP, alternatively it could be converted
to/from CoAP by a gateway.  We would like to see such a protocol
carry multiple metrics, for example similar to those used for RPL
operations [RFC6551].

5.3.2.2.  6TiSCH IP Interface

"6top" ([I-D.wang-6tisch-6top-sublayer]) is a logical link control
sitting between the IP layer and the TSCH MAC layer which provides
the link abstraction that is required for IP operations.  The 6top
data model and management interfaces are further discussed in
[I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].

An IP packet that is sent along a 6TiSCH path uses the
Differentiated Services Per-Hop-Behavior Group called
Deterministic Forwarding, as described in
[I-D.svshah-tsvwg-deterministic-forwarding].

5.3.3.  6TiSCH Security Considerations

On top of the classical requirements for protection of control
signaling, it must be noted that 6TiSCH networks operate on
limited resources that can be depleted rapidly in a DoS attack on
the system, for instance by placing a rogue device in the network,
or by obtaining management control and setting up unexpected
additional paths.
5.4.  Wireless Industrial Asks

6TiSCH depends on DetNet to define:

   o  Configuration (state) and operations for deterministic paths

   o  End-to-end protocols for deterministic forwarding (tagging,
      IP)

   o  Protocol for packet replication and elimination

6.  Cellular Radio

6.1.  Use Case Description

This use case describes the application of deterministic
networking in the context of cellular telecom transport networks.
Important elements include time synchronization, clock
distribution, and ways of establishing time-sensitive streams for
both Layer-2 and Layer-3 user plane traffic.

6.1.1.  Network Architecture

Figure 10 illustrates a typical 3GPP-defined cellular network
architecture, which includes "Fronthaul" and "Midhaul" network
segments.  The "Fronthaul" is the network connecting base stations
(baseband processing units) to the remote radio heads (antennas).
The "Midhaul" is the network inter-connecting base stations (or
small cell sites).

In Figure 10, "eNB" ("E-UTRAN Node B") is the hardware that is
connected to the mobile phone network and communicates directly
with mobile handsets ([TS36300]).

     Y (remote radio heads (antennas))
      \
   Y__ \.--.                    .--.         +------+
      \_(    `.    +---+     _(Back`.        | 3GPP |
   Y------( Front )----|eNB|----( Haul )-----| core |
        ( ` .Haul )    +---+   ( ` .  )  )   | netw |
        /`--(___.-'       \     `--(___.-'   +------+
   Y_/         /           \.--.       \
   Y_/        _( Mid`.                  \
             ( Haul   )                  \
             ( ` .  )  )                  \
              `--(___.-'\_____+---+    (small cell sites)
                         \    |SCe|__Y
                      +---+   +---+
                   Y__|eNB|__Y
                      +---+
                    Y_/   \_Y ("local" radios)

   Figure 10: Generic 3GPP-based Cellular Network Architecture

6.1.2.  Delay Constraints

The available processing time for Fronthaul networking overhead is
limited to the time remaining after the baseband processing of the
radio frame has completed.  For example, in Long Term Evolution
(LTE) radio, processing of a radio frame is allocated 3 ms, but
typically the processing uses most of it, allowing only a small
fraction to be used by the Fronthaul network (e.g. up to 250 us
one-way delay, though the existing specification ([NGMN-fronth])
supports delay only up to 100 us).  This ultimately determines the
distance the remote radio heads can be located from the base
stations (e.g., 100 us equals roughly 20 km of optical fiber-based
transport).  Allocation options of the available time budget
between processing and transport are under heavy discussion in the
mobile industry.

For packet-based transport, the allocated transport time (e.g.
CPRI would allow for 100 us delay [CPRI]) is consumed by all nodes
and buffering between the remote radio head and the baseband
processing unit, plus the distance-incurred delay.

The baseband processing time and the available "delay budget" for
the fronthaul is likely to change in the forthcoming "5G" due to
reduced radio round trip times and other architectural and service
requirements [NGMN].

[METIS] documents the fundamental challenges as well as overall
technical goals of the future 5G mobile and wireless system as the
starting point.  These future systems should support much higher
data volumes and rates and significantly lower end-to-end latency
for 100x more connected devices (at similar cost and energy
consumption levels as today's system).
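The relationship quoted above between the one-way delay budget and
the reach of the remote radio heads can be checked with simple
arithmetic.  The sketch below assumes a propagation delay of
roughly 5 us per km of fiber (an approximation); node processing
and buffering delays are an assumed input.

   FIBER_DELAY_US_PER_KM = 5.0   # approx. speed of light in fiber

   def max_fiber_km(one_way_budget_us, node_and_buffer_us=0.0):
       # Distance permitted once per-node delays are subtracted.
       return ((one_way_budget_us - node_and_buffer_us)
               / FIBER_DELAY_US_PER_KM)

   print(max_fiber_km(100.0))   # -> 20.0 km, as noted above
   print(max_fiber_km(250.0))   # -> 50.0 km with the larger budget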
For Midhaul connections, delay constraints are driven by
inter-site radio functions like Coordinated Multipoint Processing
(CoMP, see [CoMP]).  CoMP reception and transmission is a
framework in which multiple geographically distributed antenna
nodes cooperate to improve the performance of the users served in
the common cooperation area.  The design principle of CoMP is to
extend the current single-cell to multi-UE (User Equipment)
transmission to a multi-cell-to-multi-UE transmission by base
station cooperation.

CoMP has delay-sensitive performance parameters, which are
"midhaul latency" and "CSI (Channel State Information) reporting
and accuracy".  The essential feature of CoMP is signaling between
eNBs, so midhaul latency is the dominating limitation of CoMP
performance.  Generally, CoMP can benefit from coordinated
scheduling (either distributed or centralized) of different cells
if the signaling delay between eNBs is within 1-10 ms.  This delay
requirement is both rigid and absolute because any uncertainty in
delay will degrade the performance significantly.

Inter-site CoMP is one of the key requirements for 5G and is also
a near-term goal for the current 4.5G network architecture.

6.1.3.  Time Synchronization Constraints

Fronthaul time synchronization requirements are given by
[TS25104], [TS36104], [TS36211], and [TS36133].  These can be
summarized for the current 3GPP LTE-based networks as:

Delay Accuracy:
   +-8 ns (i.e. +-1/32 Tc, where Tc is the UMTS chip time of
   1/(3.84 MHz)), resulting in a round-trip accuracy of +-16 ns.
   The value is this low to meet the 3GPP Timing Alignment Error
   (TAE) measurement requirements.  Note: performance guarantees
   of low nanosecond values such as these are considered to be
   below the DetNet layer - it is assumed that the underlying
   implementation, e.g. the hardware, will provide sufficient
   support (e.g. buffering) to enable this level of accuracy.
   These values are maintained in the use case to give an
   indication of the overall application.

Timing Alignment Error:
   Timing Alignment Error (TAE) is problematic to Fronthaul
   networks and must be minimized.  If the transport network
   cannot guarantee low enough TAE, then additional buffering has
   to be introduced at the edges of the network to buffer out the
   jitter.  Buffering is not desirable as it reduces the total
   available delay budget.  Packet Delay Variation (PDV)
   requirements can be derived from TAE for packet-based Fronthaul
   networks.

   *  For multiple input multiple output (MIMO) or TX diversity
      transmissions, at each carrier frequency, TAE shall not
      exceed 65 ns (i.e. 1/4 Tc).

   *  For intra-band contiguous carrier aggregation, with or
      without MIMO or TX diversity, TAE shall not exceed 130 ns
      (i.e. 1/2 Tc).

   *  For intra-band non-contiguous carrier aggregation, with or
      without MIMO or TX diversity, TAE shall not exceed 260 ns
      (i.e. one Tc).

   *  For inter-band carrier aggregation, with or without MIMO or
      TX diversity, TAE shall not exceed 260 ns.

Transport link contribution to radio frequency error:
   +-2 PPB.  This value is considered to be "available" for the
   Fronthaul link out of the total 50 PPB budget reserved for the
   radio interface.  Note: the reason that the transport link
   contributes to radio frequency error is as follows.  The
   current way of doing Fronthaul is from the radio unit to the
   remote radio head directly.  The remote radio head is
   essentially a passive device (without buffering, etc.).  The
   transport drives the antenna directly by feeding it with
   samples, and everything the transport adds is introduced to the
   radio as-is.  So if the transport causes additional frequency
   error, it shows up immediately on the radio as well.  Note:
   performance guarantees of low nanosecond values such as these
   are considered to be below the DetNet layer - it is assumed
   that the underlying implementation, e.g. the hardware, will
   provide sufficient support to enable this level of performance.
   These values are maintained in the use case to give an
   indication of the overall application.
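The chip-time-derived values above follow directly from
Tc = 1/(3.84 MHz); a quick non-normative check:

   Tc_ns = 1e9 / 3.84e6    # UMTS chip time: ~260.4 ns

   print(Tc_ns / 32)       # ~8.1 ns   -> +-8 ns delay accuracy
   print(Tc_ns / 4)        # ~65.1 ns  -> MIMO / TX diversity TAE
   print(Tc_ns / 2)        # ~130.2 ns -> intra-band contiguous CA
   print(Tc_ns)            # ~260.4 ns -> non-contiguous / inter-band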
The above listed time synchronization requirements are difficult
to meet with point-to-point connected networks, and more difficult
when the network includes multiple hops.  It is expected that
networks must include buffering at the ends of the connections as
imposed by the jitter requirements, since trying to meet the
jitter requirements in every intermediate node is likely to be too
costly.  However, every measure to reduce jitter and delay on the
path makes it easier to meet the end-to-end requirements.

In order to meet the timing requirements, both senders and
receivers must remain time synchronized, demanding very accurate
clock distribution, for example support for IEEE 1588 transparent
clocks or boundary clocks in every intermediate node.

In cellular networks from the LTE radio era onward, phase
synchronization is needed in addition to frequency synchronization
([TS36300], [TS23401]).

6.1.4.  Transport Loss Constraints

Fronthaul and Midhaul networks assume almost error-free transport.
Errors can result in a reset of the radio interfaces, which can
cause reduced throughput or broken radio connectivity for mobile
customers.

For packetized Fronthaul and Midhaul connections, packet loss may
be caused by BER, congestion, or network failure scenarios.
Current tools for eliminating packet loss for Fronthaul and
Midhaul networks have serious challenges; for example,
retransmitting lost packets and/or using forward error correction
(FEC) to circumvent bit errors is practically impossible due to
the additional delay incurred.  Using redundant streams for better
delivery guarantees is also practically impossible in many cases
due to the high bandwidth requirements of Fronthaul and Midhaul
networks.  Protection switching is also a candidate, but current
technologies for the path switch are too slow to avoid reset of
mobile interfaces.

Fronthaul links are assumed to be symmetric, and all Fronthaul
streams (i.e. those carrying radio data) have equal priority and
cannot delay or pre-empt each other.  This implies that the
network must guarantee that each time-sensitive flow meets its
schedule.

6.1.5.  Security Considerations

Establishing time-sensitive streams in the network entails
reserving networking resources for long periods of time.  It is
important that these reservation requests be authenticated to
prevent malicious reservation attempts from hostile nodes (or
accidental misconfiguration).  This is particularly important in
the case where the reservation requests span administrative
domains.
Furthermore, the reservation information itself should be
digitally signed to reduce the risk of a legitimate node pushing a
stale or hostile configuration into another networking node.

Note: This is considered important for the security policy of the
network, but does not affect the core DetNet architecture and
design.

6.2.  Cellular Radio Networks Today

6.2.1.  Fronthaul

Today's Fronthaul networks typically consist of:

   o  Dedicated point-to-point fiber connections (common)

   o  Proprietary protocols and framings

   o  Custom equipment and no real networking

Current solutions for Fronthaul are direct optical cables or
Wavelength-Division Multiplexing (WDM) connections.

6.2.2.  Midhaul and Backhaul

Today's Midhaul and Backhaul networks typically consist of:

   o  Mostly normal IP networks, MPLS-TP, etc.

   o  Clock distribution and sync using 1588 and SyncE

Telecommunication networks in the Mid- and Backhaul are already
heading towards transport networks where precise time
synchronization support is one of the basic building blocks.
While the transport networks themselves have practically
transitioned to all-IP packet-based networks to meet the bandwidth
and cost requirements, highly accurate clock distribution has
become a challenge.

In the past, Mid- and Backhaul connections were typically based on
Time Division Multiplexing (TDM) and provided frequency
synchronization capabilities as a part of the transport media.
Alternatively, other technologies such as the Global Positioning
System (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].

Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE)
[RFC3985] for legacy transport support) have become popular tools
to build and manage new all-IP Radio Access Networks (RANs)
[I-D.kh-spring-ip-ran-use-case].  Although various timing and
synchronization optimizations have already been proposed and
implemented, including 1588 PTP enhancements
[I-D.ietf-tictoc-1588overmpls] and [I-D.ietf-mpls-residence-time],
these solutions are not necessarily sufficient for the forthcoming
RAN architectures, nor do they guarantee the more stringent
time-synchronization requirements such as those of [CPRI].

There are also existing solutions for TDM over IP such as
[RFC5087] and [RFC4553], as well as TDM over Ethernet transports
such as [RFC5086].

6.3.  Cellular Radio Networks Future

Future cellular radio networks will be based on a mix of different
xHaul networks (xHaul = front-, mid- and backhaul), and future
transport networks should be able to support all of them
simultaneously.  It is already envisioned today that:

   o  Not all "cellular radio network" traffic will be IP; for
      example some will remain at Layer 2 (e.g. Ethernet based).
      DetNet solutions must address all traffic types (Layer 2,
      Layer 3) with the same tools and allow their transport
      simultaneously.

   o  All forms of xHaul networks will need some form of DetNet
      solutions.  For example, with the advent of 5G some Backhaul
      traffic will also have DetNet requirements (e.g. traffic
      belonging to time-critical 5G applications).
We would like to see the following in future Cellular Radio networks:

o Unified standards-based transport protocols and standard networking equipment that can make use of underlying deterministic link-layer services

o Unified and standards-based network management systems and protocols in all parts of the network (including Fronthaul)

New radio access network deployment models and architectures may require time-sensitive networking services with strict requirements on other parts of the network that previously were not considered to be packetized at all. Time and synchronization support are already topical for Backhaul and Midhaul packet networks [MEF] and are becoming a real issue for Fronthaul networks as well. Specifically, in Fronthaul networks the timing and synchronization requirements can be extreme for packet-based technologies, for example, on the order of below +/-20 ns packet delay variation (PDV) and a frequency accuracy of +0.002 PPM [Fronthaul].

The actual transport protocols and/or solutions to establish the required transport "circuits" (pinned-down paths) for Fronthaul traffic are still undefined. Those are likely to include (but are not limited to) solutions directly over Ethernet, over IP, and using MPLS/PseudoWire transport.

Even the current time-sensitive networking features may not be sufficient for Fronthaul traffic. Therefore, having specific profiles that take the requirements of Fronthaul into account is desirable [IEEE8021CM].

Interesting and important work for time-sensitive networking has been done for Ethernet [TSNTG], which specifies the use of the IEEE 1588 Precision Time Protocol (PTP) [IEEE1588] in the context of IEEE 802.1D and IEEE 802.1Q. [IEEE8021AS] specifies a Layer 2 time synchronizing service, and other specifications such as IEEE 1722 [IEEE1722] specify Ethernet-based Layer-2 transport for time-sensitive streams.

Promising new work seeks to enable the transport of time-sensitive fronthaul streams in Ethernet bridged networks [IEEE8021CM]. Analogous to IEEE 1722, there is an ongoing standardization effort to define the Layer-2 transport encapsulation format for transporting radio over Ethernet (RoE) in the IEEE 1904.3 Task Force [IEEE19043].

All-IP RANs and xHaul networks would benefit from time synchronization and time-sensitive transport services. Although Ethernet appears to be the unifying technology for the transport, there is still a disconnect in providing Layer 3 services. The protocol stack typically has a number of layers below the Ethernet Layer 2 that is visible to the Layer 3 IP transport. It is not uncommon that on top of the lowest-layer (optical) transport there is a first layer of Ethernet, followed by one or more layers of MPLS, PseudoWires, and/or other tunneling protocols that finally carry the Ethernet layer visible to the user-plane IP traffic.

While there are existing technologies to establish circuits through the routed and switched networks (especially in the MPLS/PWE space), there is still no way to signal the time synchronization and time-sensitive stream requirements/reservations for Layer-3 flows in a way that addresses the entire transport stack, including the Ethernet layers that need to be configured.

Furthermore, not all "user plane" traffic will be IP. Therefore, the same solution must also address the use cases where the user plane traffic is at a different layer, for example Ethernet frames.
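To make the Fronthaul PDV figure cited earlier in this section concrete, the following minimal sketch (an illustration only, not a conformance test; the delay samples are invented, the simple max-minus-min formulation is only one of several PDV definitions in the spirit of [RFC3393], and the +/-20 ns figure is treated as a 40 ns peak-to-peak allowance) checks one-way delay samples against the budget:

   def pdv_ns(one_way_delays_ns):
       # Spread between the largest and smallest observed one-way
       # delays: a simple packet-delay-variation formulation.
       return max(one_way_delays_ns) - min(one_way_delays_ns)

   # Hypothetical per-packet one-way delays on a Fronthaul path (ns):
   samples = [5012, 5020, 5003, 5018, 5009]
   assert pdv_ns(samples) <= 40, "outside the +/-20 ns PDV budget"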
There is existing work describing the problem statement [I-D.finn-detnet-problem-statement] and the architecture [I-D.finn-detnet-architecture] for deterministic networking (DetNet) that targets solutions for time-sensitive (IP/transport) streams with deterministic properties over Ethernet-based switched networks.

6.4. Cellular Radio Networks Asks

A standard for data plane transport specification that is:

o Unified among all xHauls (meaning that different flows with diverse DetNet requirements can coexist in the same network and traverse the same nodes without interfering with each other)

o Deployed in a highly deterministic network environment

A standard for data flow information models that are:

o Aware of the time sensitivity and constraints of the target networking environment

o Aware of underlying deterministic networking services (e.g., on the Ethernet layer)

7. Industrial M2M

7.1. Use Case Description

Industrial Automation in general refers to automation of manufacturing, quality control, and material processing. In this "machine to machine" (M2M) use case we consider machine units on a plant floor which periodically exchange data with upstream or downstream machine modules and/or a supervisory controller within a local area network.

The actors of M2M communication are Programmable Logic Controllers (PLCs). Communication between PLCs, and between PLCs and the supervisory PLC (S-PLC), is achieved via critical control/data streams, as shown in Figure 11.

            S (Sensor)
             \                                    +-----+
       PLC__  \.--.                    .--.    ---| MES |
            \_(    `.              _(      `./    +-----+
     A------(  Local  )-----------(    L2    )
            (   Net   )           (    Net   )    +-------+
            /`--(___.-'            `--(___.-' ----| S-PLC |
          S_/  /       PLC   .--.           /     +-------+
         A_/    \_(    `.
     (Actuator) (  Local  )
                (   Net   )
               /`--(___.-'\
              /            \  A
             S              A

      Figure 11: Current Generic Industrial M2M Network Architecture

This use case focuses on PLC-related communications; communication to Manufacturing Execution Systems (MESs) is not addressed.

This use case covers only critical control/data streams; non-critical traffic between industrial automation applications (such as communication of state, configuration, set-up, and database communication) is adequately served by currently available prioritization techniques. Such traffic can use up to 80% of the total bandwidth required. There is also a subset of non-time-critical traffic that must be reliable even though it is not time sensitive.

In this use case the primary need for deterministic networking is to provide end-to-end delivery of M2M messages within specific timing constraints, for example in closed-loop automation control. Today this level of determinism is provided by proprietary networking technologies. In addition, standard networking technologies are used to connect the local network to remote industrial automation sites, e.g. over an enterprise or metro network which also carries other types of traffic. Therefore, flows that should be forwarded with deterministic guarantees need to be sustained regardless of the amount of other traffic in those networks.
7.2. Industrial M2M Communication Today

Today, proprietary networks fulfill the needed timing and availability for M2M networks.

The network topologies used today by industrial automation are similar to those used by telecom networks: Daisy Chain, Ring, Hub and Spoke, and Comb (a subset of Daisy Chain).

PLC-related control/data streams are transmitted periodically and carry either a pre-configured payload or a payload configured during runtime.

Some industrial applications require time synchronization at the end nodes. For such time-coordinated PLCs, an accuracy of 1 microsecond is required. Even in the case of "non-time-coordinated" PLCs, time synchronization may be needed, e.g. for timestamping of sensor data.

Industrial network scenarios require advanced security solutions. Many of the current industrial production networks are physically separated. Preventing critical flows from being leaked outside a domain is handled today by filtering policies that are typically enforced in firewalls.

7.2.1. Transport Parameters

The Cycle Time defines the frequency of messages between industrial actors. The Cycle Time is application dependent, in the range of 1 ms to 100 ms for critical control/data streams.

Because industrial applications assume deterministic transport for critical control/data streams, it is sufficient to fulfill an upper bound on latency (maximum latency) rather than defining separate latency and delay variation parameters. The underlying networking infrastructure must ensure a maximum end-to-end delivery time of messages in the range of 100 microseconds to 50 milliseconds, depending on the control loop application.

The bandwidth requirements of control/data streams are usually calculated directly from the bytes-per-cycle parameter of the control loop. For PLC-to-PLC communication one can expect 2 to 32 streams with packet sizes in the range of 100 to 700 bytes. For S-PLC to PLCs the number of streams is higher - up to 256 streams. Usually no more than 20% of available bandwidth is used for critical control/data streams. In today's networks 1 Gbps links are commonly used (a worked example of this arithmetic is sketched at the end of Section 7.2).

Most PLC control loops are rather tolerant of packet loss; however, critical control/data streams accept no more than one packet loss per consecutive communication cycle (i.e. if a packet gets lost in cycle "n", then the next cycle ("n+1") must be lossless). After two or more consecutive packet losses the network may be considered to be "down" by the application.

As network downtime may impact the whole production system, the required network availability is rather high (99.999%).

Based on the above parameters we expect that some form of redundancy will be required for M2M communications; however, any individual solution depends on several parameters, including cycle time, delivery time, etc.

7.2.2. Stream Creation and Destruction

In an industrial environment, critical control/data streams are created rather infrequently, on the order of ~10 times per day / week / month. Most of these critical control/data streams get created at machine startup; however, flexibility is also needed during runtime, for example when adding or removing a machine. Going forward, as production systems become more flexible, we expect a significant increase in the rate at which streams are created, changed, and destroyed.
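As the worked example promised in Section 7.2.1 (a sketch only; the particular stream mix below is invented for illustration, and framing overhead is ignored), the bandwidth of a periodic control/data stream follows directly from its bytes-per-cycle and Cycle Time, and the aggregate can be checked against the ~20% critical-traffic budget on a 1 Gbps link:

   def stream_bw_bps(bytes_per_cycle: int, cycle_time_s: float) -> float:
       # Payload rate of one periodic control/data stream.
       return bytes_per_cycle * 8 / cycle_time_s

   # Hypothetical worst case: 32 PLC-to-PLC streams, 700 bytes
   # per cycle, 1 ms Cycle Time, on a 1 Gbps link.
   total_bps = sum(stream_bw_bps(700, 0.001) for _ in range(32))
   share = total_bps / 1e9
   print(f"critical traffic share: {share:.1%}")   # -> 17.9%

The resulting 17.9% is within the 20% budget cited above; the framing overhead the sketch ignores would push the real figure somewhat higher.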
7.3. Industrial M2M Future

We would like to see a converged IP-standards-based network with deterministic properties that can satisfy the timing, security, and reliability constraints described above. Today's proprietary networks could then be interfaced to such a network via gateways or, in the case of new installations, devices could be connected directly to the converged network.

For this use case we expect time synchronization accuracy on the order of 1 us.

7.4. Industrial M2M Asks

o Converged IP-based network

o Deterministic behavior (bounded latency and jitter)

o High availability (presumably through redundancy) (99.999%)

o Low message delivery time (100 us - 50 ms)

o Low packet loss (burstless, 0.1-1%)

o Security (e.g. prevent critical flows from being leaked between physically separated networks)

8. Use Case Common Themes

This section summarizes the expected properties of a DetNet network, based on the use cases described in this draft.

8.1. Unified, Standards-Based Network

8.1.1. Extensions to Ethernet

A DetNet network is not "a new kind of network" - it is based on extensions to existing Ethernet standards, including elements of IEEE 802.1 AVB/TSN and related standards. Presumably it will be possible to run DetNet over other underlying transports besides Ethernet, but Ethernet is explicitly supported.

8.1.2. Centrally Administered

In general a DetNet network is not expected to be "plug and play" - it is expected that there is some centralized network configuration and control system. Such a system may be in a single central location, or it may be distributed across multiple control entities that function together as a unified control system for the network. However, the ability to "hot swap" components (e.g. due to malfunction) is similar enough to "plug and play" that this kind of behavior may be expected in DetNet networks, depending on the implementation.

8.1.3. Standardized Data Flow Information Models

Data Flow Information Models to be used with DetNet networks are to be specified by DetNet.

8.1.4. L2 and L3 Integration

A DetNet network is intended to integrate between Layer 2 (bridged) network(s) (e.g. AVB/TSN LAN) and Layer 3 (routed) network(s) (e.g. using IP-based protocols). One example of this is "making AVB/TSN-type deterministic performance available from Layer 3 applications, e.g. using RTP". Another example is "connecting two AVB/TSN LANs ("islands") together through a standard router".

8.1.5. Guaranteed End-to-End Delivery

Packets sent over DetNet are guaranteed not to be dropped by the network due to congestion. (Packets may, however, be dropped for intended reasons, e.g. per security measures.)

8.1.6. Replacement for Multiple Proprietary Deterministic Networks

There are many proprietary non-interoperable deterministic Ethernet-based networks currently available; DetNet is intended to provide an open-standards-based alternative to such networks.
8.1.7. Mix of Deterministic and Best-Effort Traffic

DetNet is intended to support coexistence of time-sensitive operational technology (OT) traffic and information technology (IT) traffic on the same ("unified") network.

8.1.8. Unused Reserved BW to be Available to Best-Effort Traffic

If bandwidth reservations are made for a stream but the associated bandwidth is not used at any point in time, that bandwidth is made available on the network for best-effort traffic. If the owner of the reserved stream then starts transmitting again, the bandwidth is no longer available for best-effort traffic, on a moment-to-moment basis. Note that such "temporarily available" bandwidth is not available for time-sensitive traffic, which must have its own reservation.

8.1.9. Lower Cost, Multi-Vendor Solutions

The DetNet network specifications are intended to enable an ecosystem in which multiple vendors can create interoperable products, thus promoting device diversity and potentially higher numbers of each device manufactured, thereby promoting cost reduction and cost competition among vendors. The intent is that DetNet networks should be able to be created at lower cost and with greater diversity of available devices than existing proprietary networks.

8.2. Scalable Size

DetNet networks range in size from very small, e.g. inside a single industrial machine, to very large, for example a Utility Grid network spanning a whole country, and involving many "hops" over various kinds of links, for example radio repeaters, microwave links, fiber optic links, etc. However, recall that the scope of DetNet is confined to networks that are centrally administered, and explicitly excludes unbounded decentralized networks such as the Internet.

8.3. Scalable Timing Parameters and Accuracy

8.3.1. Bounded Latency

The DetNet Data Flow Information Model is expected to provide means to configure the network that include parameters for querying network path latency, requesting bounded latency for a given stream, requesting worst-case maximum and/or minimum latency for a given path or stream, and so on. It is an expected case that the network may not be able to provide a given requested service level, and if so the network control system should reply that the requested service is not available (as opposed to accepting the parameter but then not delivering the desired behavior).

8.3.2. Low Latency

Applications may require "extremely low latency"; however, depending on the application this may mean very different latency values; for example, "low latency" across a Utility grid network is on a different time scale than "low latency" in a motor control loop in a small machine. The intent is that the mechanisms for specifying desired latency include wide ranges, and that architecturally there is nothing to prevent arbitrarily low latencies from being implemented in a given network.

8.3.3. Symmetrical Path Delays

Some applications would like to specify that the transit delay time values be equal for both the transmit and return paths.

8.4. High Reliability and Availability

Reliability is of critical importance to many DetNet applications, in which the consequences of failure can be extraordinarily high in terms of cost and even human life. DetNet-based systems are expected to be implemented with essentially arbitrarily high availability (for example 99.9999% uptime, or even 12 nines). The intent is that DetNet designs should not make any assumptions about the level of reliability and availability that may be required of a given system, and should define parameters for communicating these kinds of metrics within the network.

A strategy used by DetNet for providing such extraordinarily high levels of reliability is to provide redundant paths that can be seamlessly switched between, while maintaining the required performance of that system.
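As a back-of-the-envelope illustration of why seamless path redundancy can reach such figures (a sketch only, assuming path failures are statistically independent, which a real deployment would have to justify), the availability of n parallel paths, each individually available a fraction "a" of the time, is 1 - (1 - a)^n:

   def combined_availability(a: float, n: int) -> float:
       # n independent parallel paths, each with availability a;
       # the service is down only if all paths are down at once.
       return 1 - (1 - a) ** n

   print(combined_availability(0.999, 2))  # ~0.999999  ("six nines")
   print(combined_availability(0.999, 4))  # ~1 - 1e-12 (~"12 nines")

In other words, even paths that are individually "three nines" can, in combination, reach the 99.9999% and 12-nines figures mentioned above, provided their failures do not coincide.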
8.5. Security

Security is of critical importance to many DetNet applications. A DetNet network must be able to be made secure against device failures, attackers, misbehaving devices, and so on. In a DetNet network the data traffic is expected to be time-sensitive; thus, in addition to arriving with the data content as intended, the data must also arrive at the expected time. This may present "new" security challenges to implementers, and must be addressed accordingly. There are other security implications, including (but not limited to) the change in attack surface presented by packet replication and elimination.

8.6. Deterministic Flows

Reserved-bandwidth data flows must be isolated from each other and from best-effort traffic, so that even if the network is saturated with best-effort (and/or reserved-bandwidth) traffic, the configured flows are not adversely affected.

9. Use Cases Explicitly Out of Scope for DetNet

This section contains use case text that has been determined to be outside of the scope of the present DetNet work.

9.1. DetNet Scope Limitations

The scope of DetNet is deliberately limited to specific use cases that are consistent with the WG charter, subject to the interpretation of the WG. At the time the DetNet use cases were solicited and provided by the authors, the scope of DetNet was not clearly defined; as that clarity has emerged, certain of the use cases have been determined to be outside the scope of the present DetNet work. Such text has been moved into this section to clarify that these use cases will not be supported by the DetNet work.

The text in this section was moved here based on the following "exclusion" principles; alternatively, some draft text has been modified in situ to reflect these same principles.

The following principles have been established to clarify the scope of the present DetNet work:

o The scope of networks addressed by DetNet is limited to networks that can be centrally controlled, i.e. an "enterprise" or "corporate" network. This explicitly excludes "the open Internet".

o Maintaining synchronized time across a DetNet network is crucial to its operation; however, DetNet assumes that time is to be maintained using other means, for example (but not limited to) the Precision Time Protocol ([IEEE1588]). A use case may state the accuracy and reliability that it expects from the DetNet network as part of a whole system; however, it is understood that such timing properties are not guaranteed by DetNet itself. It is currently an open question as to whether DetNet protocols will include a way for an application to communicate such timing expectations to the network, and if so, whether they would be expected to materially affect the performance they would receive from the network as a result.
9.2. Internet-Based Applications

9.2.1. Use Case Description

There are many applications that communicate across the open Internet that could benefit from guaranteed delivery and bounded latency. The following are some representative examples.

9.2.1.1. Media Content Delivery

Media content delivery continues to be an important use of the Internet, yet users often experience poor-quality audio and video due to the delay and jitter inherent in today's Internet.

9.2.1.2. Online Gaming

Online gaming is a significant part of the gaming market; however, latency can degrade the end-user experience. For example, "First Person Shooter" (FPS) games are highly delay-sensitive.

9.2.1.3. Virtual Reality

Virtual reality (VR) has many commercial applications, including real estate presentations, remote medical procedures, and so on. Low latency is critical to interacting with the virtual world, because perceptual delays can cause motion sickness.

9.2.2. Internet-Based Applications Today

Internet service today is by definition "best effort", with no guarantees on delivery or bandwidth.

9.2.3. Internet-Based Applications Future

We imagine an Internet over which we will be able to play video without glitches and play games without lag.

For online gaming, the maximum round-trip delay can be 100 ms, and stricter for FPS gaming, for which it can be 10-50 ms. Transport delay is the dominant part, with a 5-20 ms budget.

For VR, a maximum delay of 1-10 ms is needed, and the total network budget is 1-5 ms for remote VR.

Flow identification can be used for gaming and VR, i.e. it can recognize a critical flow and provide appropriate latency bounds.

9.2.4. Internet-Based Applications Asks

o Unified control and management protocols to handle time-critical data flows

o Application-aware flow filtering mechanisms to recognize timing-critical flows without doing 5-tuple matching

o Unified control plane to provide low-latency service at Layer 3 without changing the data plane

o OAM systems and protocols which can help to provide end-to-end delay-sensitive service provisioning

9.3. Pro Audio and Video - Digital Rights Management (DRM)

This section was moved here because this is considered a Link-layer topic, not a direct responsibility of DetNet.

Digital Rights Management (DRM) is very important to the audio and video industries. Any time protected content is introduced into a network there are DRM concerns that must be maintained (see [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of network technology; however, there are cases when a secure link supporting authentication and encryption is required by content owners to carry their audio or video content when it is outside their own secure environment (for example, see [DCI]).

As an example, two such techniques are Digital Transmission Content Protection (DTCP) and High-Bandwidth Digital Content Protection (HDCP). HDCP content is not approved for retransmission within any other type of DRM, while DTCP may be retransmitted under HDCP. Therefore, if the source of a stream is outside of the network and it uses HDCP protection, it is only allowed to be placed on the network with that same HDCP protection.
9.4. Pro Audio and Video - Link Aggregation

Note: The term "Link Aggregation" is used here as defined by the text in the following paragraph, i.e. not following the more common networking industry definition. Current WG consensus is that this item won't be directly supported by the DetNet architecture, for example because it implies a guarantee of in-order delivery of packets, which conflicts with the core goal of achieving the lowest possible latency.

For transmitting streams that require more bandwidth than a single link in the target network can support, link aggregation is a technique for combining (aggregating) the bandwidth available on multiple physical links to create a single logical link of the required bandwidth. However, if aggregation is to be used, the network controller (or equivalent) must be able to determine the maximum latency of any path through the aggregate link.

10. Acknowledgments

10.1. Pro Audio

This section was derived from draft-gunther-detnet-proaudio-req-01.

The editors would like to acknowledge the help of the following individuals and the companies they represent:

Jeff Koftinoff, Meyer Sound

Jouni Korhonen, Associate Technical Director, Broadcom

Pascal Thubert, CTAO, Cisco

Kieran Tyrrell, Sienda New Media Technologies GmbH

10.2. Utility Telecom

This section was derived from draft-wetterwald-detnet-utilities-reqs-02.

Faramarz Maghsoodlou, Ph.D., IoT Connected Industries and Energy Practice, Cisco

Pascal Thubert, CTAO, Cisco

10.3. Building Automation Systems

This section was derived from draft-bas-usecase-detnet-00.

10.4. Wireless for Industrial

This section was derived from draft-thubert-6tisch-4detnet-01.

This specification derives from the 6TiSCH architecture, which is the result of multiple interactions, in particular during the 6TiSCH (bi)weekly interim call, relayed through the 6TiSCH mailing list at the IETF.

The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon, Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey, Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation and various contributions.

10.5. Cellular Radio

This section was derived from draft-korhonen-detnet-telreq-00.

10.6. Industrial M2M

The authors would like to thank Feng Chen and Marcel Kiessling for their comments and suggestions.

10.7. Internet Applications and CoMP

This section was derived from draft-zha-detnet-use-case-00.

This document has benefited from reviews, suggestions, comments, and proposed text provided by the following members, listed in alphabetical order: Jing Huang, Junru Lin, Lehong Niu, and Oliver Huang.

10.8. Electrical Utilities

The wind power generation use case has been extracted from the study of Wind Farms conducted within the 5GPPP VirtuWind Project. The project is funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 671648 (VirtuWind).
11. Informative References

[ACE] IETF, "Authentication and Authorization for Constrained Environments".

[Ahm14] Ahmed, M. and R. Kim, "Communication network architectures for smart-wind power farms", Energies, pp. 3900-3921, June 2014.

[bacnetip] ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP", January 1999.

[CCAMP] IETF, "Common Control and Measurement Plane".

[CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_and_Enhancement_v2.0, March 2015.

[CONTENT_PROTECTION] Olsen, D., "1722a Content Protection", 2012.

[CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI); Interface Specification", CPRI Specification V6.1, July 2014.

[CPRI-transp] CPRI TWG, "CPRI requirements for Ethernet Fronthaul", November 2015.

[DCI] Digital Cinema Initiatives, LLC, "DCI Specification, Version 1.2", 2012.

[DICE] IETF, "DTLS In Constrained Environments".

[EA12] Evans, P. and M. Annunziata, "Industrial Internet: Pushing the Boundaries of Minds and Machines", November 2012.

[ESPN_DC2] Daley, D., "ESPN's DC2 Scales AVB Large", 2014.

[flnet] Japan Electrical Manufacturers Association, "JEMA 1479 - English Edition", September 2012.

[Fronthaul] Chen, D. and T. Mustala, "Ethernet Fronthaul Considerations", IEEE 1904.3, February 2015.

[HART] www.hartcomm.org, "Highway Addressable Remote Transducer, a group of specifications for industrial process and control devices administered by the HART Foundation".

[I-D.finn-detnet-architecture] Finn, N. and P. Thubert, "Deterministic Networking Architecture", draft-finn-detnet-architecture-08 (work in progress), August 2016.

[I-D.finn-detnet-problem-statement] Finn, N. and P. Thubert, "Deterministic Networking Problem Statement", draft-finn-detnet-problem-statement-05 (work in progress), March 2016.

[I-D.ietf-6tisch-6top-interface] Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer (6top) Interface", draft-ietf-6tisch-6top-interface-04 (work in progress), July 2015.

[I-D.ietf-6tisch-architecture] Thubert, P., "An Architecture for IPv6 over the TSCH mode of IEEE 802.15.4", draft-ietf-6tisch-architecture-11 (work in progress), January 2017.

[I-D.ietf-6tisch-coap] Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and Interaction using CoAP", draft-ietf-6tisch-coap-03 (work in progress), March 2015.

[I-D.ietf-6tisch-terminology] Palattella, M., Thubert, P., Watteyne, T., and Q. Wang, "Terminology in IPv6 over the TSCH mode of IEEE 802.15.4e", draft-ietf-6tisch-terminology-08 (work in progress), December 2016.

[I-D.ietf-ipv6-multilink-subnets] Thaler, D. and C. Huitema, "Multi-link Subnet Support in IPv6", draft-ietf-ipv6-multilink-subnets-00 (work in progress), July 2002.

[I-D.ietf-mpls-residence-time] Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S., and S. Vainshtein, "Residence Time Measurement in MPLS network", draft-ietf-mpls-residence-time-15 (work in progress), March 2017.
[I-D.ietf-roll-rpl-industrial-applicability] Phinney, T., Thubert, P., and R. Assimiti, "RPL applicability in industrial networks", draft-ietf-roll-rpl-industrial-applicability-02 (work in progress), October 2013.

[I-D.ietf-tictoc-1588overmpls] Davari, S., Oren, A., Bhatia, M., Roberts, P., and L. Montini, "Transporting Timing messages over MPLS Networks", draft-ietf-tictoc-1588overmpls-07 (work in progress), October 2015.

[I-D.kh-spring-ip-ran-use-case] Khasnabish, B., hu, f., and L. Contreras, "Segment Routing in IP RAN use case", draft-kh-spring-ip-ran-use-case-02 (work in progress), November 2014.

[I-D.svshah-tsvwg-deterministic-forwarding] Shah, S. and P. Thubert, "Deterministic Forwarding PHB", draft-svshah-tsvwg-deterministic-forwarding-04 (work in progress), August 2015.

[I-D.thubert-6lowpan-backbone-router] Thubert, P., "6LoWPAN Backbone Router", draft-thubert-6lowpan-backbone-router-03 (work in progress), February 2013.

[I-D.wang-6tisch-6top-sublayer] Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer (6top)", draft-wang-6tisch-6top-sublayer-04 (work in progress), November 2015.

[IEC-60870-5-104] International Electrotechnical Commission, "International Standard IEC 60870-5-104: Network access for IEC 60870-5-101 using standard transport profiles", June 2006.

[IEC61400] "International standard 61400-25: Communications for monitoring and control of wind power plants", June 2013.

[IEC61850-90-12] TC57 WG10, IEC, "IEC 61850-90-12 TR: Communication networks and systems for power utility automation - Part 90-12: Wide area network engineering guidelines", 2015.

[IEC62439-3:2012] TC65, IEC, "IEC 62439-3: Industrial communication networks - High availability automation networks - Part 3: Parallel Redundancy Protocol (PRP) and High-availability Seamless Redundancy (HSR)", 2012.

[IEEE1588] IEEE, "IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems", IEEE Std 1588-2008, 2008.

[IEEE1646] "Communication Delivery Time Performance Requirements for Electric Power Substation Automation", IEEE Standard 1646-2004, April 2004.

[IEEE1722] IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport Protocol for Time Sensitive Applications in a Bridged Local Area Network", IEEE Std 1722-2011, 2011.

[IEEE19043] IEEE Standards Association, "IEEE 1904.3 TF", IEEE 1904.3, 2015.

[IEEE802.1TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive Networks Task Group", March 2013.

[IEEE802154] IEEE standard for Information Technology, "IEEE Std 802.15.4, Part 15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks".

[IEEE802154e] IEEE standard for Information Technology, "IEEE Std 802.15.4, Part 15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks, June 2011, as amended by IEEE Std 802.15.4e, Part 15.4: Low-Rate Wireless Personal Area Networks (LR-WPANs) Amendment 1: MAC sublayer", April 2012.
[IEEE8021AS] IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)", IEEE 802.1AS-2011, 2011.

[IEEE8021CM] Farkas, J., "Time-Sensitive Networking for Fronthaul", Unapproved PAR, PAR for a New IEEE Standard, IEEE P802.1CM, April 2015.

[IEEE8021TSN] IEEE 802.1, "The charter of the TG is to provide the specifications that will allow time-synchronized low latency streaming services through 802 networks.", 2016.

[IETFDetNet] IETF, "Charter for IETF DetNet Working Group", 2015.

[ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation".

[ISA100.11a] ISA/ANSI, "Wireless Systems for Industrial Automation: Process Control and Related Applications - ISA100.11a-2011 - IEC 62734", 2011.

[ISO7240-16] ISO, "ISO 7240-16:2007 Fire detection and alarm systems -- Part 16: Sound system control and indicating equipment", 2007.

[knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006.

[lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0", 1994.

[LTE-Latency] Johnston, S., "LTE Latency: How does it compare to other technologies", March 2014.

[MEF] MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells", MEF 22.1.1, July 2014.

[METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and wireless system", ICT-317669-METIS/D1.1, April 2013.

[modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL SPECIFICATION V1.1b", December 2006.

[MODBUS] Modbus Organization, Inc., "MODBUS Application Protocol Specification", April 2012.

[net5G] Ericsson, "5G Radio Access, Challenges for 2020 and Beyond", Ericsson white paper wp-5g, June 2013.

[NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0, February 2015.

[NGMN-fronth] NGMN Alliance, "Fronthaul Requirements for C-RAN", March 2015.

[OPCXML] OPC Foundation, "OPC XML-Data Access Specification", December 2004.

[PCE] IETF, "Path Computation Element".

[profibus] IEC, "IEC 61158 Type 3 - Profibus DP", January 2001.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

[RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460, December 1998.

[RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers", RFC 2474, DOI 10.17487/RFC2474, December 1998.

[RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol Label Switching Architecture", RFC 3031, DOI 10.17487/RFC3031, January 2001.

[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001.

[RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation Metric for IP Performance Metrics (IPPM)", RFC 3393, DOI 10.17487/RFC3393, November 2002.

[RFC3411] Harrington, D., Presuhn, R., and B. Wijnen, "An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks", STD 62, RFC 3411, DOI 10.17487/RFC3411, December 2002.
[RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between Information Models and Data Models", RFC 3444, DOI 10.17487/RFC3444, January 2003.

[RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)", RFC 3972, DOI 10.17487/RFC3972, March 2005.

[RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation Edge-to-Edge (PWE3) Architecture", RFC 3985, DOI 10.17487/RFC3985, March 2005.

[RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing Architecture", RFC 4291, DOI 10.17487/RFC4291, February 2006.

[RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure-Agnostic Time Division Multiplexing (TDM) over Packet (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006.

[RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903, DOI 10.17487/RFC4903, June 2007.

[RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6 over Low-Power Wireless Personal Area Networks (6LoWPANs): Overview, Assumptions, Problem Statement, and Goals", RFC 4919, DOI 10.17487/RFC4919, August 2007.

[RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and P. Pate, "Structure-Aware Time Division Multiplexed (TDM) Circuit Emulation Service over Packet Switched Network (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007.

[RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi, "Time Division Multiplexing over IP (TDMoIP)", RFC 5087, DOI 10.17487/RFC5087, December 2007.

[RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282, DOI 10.17487/RFC6282, September 2011.

[RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J., Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur, JP., and R. Alexander, "RPL: IPv6 Routing Protocol for Low-Power and Lossy Networks", RFC 6550, DOI 10.17487/RFC6550, March 2012.

[RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N., and D. Barthel, "Routing Metrics Used for Path Calculation in Low-Power and Lossy Networks", RFC 6551, DOI 10.17487/RFC6551, March 2012.

[RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C. Bormann, "Neighbor Discovery Optimization for IPv6 over Low-Power Wireless Personal Area Networks (6LoWPANs)", RFC 6775, DOI 10.17487/RFC6775, November 2012.

[RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the Internet of Things (IoT): Problem Statement", RFC 7554, DOI 10.17487/RFC7554, May 2015.

[Spe09] Sperotto, A., Sadre, R., Vliet, F., and A. Pras, "A First Look into SCADA Network Traffic", IP Operations and Management, pp. 518-521, June 2009.

[SRP_LATENCY] Gunther, C., "Specifying SRP Latency", 2014.

[STUDIO_IP] Mace, G., "IP Networked Studio Infrastructure for Synchronized & Real-Time Multimedia Transmissions", 2007.

[SyncE] ITU-T, "G.8261: Timing and synchronization aspects in packet networks", Recommendation G.8261, August 2013.

[TEAS] IETF, "Traffic Engineering Architecture and Signaling".
[TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013.

[TS25104] 3GPP, "Base Station (BS) radio transmission and reception (FDD)", 3GPP TS 25.104 3.14.0, March 2007.

[TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA); Base Station (BS) radio transmission and reception", 3GPP TS 36.104 10.11.0, July 2013.

[TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA); Requirements for support of radio resource management", 3GPP TS 36.133 12.7.0, April 2015.

[TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA); Physical channels and modulation", 3GPP TS 36.211 10.7.0, March 2013.

[TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300 10.11.0, September 2013.

[TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive Networks Task Group", 2013.

[UHD-video] Holub, P., "Ultra-High Definition Videos and Their Applications over the Network", The 7th International Symposium on VICTORIES Project, PetrHolub_presentation, October 2014.

[WirelessHART] www.hartcomm.org, "Industrial Communication Networks - Wireless Communication Network and Communication Profiles - WirelessHART - IEC 62591", 2010.

Authors' Addresses

Ethan Grossman (editor)
Dolby Laboratories, Inc.
1275 Market Street
San Francisco, CA 94103
USA

Phone: +1 415 645 4726
Email: ethan.grossman@dolby.com
URI: http://www.dolby.com

Craig Gunther
Harman International
10653 South River Front Parkway
South Jordan, UT 84095
USA

Phone: +1 801 568-7675
Email: craig.gunther@harman.com
URI: http://www.harman.com

Pascal Thubert
Cisco Systems, Inc
Building D
45 Allee des Ormes - BP1200
MOUGINS - Sophia Antipolis 06254
FRANCE

Phone: +33 497 23 26 34
Email: pthubert@cisco.com

Patrick Wetterwald
Cisco Systems
45 Allees des Ormes
Mougins 06250
FRANCE

Phone: +33 4 97 23 26 36
Email: pwetterw@cisco.com

Jean Raymond
Hydro-Quebec
1500 University
Montreal H3A3S7
Canada

Phone: +1 514 840 3000
Email: raymond.jean@hydro.qc.ca

Jouni Korhonen
Broadcom Corporation
3151 Zanker Road
San Jose, CA 95134
USA

Email: jouni.nospam@gmail.com

Yu Kaneko
Toshiba
1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi
Kanagawa, Japan

Email: yu1.kaneko@toshiba.co.jp

Subir Das
Applied Communication Sciences
150 Mount Airy Road, Basking Ridge
New Jersey, 07920, USA

Email: sdas@appcomsci.com

Yiyong Zha
Huawei Technologies

Email: zhayiyong@huawei.com

Balazs Varga
Ericsson
Konyves Kalman krt. 11/B
Budapest 1097
Hungary

Email: balazs.a.varga@ericsson.com

Janos Farkas
Ericsson
Konyves Kalman krt. 11/B
Budapest 1097
Hungary

Email: janos.farkas@ericsson.com

Franz-Josef Goetz
Siemens
Gleiwitzerstr. 555
Nurnberg 90475
Germany

Email: franz-josef.goetz@siemens.com
Juergen Schmitt
Siemens
Gleiwitzerstr. 555
Nurnberg 90475
Germany

Email: juergen.jues.schmitt@siemens.com

Xavier Vilajosana
Worldsensing
483 Arago
Barcelona, Catalonia 08013
Spain

Email: xvilajosana@worldsensing.com

Toktam Mahmoodi
King's College London
Strand, London WC2R 2LS
United Kingdom

Email: toktam.mahmoodi@kcl.ac.uk

Spiros Spirou
Intracom Telecom
19.7 km Markopoulou Ave.
Peania, Attiki 19002
Greece

Email: spis@intracom-telecom.com

Petra Vizarreta
Technical University of Munich, TUM
Maxvorstadt, Arcisstraße 21
Munich 80333
Germany

Email: petra.vizarreta@lkn.ei.tum.de