Internet Engineering Task Force                         E. Grossman, Ed.
Internet-Draft                                                     DOLBY
Intended status: Informational                           October 8, 2018
Expires: April 11, 2019

                   Deterministic Networking Use Cases
                     draft-ietf-detnet-use-cases-19

Abstract

This draft presents use cases from diverse industries which have in
common a need for "deterministic flows".
"Deterministic" in this 14 context means that such flows provide guaranteed bandwidth, bounded 15 latency, and other properties germane to the transport of time- 16 sensitive data. These use cases differ notably in their network 17 topologies and specific desired behavior, providing as a group broad 18 industry context for DetNet. For each use case, this document will 19 identify the use case, identify representative solutions used today, 20 and describe potential improvements that DetNet can enable. The Use 21 Case Common Themes section then extracts and enumerates the set of 22 common properties implied by these use cases. 24 Status of This Memo 26 This Internet-Draft is submitted in full conformance with the 27 provisions of BCP 78 and BCP 79. 29 Internet-Drafts are working documents of the Internet Engineering 30 Task Force (IETF). Note that other groups may also distribute 31 working documents as Internet-Drafts. The list of current Internet- 32 Drafts is at https://datatracker.ietf.org/drafts/current/. 34 Internet-Drafts are draft documents valid for a maximum of six months 35 and may be updated, replaced, or obsoleted by other documents at any 36 time. It is inappropriate to use Internet-Drafts as reference 37 material or to cite them other than as "work in progress." 39 This Internet-Draft will expire on April 11, 2019. 41 Copyright Notice 43 Copyright (c) 2018 IETF Trust and the persons identified as the 44 document authors. All rights reserved. 46 This document is subject to BCP 78 and the IETF Trust's Legal 47 Provisions Relating to IETF Documents 48 (https://trustee.ietf.org/license-info) in effect on the date of 49 publication of this document. Please review these documents 50 carefully, as they describe your rights and restrictions with respect 51 to this document. 
Code Components extracted from this document must include Simplified
BSD License text as described in Section 4.e of the Trust Legal
Provisions and are provided without warranty as described in the
Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Pro Audio and Video
     2.1.  Use Case Description
       2.1.1.  Uninterrupted Stream Playback
       2.1.2.  Synchronized Stream Playback
       2.1.3.  Sound Reinforcement
       2.1.4.  Secure Transmission
         2.1.4.1.  Safety
     2.2.  Pro Audio Today
     2.3.  Pro Audio Future
       2.3.1.  Layer 3 Interconnecting Layer 2 Islands
       2.3.2.  High Reliability Stream Paths
       2.3.3.  Integration of Reserved Streams into IT Networks
       2.3.4.  Use of Unused Reservations by Best-Effort Traffic
       2.3.5.  Traffic Segregation
         2.3.5.1.  Packet Forwarding Rules, VLANs and Subnets
         2.3.5.2.  Multicast Addressing (IPv4 and IPv6)
       2.3.6.  Latency Optimization by a Central Controller
       2.3.7.  Reduced Device Cost Due To Reduced Buffer Memory
     2.4.  Pro Audio Asks
   3.  Electrical Utilities
     3.1.  Use Case Description
       3.1.1.  Transmission Use Cases
         3.1.1.1.  Protection
         3.1.1.2.  Intra-Substation Process Bus Communications
         3.1.1.3.  Wide Area Monitoring and Control Systems
         3.1.1.4.  IEC 61850 WAN engineering guidelines requirement
                   classification
       3.1.2.  Generation Use Case
         3.1.2.1.  Control of the Generated Power
         3.1.2.2.  Control of the Generation Infrastructure
       3.1.3.  Distribution use case
         3.1.3.1.  Fault Location Isolation and Service Restoration
                   (FLISR)
     3.2.  Electrical Utilities Today
       3.2.1.  Security Current Practices and Limitations
     3.3.  Electrical Utilities Future
       3.3.1.  Migration to Packet-Switched Network
       3.3.2.  Telecommunications Trends
         3.3.2.1.  General Telecommunications Requirements
         3.3.2.2.  Specific Network topologies of Smart Grid
                   Applications
         3.3.2.3.  Precision Time Protocol
       3.3.3.  Security Trends in Utility Networks
     3.4.  Electrical Utilities Asks
   4.  Building Automation Systems
     4.1.  Use Case Description
     4.2.  Building Automation Systems Today
       4.2.1.  BAS Architecture
       4.2.2.  BAS Deployment Model
       4.2.3.  Use Cases for Field Networks
         4.2.3.1.  Environmental Monitoring
         4.2.3.2.  Fire Detection
         4.2.3.3.  Feedback Control
       4.2.4.  Security Considerations
     4.3.  BAS Future
     4.4.  BAS Asks
   5.  Wireless for Industrial
     5.1.  Use Case Description
       5.1.1.  Network Convergence using 6TiSCH
       5.1.2.  Common Protocol Development for 6TiSCH
     5.2.  Wireless Industrial Today
     5.3.  Wireless Industrial Future
       5.3.1.  Unified Wireless Network and Management
         5.3.1.1.  PCE and 6TiSCH ARQ Retries
       5.3.2.  Schedule Management by a PCE
         5.3.2.1.  PCE Commands and 6TiSCH CoAP Requests
         5.3.2.2.  6TiSCH IP Interface
       5.3.3.  6TiSCH Security Considerations
     5.4.  Wireless Industrial Asks
   6.  Cellular Radio
     6.1.  Use Case Description
       6.1.1.  Network Architecture
       6.1.2.  Delay Constraints
       6.1.3.  Time Synchronization Constraints
       6.1.4.  Transport Loss Constraints
       6.1.5.  Security Considerations
     6.2.  Cellular Radio Networks Today
       6.2.1.  Fronthaul
       6.2.2.  Midhaul and Backhaul
     6.3.  Cellular Radio Networks Future
     6.4.  Cellular Radio Networks Asks
   7.  Industrial M2M
     7.1.  Use Case Description
     7.2.  Industrial M2M Communication Today
       7.2.1.  Transport Parameters
       7.2.2.  Stream Creation and Destruction
     7.3.  Industrial M2M Future
     7.4.  Industrial M2M Asks
   8.  Mining Industry
     8.1.  Use Case Description
     8.2.  Mining Industry Today
     8.3.  Mining Industry Future
     8.4.  Mining Industry Asks
   9.  Private Blockchain
     9.1.  Use Case Description
       9.1.1.  Blockchain Operation
       9.1.2.  Blockchain Network Architecture
       9.1.3.  Security Considerations
     9.2.  Private Blockchain Today
     9.3.  Private Blockchain Future
     9.4.  Private Blockchain Asks
   10. Network Slicing
     10.1.  Use Case Description
     10.2.  DetNet Applied to Network Slicing
       10.2.1.  Resource Isolation Across Slices
       10.2.2.  Deterministic Services Within Slices
     10.3.  A Network Slicing Use Case Example - 5G Bearer Network
     10.4.  Non-5G Applications of Network Slicing
     10.5.  Limitations of DetNet in Network Slicing
     10.6.  Network Slicing Today and Future
     10.7.  Network Slicing Asks
   11. Use Case Common Themes
     11.1.  Unified, standards-based network
       11.1.1.  Extensions to Ethernet
       11.1.2.  Centrally Administered
       11.1.3.  Standardized Data Flow Information Models
       11.1.4.  L2 and L3 Integration
       11.1.5.  Consideration for IPv4
       11.1.6.  Guaranteed End-to-End Delivery
       11.1.7.  Replacement for Multiple Proprietary Deterministic
                Networks
       11.1.8.  Mix of Deterministic and Best-Effort Traffic
       11.1.9.  Unused Reserved BW to be Available to Best Effort
                Traffic
       11.1.10. Lower Cost, Multi-Vendor Solutions
     11.2.  Scalable Size
       11.2.1.  Scalable Number of Flows
     11.3.  Scalable Timing Parameters and Accuracy
       11.3.1.  Bounded Latency
       11.3.2.  Low Latency
       11.3.3.  Bounded Jitter (Latency Variation)
       11.3.4.  Symmetrical Path Delays
     11.4.  High Reliability and Availability
     11.5.  Security
     11.6.  Deterministic Flows
   12. Use Cases Explicitly Out of Scope for DetNet
     12.1.  DetNet Scope Limitations
     12.2.  Internet-based Applications
       12.2.1.  Use Case Description
         12.2.1.1.  Media Content Delivery
         12.2.1.2.  Online Gaming
         12.2.1.3.  Virtual Reality
       12.2.2.  Internet-Based Applications Today
       12.2.3.  Internet-Based Applications Future
       12.2.4.  Internet-Based Applications Asks
     12.3.  Pro Audio and Video - Digital Rights Management (DRM)
     12.4.  Pro Audio and Video - Link Aggregation
     12.5.  Pro Audio and Video - Deterministic Time to Establish
            Streaming
   13. Security Considerations
   14. Contributors
   15. Acknowledgments
     15.1.  Pro Audio
     15.2.  Utility Telecom
     15.3.  Building Automation Systems
     15.4.  Wireless for Industrial
     15.5.  Cellular Radio
     15.6.  Industrial M2M
     15.7.  Internet Applications and CoMP
     15.8.  Network Slicing
     15.9.  Mining
     15.10. Private Blockchain
   16. IANA Considerations
   17. Informative References
   Author's Address

1.  Introduction

This draft documents use cases in diverse industries which require
deterministic flows over multi-hop paths.  DetNet flows can be
established from either a Layer 2 or Layer 3 (IP) interface, and such
flows can co-exist on an IP network with best-effort traffic.  DetNet
also provides for highly reliable flows through provision for
redundant paths.
The DetNet use cases explicitly do not suggest any specific design for
the DetNet architecture or protocols; those are topics of other DetNet
drafts.

The use cases as originally submitted were not considered by the
DetNet Working Group to be concrete requirements.  The Working Group
and Design Team reviewed them, identifying which of their elements
could feasibly be implemented within the DetNet charter; as a result,
certain of the originally submitted use cases (or elements of them)
have been moved to the Use Cases Explicitly Out of Scope for DetNet
section.

The DetNet Use Cases document provides context for DetNet design
decisions.  It also serves the long-lived purpose of helping those new
to DetNet understand the types of applications DetNet can support, and
it allows those WG contributors who are users to ensure that their
concerns are addressed by the WG; for them this document both records
their contribution and provides a long-term reference to the problems
they expect the technology to solve, both in the short-term
deliverables and as the technology evolves.

The DetNet Use Cases document has also served as a "yardstick" against
which proposed DetNet designs can be measured, answering the question
"to what extent does a proposed design satisfy these various use
cases?"

The use case industries covered are professional audio, electrical
utilities, building automation systems, wireless for industrial,
cellular radio, industrial machine-to-machine, mining, private
blockchain, and network slicing.  For each use case the following
questions are answered:

o  What is the use case?

o  How is it addressed today?

o  How should it be addressed in the future?

o  What should the IETF deliver to enable this use case?
The level of detail in each use case is intended to be sufficient to
express the relevant elements of the use case, but no greater.

DetNet does not directly address clock distribution or time
synchronization; these are considered part of the overall design and
implementation of a time-sensitive network, using existing (or future)
time-specific protocols (such as [IEEE8021AS] and/or [RFC5905]).

2.  Pro Audio and Video

2.1.  Use Case Description

The professional audio and video industry ("ProAV") includes:

o  Music and film content creation

o  Broadcast

o  Cinema

o  Live sound

o  Public address, media and emergency systems at large venues
   (airports, stadiums, churches, theme parks).

These industries have already transitioned audio and video signals
from analog to digital.  However, the digital interconnect systems
remain primarily point-to-point, with a single signal (or a small
number of signals) per link, interconnected with purpose-built
hardware.

These industries are now transitioning to packet-based infrastructure
to reduce cost, increase routing flexibility, and integrate with
existing IT infrastructure.

Today ProAV applications have no way to establish deterministic flows
from a standards-based Layer 3 (IP) interface, which is a fundamental
limitation to the use cases described here.  Deterministic flows can
be created within standards-based Layer 2 LANs (e.g. using IEEE 802.1
AVB); however, these are not routable via IP and thus are not
effective for distribution over wider areas (for example broadcast
events that span wide geographical areas).

It would be highly desirable if such flows could be routed over the
open Internet; however, solutions with more limited scope (e.g.
enterprise networks) would still provide a substantial improvement.

The following sections describe specific ProAV use cases.

2.1.1.  Uninterrupted Stream Playback

Transmitting audio and video streams for live playback is unlike
common file transfer because uninterrupted stream playback in the
presence of network errors cannot be achieved by re-trying the
transmission; by the time the missing or corrupt packet has been
identified it is too late to execute a re-try operation.  Buffering
can be used to provide enough delay to allow time for one or more
retries; however, this is not an effective solution in applications
where large delays (latencies) are not acceptable (as discussed
below).

Streams with guaranteed bandwidth can eliminate congestion on the
network as a cause of transmission errors that would lead to playback
interruption.  Use of redundant paths can further mitigate
transmission errors to provide greater stream reliability.

2.1.2.  Synchronized Stream Playback

Latency in this context is the time between when a signal is initially
sent over a stream and when it is received.  A common example in ProAV
is time-synchronizing audio and video when they take separate paths
through the playback system.  In this case the latency of both the
audio and video streams must be bounded and consistent if the sound is
to remain matched to the movement in the video.  A common tolerance
for audio/video sync is one NTSC video frame (about 33ms), and to
maintain the audience's perception of correct lip sync the latency
needs to be consistent within some reasonable tolerance, for example
10%.

A common architecture for synchronizing multiple streams that have
different paths through the network (and thus potentially different
latencies) is to enable measurement of the latency of each path and
have the data sinks (for example speakers) delay (buffer) all packets
on all but the slowest path.  Each packet of each stream is assigned a
presentation time based on the longest required delay.
This implies that all sinks must maintain a common time reference of
sufficient accuracy, which can be achieved by any of various
techniques.

This type of architecture is commonly implemented using a central
controller that determines path delays and arbitrates buffering
delays.

2.1.3.  Sound Reinforcement

Consider the latency (delay) from when a person speaks into a
microphone to when their voice emerges from the speaker.  If this
delay is longer than about 10-15 milliseconds it is noticeable and can
make a sound reinforcement system unusable (see slide 6 of
[SRP_LATENCY]).  (If you have ever tried to speak in the presence of a
delayed echo of your own voice, you will recognize this experience.)

Note that the 15ms latency bound includes all parts of the signal
path, not just the network, so the network latency must be
significantly less than 15ms.

In some cases local performers must perform in synchrony with a remote
broadcast.  In such cases the latencies of the broadcast stream and
the local performer must be adjusted to match each other, with a worst
case of one video frame (33ms for NTSC video).

In cases where audio phase is a consideration, for example beam-
forming using multiple speakers, latency requirements can be in the 10
microsecond range (1 audio sample at 96kHz).

2.1.4.  Secure Transmission

2.1.4.1.  Safety

Professional audio systems can include amplifiers that are capable of
generating hundreds or thousands of watts of audio power which, if
used incorrectly, can cause hearing damage to those in the vicinity.
Apart from the usual care required of system operators to prevent such
incidents, the network traffic that controls these devices must be
secured (as with any sensitive application traffic).

2.2.  Pro Audio Today

Some proprietary systems have been created which enable deterministic
streams at Layer 3; however, they are "engineered networks" which
require careful configuration to operate, often require that the
system be over-provisioned, and implicitly assume that all devices on
the network voluntarily play by the rules of that network.  To enable
these industries to successfully transition to an interoperable multi-
vendor packet-based infrastructure requires effective open standards,
and establishing relevant IETF standards is a crucial factor.

2.3.  Pro Audio Future

2.3.1.  Layer 3 Interconnecting Layer 2 Islands

It would be valuable to enable IP to connect multiple Layer 2 LANs.

As an example, ESPN recently constructed a state-of-the-art 194,000 sq
ft, $125 million broadcast studio called DC2.  The DC2 network is
capable of handling 46 Tbps of throughput with 60,000 simultaneous
signals.  Inside the facility are 1,100 miles of fiber feeding four
audio control rooms (see [ESPN_DC2]).

In designing DC2 they replaced as much point-to-point technology as
they could with packet-based technology.  They constructed seven
individual studios using Layer 2 LANs (using IEEE 802.1 AVB) that were
entirely effective at routing audio within the LANs.  However, to
interconnect these Layer 2 LAN islands they ended up using dedicated
paths in a custom SDN (Software Defined Networking) router because
there was no standards-based routing solution available.

2.3.2.  High Reliability Stream Paths

On-air and other live media streams are often backed up with redundant
links that seamlessly act to deliver the content when the primary link
fails for any reason.  In point-to-point systems this is provided by
an additional point-to-point link; the analogous requirement in a
packet-based system is to provide an alternate path through the
network such that no individual link can bring down the system.

2.3.3.  Integration of Reserved Streams into IT Networks

A commonly cited goal of moving to a packet-based media infrastructure
is that costs can be reduced by using off-the-shelf, commodity network
hardware.  In addition, economies of scale can be realized by
combining media infrastructure with IT infrastructure.  In keeping
with these goals, stream reservation technology should be compatible
with existing protocols and not compromise use of the network for
best-effort (non-time-sensitive) traffic.

2.3.4.  Use of Unused Reservations by Best-Effort Traffic

In cases where stream bandwidth is reserved but not currently used (or
is under-utilized), that bandwidth must be available to best-effort
(i.e. non-time-sensitive) traffic.  For example, a single stream may
be nailed up (reserved) for specific media content that needs to be
presented at different times of the day, ensuring timely delivery of
that content, yet in between those times the full bandwidth of the
network can be utilized for best-effort tasks such as file transfers.

This also addresses a concern of IT network administrators who are
considering adding reserved-bandwidth traffic to their networks,
namely that "users will reserve large quantities of bandwidth and then
never un-reserve it even though they are not using it, and soon the
network will have no bandwidth left".

2.3.5.  Traffic Segregation

Sink devices may be low-cost devices with limited processing power.
To avoid overwhelming the CPUs in these devices it is important to
limit the amount of traffic that these devices must process.
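One concrete source of unwanted sink traffic, noted in Section 2.3.5.2
below, is the many-to-one mapping of IPv4 multicast addresses onto
Ethernet multicast MAC addresses.  The following sketch (ours, not
part of any cited standard's text) computes that mapping and
demonstrates the 32-to-1 overlap:

```python
def multicast_mac(ipv4: str) -> str:
    """Map an IPv4 multicast address (224.0.0.0/4) to its Ethernet
    multicast MAC address: the fixed prefix 01:00:5e followed by the
    low 23 bits of the IP address (per RFC 1112)."""
    o = [int(x) for x in ipv4.split(".")]
    assert 224 <= o[0] <= 239, "not an IPv4 multicast address"
    # The top bit of the second octet and the low 4 bits of the first
    # octet (5 bits in all) are discarded, so 2^5 = 32 IPv4 groups
    # share each MAC address.
    return "01:00:5e:%02x:%02x:%02x" % (o[1] & 0x7F, o[2], o[3])

# Two distinct IPv4 groups collide onto the same multicast MAC:
assert multicast_mac("224.65.10.1") == multicast_mac("239.193.10.1")
```

A Layer 2 bridge forwarding on MAC addresses alone cannot distinguish
these two groups, which is why the reservation process must keep each
multicast MAC associated with only one stream.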
478 As an example, consider the use of individual seat speakers in a 479 cinema. These speakers are typically required to be cost reduced 480 since the quantities in a single theater can reach hundreds of seats. 481 Discovery protocols alone in a one thousand seat theater can generate 482 enough broadcast traffic to overwhelm a low powered CPU. Thus an 483 installation like this will benefit greatly from some type of traffic 484 segregation that can define groups of seats to reduce traffic within 485 each group. All seats in the theater must still be able to 486 communicate with a central controller. 488 There are many techniques that can be used to support this feature 489 including (but not limited to) the following examples. 491 2.3.5.1. Packet Forwarding Rules, VLANs and Subnets 493 Packet forwarding rules can be used to eliminate some extraneous 494 streaming traffic from reaching potentially low powered sink devices, 495 however there may be other types of broadcast traffic that should be 496 eliminated using other means for example VLANs or IP subnets. 498 2.3.5.2. Multicast Addressing (IPv4 and IPv6) 500 Multicast addressing is commonly used to keep bandwidth utilization 501 of shared links to a minimum. 503 Because of the MAC Address forwarding nature of Layer 2 bridges it is 504 important that a multicast MAC address is only associated with one 505 stream. This will prevent reservations from forwarding packets from 506 one stream down a path that has no interested sinks simply because 507 there is another stream on that same path that shares the same 508 multicast MAC address. 510 Since each multicast MAC Address can represent 32 different IPv4 511 multicast addresses there must be a process put in place to make sure 512 this does not occur. Requiring use of IPv6 address can achieve this, 513 however due to their continued prevalence, solutions that are 514 effective for IPv4 installations are also desirable. 516 2.3.6. 
Latency Optimization by a Central Controller 518 A central network controller might also perform optimizations based 519 on the individual path delays, for example sinks that are closer to 520 the source can inform the controller that they can accept greater 521 latency since they will be buffering packets to match presentation 522 times of farther away sinks. The controller might then move a stream 523 reservation on a short path to a longer path in order to free up 524 bandwidth for other critical streams on that short path. See slides 525 3-5 of [SRP_LATENCY]. 527 Additional optimization can be achieved in cases where sinks have 528 differing latency requirements, for example in a live outdoor concert 529 the speaker sinks have stricter latency requirements than the 530 recording hardware sinks. See slide 7 of [SRP_LATENCY]. 532 2.3.7. Reduced Device Cost Due To Reduced Buffer Memory 534 Device cost can be reduced in a system with guaranteed reservations 535 with a small bounded latency due to the reduced requirements for 536 buffering (i.e. memory) on sink devices. For example, a theme park 537 might broadcast a live event across the globe via a layer 3 protocol; 538 in such cases the size of the buffers required is proportional to the 539 latency bounds and jitter caused by delivery, which depends on the 540 worst case segment of the end-to-end network path. For example on 541 todays open internet the latency is typically unacceptable for audio 542 and video streaming without many seconds of buffering. In such 543 scenarios a single gateway device at the local network that receives 544 the feed from the remote site would provide the expensive buffering 545 required to mask the latency and jitter issues associated with long 546 distance delivery. Sink devices in the local location would have no 547 additional buffering requirements, and thus no additional costs, 548 beyond those required for delivery of local content. 
The sink device 549 would be receiving the identical packets as those sent by the source 550 and would be unaware that there were any latency or jitter issues 551 along the path. 553 2.4. Pro Audio Asks 555 o Layer 3 routing on top of AVB (and/or other high QoS networks) 557 o Content delivery with bounded, lowest possible latency 559 o IntServ and DiffServ integration with AVB (where practical) 561 o Single network for A/V and IT traffic 563 o Standards-based, interoperable, multi-vendor 565 o IT department friendly 567 o Enterprise-wide networks (e.g. size of San Francisco but not the 568 whole Internet (yet...)) 570 3. Electrical Utilities 571 3.1. Use Case Description 573 Many systems that an electrical utility deploys today rely on high 574 availability and deterministic behavior of the underlying networks. 575 Presented here are use cases in Transmission, Generation and 576 Distribution, including key timing and reliability metrics. In 577 addition, security issues and industry trends which affect the 578 architecture of next generation utility networks are discussed. 580 3.1.1. Transmission Use Cases 582 3.1.1.1. Protection 584 Protection means not only the protection of human operators but also 585 the protection of the electrical equipment and the preservation of 586 the stability and frequency of the grid. If a fault occurs in the 587 transmission or distribution of electricity then severe damage can 588 occur to human operators, electrical equipment and the grid itself, 589 leading to blackouts. 591 Communication links in conjunction with protection relays are used to 592 selectively isolate faults on high voltage lines, transformers, 593 reactors and other important electrical equipment. The role of the 594 teleprotection system is to selectively disconnect a faulty part by 595 transferring command signals within the shortest possible time. 597 3.1.1.1.1. 
Key Criteria

   The key criteria for measuring teleprotection performance are
   command transmission time, dependability, and security.  These
   criteria are defined by the IEC standard 60834 as follows:

   o  Transmission time (Speed): The time between the moment when the
      state changes at the transmitter input and the moment of the
      corresponding change at the receiver output, including
      propagation delay.  The overall operating time for a
      teleprotection system includes the time for initiating the
      command at the transmitting end, the propagation delay over the
      network (including equipment), and the selection and decision
      time at the receiving end, including any additional delay due to
      a noisy environment.

   o  Dependability: The ability to issue and receive valid commands
      in the presence of interference and/or noise, by minimizing the
      probability of a missing command (PMC).  Dependability targets
      are typically set for a specific bit error rate (BER) level.

   o  Security: The ability to prevent false tripping due to a noisy
      environment, by minimizing the probability of unwanted commands
      (PUC).  Security targets are also set for a specific bit error
      rate (BER) level.

   Additional elements of the teleprotection system that impact its
   performance include:

   o  Network bandwidth

   o  Failure recovery capacity (aka resiliency)

3.1.1.1.2.  Fault Detection and Clearance Timing

   Most power line equipment can tolerate short circuits or faults for
   up to approximately five power cycles before sustaining
   irreversible damage or affecting other segments in the network.
   This translates to a total fault clearance time of 100 ms.  As a
   safety precaution, however, the actual operation time of protection
   systems is limited to 70-80 percent of this period, including fault
   recognition time, command transmission time, and line breaker
   switching time.
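As a non-normative illustration, the clearance budget described above can be sketched in a few lines of Python; the five-cycle tolerance is taken from the text, and the 75% safety factor used here is one illustrative value within the 70-80% range given above.

```python
# Non-normative sketch of the fault-clearance timing budget described
# above.  The cycle count comes from the text; the 0.75 safety factor
# is an illustrative value within the stated 70-80% range.

def clearance_budget(line_freq_hz=50, tolerable_cycles=5,
                     safety_factor=0.75):
    """Return (total clearance limit, protection operation limit) in ms."""
    cycle_ms = 1000.0 / line_freq_hz        # one power cycle in milliseconds
    total_ms = tolerable_cycles * cycle_ms  # ~100 ms for a 50 Hz line
    # Actual protection operation is limited to a fraction of the total.
    return total_ms, total_ms * safety_factor

total_ms, operation_ms = clearance_budget()
# For a 50 Hz line: total = 100.0 ms, operation limit = 75.0 ms.
```

Note that at 60 Hz the same five-cycle tolerance yields roughly 83 ms, which is why the 100 ms figure above should be read as an approximation.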
   Some system components, such as large electromechanical switches,
   require a particularly long time to operate and take up the
   majority of the total clearance time, leaving only a 10 ms window
   for the telecommunications part of the protection scheme,
   independent of the distance to travel.  Given the sensitivity of
   the issue, new networks impose requirements that are even more
   stringent: IEC standard 61850 limits the transfer time for the most
   critical protection messages to 1/4 - 1/2 cycle, or 4 - 8 ms (for
   60 Hz lines).

3.1.1.1.3.  Symmetric Channel Delay

   Teleprotection channels which are differential must be synchronous,
   which means that any delays on the transmit and receive paths must
   match each other.  Teleprotection systems ideally support zero
   asymmetric delay; typical legacy relays can tolerate delay
   discrepancies of up to 750 us.

   Some tools available for lowering delay variation below this
   threshold are:

   o  For legacy systems using Time Division Multiplexing (TDM),
      jitter buffers at the multiplexers on each end of the line can
      be used to offset delay variation by queuing sent and received
      packets.  The length of the queues must balance the need to
      regulate the rate of transmission with the need to limit overall
      delay, as larger buffers result in increased latency.

   o  For jitter-prone IP packet networks, traffic management tools
      can ensure that the teleprotection signals receive the highest
      transmission priority to minimize jitter.

   o  Standard packet-based synchronization technologies, such as
      IEEE 1588-2008 Precision Time Protocol (PTP) and Synchronous
      Ethernet (Sync-E), can help keep networks stable by maintaining
      a highly accurate clock source on the various network devices.

3.1.1.1.4.  Teleprotection Network Requirements (IEC 61850)

   The following table captures the main network metrics, based on the
   IEC 61850 standard.
   +-----------------------------+-------------------------------------+
   | Teleprotection Requirement  | Attribute                           |
   +-----------------------------+-------------------------------------+
   | One way maximum delay       | 4-10 ms                             |
   | Asymmetric delay required   | Yes                                 |
   | Maximum jitter              | less than 250 us (750 us for legacy |
   |                             | IED)                                |
   | Topology                    | Point to point, point to Multi-     |
   |                             | point                               |
   | Availability                | 99.9999                             |
   | Precise timing required     | Yes                                 |
   | Recovery time on node       | less than 50 ms - hitless           |
   | failure                     |                                     |
   | Performance management      | Yes, Mandatory                      |
   | Redundancy                  | Yes                                 |
   | Packet loss                 | 0.1% to 1%                          |
   +-----------------------------+-------------------------------------+

             Table 1: Teleprotection network requirements

3.1.1.1.5.  Inter-Trip Protection Scheme

   "Inter-tripping" is the signal-controlled tripping of a circuit
   breaker to complete the isolation of a circuit or piece of
   apparatus in concert with the tripping of other circuit breakers.

   +--------------------------------+----------------------------------+
   | Inter-Trip Protection          | Attribute                        |
   | Requirement                    |                                  |
   +--------------------------------+----------------------------------+
   | One way maximum delay          | 5 ms                             |
   | Asymmetric delay required      | No                               |
   | Maximum jitter                 | Not critical                     |
   | Topology                       | Point to point, point to Multi-  |
   |                                | point                            |
   | Bandwidth                      | 64 Kbps                          |
   | Availability                   | 99.9999                          |
   | Precise timing required        | Yes                              |
   | Recovery time on node failure  | less than 50 ms - hitless        |
   | Performance management         | Yes, Mandatory                   |
   | Redundancy                     | Yes                              |
   | Packet loss                    | 0.1%                             |
   +--------------------------------+----------------------------------+

         Table 2: Inter-Trip protection network requirements

3.1.1.1.6.  Current Differential Protection Scheme

   Current differential protection is commonly used for line
   protection, and is typical for protecting parallel circuits.
   At both ends of the line the current is measured by the
   differential relays, and both relays will trip the circuit breaker
   if the current going into the line does not equal the current going
   out of the line.  This type of protection scheme assumes some form
   of communications being present between the relays at both ends of
   the line, to allow both relays to compare measured current values.
   Line differential protection schemes assume a very low
   telecommunications delay between both relays, often as low as 5 ms.
   Moreover, as those systems are often not time-synchronized, they
   also assume symmetric telecommunications paths with constant delay,
   which allows comparing current measurement values taken at the
   exact same time.

   +----------------------------------+--------------------------------+
   | Current Differential Protection  | Attribute                      |
   | Requirement                      |                                |
   +----------------------------------+--------------------------------+
   | One way maximum delay            | 5 ms                           |
   | Asymmetric delay required        | Yes                            |
   | Maximum jitter                   | less than 250 us (750 us for   |
   |                                  | legacy IED)                    |
   | Topology                         | Point to point, point to       |
   |                                  | Multi-point                    |
   | Bandwidth                        | 64 Kbps                        |
   | Availability                     | 99.9999                        |
   | Precise timing required          | Yes                            |
   | Recovery time on node failure    | less than 50 ms - hitless      |
   | Performance management           | Yes, Mandatory                 |
   | Redundancy                       | Yes                            |
   | Packet loss                      | 0.1%                           |
   +----------------------------------+--------------------------------+

          Table 3: Current Differential Protection metrics

3.1.1.1.7.  Distance Protection Scheme

   The Distance (Impedance Relay) protection scheme is based on
   voltage and current measurements.  The network metrics are similar
   (but not identical) to those of Current Differential protection.
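As a non-normative illustration of the impedance relay principle, the relay derives an apparent impedance from its voltage and current phasors and trips when that impedance falls inside the protected zone; the zone reach value below is illustrative.

```python
# Non-normative sketch of a distance (impedance) relay decision.  The
# phasor values and zone reach are illustrative, not from IEC 61850.

def distance_trip(v_phasor: complex, i_phasor: complex,
                  zone_reach_ohms: float) -> bool:
    """Return True if the relay should trip the breaker."""
    if i_phasor == 0:
        return False                      # no current, nothing to measure
    z_apparent = v_phasor / i_phasor      # impedance seen from the relay
    return abs(z_apparent) < zone_reach_ohms

# A close-in fault collapses voltage and raises current, so the
# apparent impedance magnitude drops below the zone reach and the
# relay trips; under normal load the impedance stays well above it.
```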
   +-------------------------------+-----------------------------------+
   | Distance Protection           | Attribute                         |
   | Requirement                   |                                   |
   +-------------------------------+-----------------------------------+
   | One way maximum delay         | 5 ms                              |
   | Asymmetric delay required     | No                                |
   | Maximum jitter                | Not critical                      |
   | Topology                      | Point to point, point to Multi-   |
   |                               | point                             |
   | Bandwidth                     | 64 Kbps                           |
   | Availability                  | 99.9999                           |
   | Precise timing required       | Yes                               |
   | Recovery time on node failure | less than 50 ms - hitless         |
   | Performance management        | Yes, Mandatory                    |
   | Redundancy                    | Yes                               |
   | Packet loss                   | 0.1%                              |
   +-------------------------------+-----------------------------------+

              Table 4: Distance Protection requirements

3.1.1.1.8.  Inter-Substation Protection Signaling

   This use case describes the exchange of Sampled Value and/or GOOSE
   (Generic Object Oriented Substation Events) messages between
   Intelligent Electronic Devices (IEDs) in two substations for
   protection and tripping coordination.  The two IEDs are in a
   master-slave mode.

   The Current Transformer or Voltage Transformer (CT/VT) in one
   substation sends the sampled analog voltage or current value to the
   Merging Unit (MU) over hard wire.  The MU sends the
   time-synchronized 61850-9-2 sampled values to the slave IED.  The
   slave IED forwards the information to the master IED in the other
   substation.  The master IED makes the determination (for example,
   based on sampled value differentials) to send a trip command to the
   originating IED.  Once the slave IED/Relay receives the GOOSE trip
   for breaker tripping, it opens the breaker.  It then sends a
   confirmation message back to the master.  All data exchanges
   between IEDs are through Sampled Value and/or GOOSE messages.
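As a non-normative illustration of the master IED decision described above, the logic can be sketched as a comparison of time-aligned sampled current values from both line ends; the threshold and sample representation are illustrative, not taken from IEC 61850.

```python
# Non-normative sketch of the master IED decision: compare time-aligned
# sampled current values from both ends of the line and request a GOOSE
# trip when the differential exceeds a threshold.  The threshold and
# list-of-floats sample representation are illustrative.

def master_ied_should_trip(local_amps, remote_amps, threshold_amps):
    """Return True if any aligned sample pair shows differential
    current, i.e. current entering the line does not equal current
    leaving it (an internal fault)."""
    return any(abs(a - b) > threshold_amps
               for a, b in zip(local_amps, remote_amps))

# Healthy line: current in equals current out, no trip is requested.
# Internal fault: a differential appears and the trip is requested.
```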
   +----------------------------------+--------------------------------+
   | Inter-Substation Protection      | Attribute                      |
   | Requirement                      |                                |
   +----------------------------------+--------------------------------+
   | One way maximum delay            | 5 ms                           |
   | Asymmetric delay required        | No                             |
   | Maximum jitter                   | Not critical                   |
   | Topology                         | Point to point, point to       |
   |                                  | Multi-point                    |
   | Bandwidth                        | 64 Kbps                        |
   | Availability                     | 99.9999                        |
   | Precise timing required          | Yes                            |
   | Recovery time on node failure    | less than 50 ms - hitless      |
   | Performance management           | Yes, Mandatory                 |
   | Redundancy                       | Yes                            |
   | Packet loss                      | 1%                             |
   +----------------------------------+--------------------------------+

         Table 5: Inter-Substation Protection requirements

3.1.1.2.  Intra-Substation Process Bus Communications

   This use case describes the data flow from the CT/VT to the IEDs in
   the substation via the MU.  The CT/VT in the substation send the
   analog voltage or current values to the MU over hard wire.  The MU
   converts the analog values into digital format (typically
   time-synchronized Sampled Values as specified by IEC 61850-9-2) and
   sends them to the IEDs in the substation.  The GPS Master Clock can
   send 1PPS or IRIG-B format to the MU through a serial port, or the
   IEEE 1588 protocol via a network.  Process bus communication using
   IEC 61850 simplifies connectivity within the substation, removes
   the requirement for multiple serial connections, and removes the
   slow serial bus architectures that are typically used.  It also
   provides increased flexibility and increased speed with the use of
   multicast messaging between multiple devices.
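As a non-normative illustration of the Merging Unit role described above, analog CT/VT readings are converted into counter-stamped sampled values for multicast to the substation IEDs; the field names below are illustrative and do not follow the actual IEC 61850-9-2 encoding.

```python
# Non-normative sketch of the Merging Unit (MU) conversion step.  The
# SampledValue fields are illustrative; the real IEC 61850-9-2 frame
# layout differs.

from dataclasses import dataclass

@dataclass
class SampledValue:
    sample_count: int      # position of the sample within the stream
    current_amps: float
    voltage_volts: float

def digitize(analog_pairs, start_count=0):
    """Convert (current, voltage) analog readings into counter-stamped
    sampled values, ready to be multicast to the substation IEDs."""
    return [SampledValue(start_count + i, amps, volts)
            for i, (amps, volts) in enumerate(analog_pairs)]
```

The sample counter lets every receiving IED align samples from different MUs, which is what makes the comparisons in the protection use cases above possible.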
   +----------------------------------+--------------------------------+
   | Intra-Substation Protection      | Attribute                      |
   | Requirement                      |                                |
   +----------------------------------+--------------------------------+
   | One way maximum delay            | 5 ms                           |
   | Asymmetric delay required        | No                             |
   | Maximum jitter                   | Not critical                   |
   | Topology                         | Point to point, point to       |
   |                                  | Multi-point                    |
   | Bandwidth                        | 64 Kbps                        |
   | Availability                     | 99.9999                        |
   | Precise timing required          | Yes                            |
   | Recovery time on node failure    | less than 50 ms - hitless      |
   | Performance management           | Yes, Mandatory                 |
   | Redundancy                       | Yes - No                       |
   | Packet loss                      | 0.1%                           |
   +----------------------------------+--------------------------------+

         Table 6: Intra-Substation Protection requirements

3.1.1.3.  Wide Area Monitoring and Control Systems

   The application of synchrophasor measurement data from Phasor
   Measurement Units (PMUs) to Wide Area Monitoring and Control
   Systems promises to provide important new capabilities for
   improving system stability.  Access to PMU data enables more timely
   situational awareness over larger portions of the grid than what
   has been possible historically with normal SCADA (Supervisory
   Control and Data Acquisition) data.  Handling the volume and
   real-time nature of synchrophasor data presents unique challenges
   for existing application architectures.  A Wide Area Management
   System (WAMS) makes it possible for the condition of the bulk power
   system to be observed and understood in real time so that
   protective, preventative, or corrective action can be taken.
   Because of the very high sampling rate of measurements and the
   strict requirement for time synchronization of the samples, WAMS
   has stringent telecommunications requirements in an IP network, as
   captured in the following table:

   +----------------------+--------------------------------------------+
   | WAMS Requirement     | Attribute                                  |
   +----------------------+--------------------------------------------+
   | One way maximum      | 50 ms                                      |
   | delay                |                                            |
   | Asymmetric delay     | No                                         |
   | required             |                                            |
   | Maximum jitter       | Not critical                               |
   | Topology             | Point to point, point to Multi-point,      |
   |                      | Multi-point to Multi-point                 |
   | Bandwidth            | 100 Kbps                                   |
   | Availability         | 99.9999                                    |
   | Precise timing       | Yes                                        |
   | required             |                                            |
   | Recovery time on     | less than 50 ms - hitless                  |
   | node failure         |                                            |
   | Performance          | Yes, Mandatory                             |
   | management           |                                            |
   | Redundancy           | Yes                                        |
   | Packet loss          | 1%                                         |
   | Consecutive Packet   | At least 1 packet per application cycle    |
   | Loss                 | must be received.                          |
   +----------------------+--------------------------------------------+

           Table 7: WAMS Special Communication Requirements

3.1.1.4.  IEC 61850 WAN Engineering Guidelines Requirement
          Classification

   The IEC (International Electrotechnical Commission) has recently
   published a Technical Report which offers guidelines on how to
   define and deploy Wide Area Networks for the interconnection of
   electric substations, generation plants, and SCADA operation
   centers.  IEC 61850-90-12 provides a classification of WAN
   communication requirements into four classes.
   Table 8 summarizes these requirements:

   +----------------+------------+------------+------------+-----------+
   | WAN            | Class WA   | Class WB   | Class WC   | Class WD  |
   | Requirement    |            |            |            |           |
   +----------------+------------+------------+------------+-----------+
   | Application    | EHV (Extra | HV (High   | MV (Medium | General   |
   | field          | High       | Voltage)   | Voltage)   | purpose   |
   |                | Voltage)   |            |            |           |
   | Latency        | 5 ms       | 10 ms      | 100 ms     | > 100 ms  |
   | Jitter         | 10 us      | 100 us     | 1 ms       | 10 ms     |
   | Latency        | 100 us     | 1 ms       | 10 ms      | 100 ms    |
   | Asymmetry      |            |            |            |           |
   | Time Accuracy  | 1 us       | 10 us      | 100 us     | 10 to 100 |
   |                |            |            |            | ms        |
   | Bit Error rate | 10^-7 to   | 10^-5 to   | 10^-3      |           |
   |                | 10^-6      | 10^-4      |            |           |
   | Unavailability | 10^-7 to   | 10^-5 to   | 10^-3      |           |
   |                | 10^-6      | 10^-4      |            |           |
   | Recovery delay | Zero       | 50 ms      | 5 s        | 50 s      |
   | Cyber security | extremely  | High       | Medium     | Medium    |
   |                | high       |            |            |           |
   +----------------+------------+------------+------------+-----------+

    Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC

3.1.2.  Generation Use Case

   Energy generation systems are complex infrastructures that require
   control of both the generated power and the generation
   infrastructure.

3.1.2.1.  Control of the Generated Power

   The electrical power generation frequency must be maintained within
   a very narrow band.  Deviations from the acceptable frequency range
   are detected, and the required signals are sent to the power plants
   for frequency regulation.

   Automatic Generation Control (AGC) is a system for adjusting the
   power output of generators at different power plants, in response
   to changes in the load.
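As a non-normative illustration of the frequency-regulation idea behind AGC, a deviation of the grid frequency from its nominal value drives a corrective change in plant power output; the gain constant below is illustrative.

```python
# Non-normative sketch of the AGC regulation principle: grid frequency
# deviation from nominal drives a corrective generation setpoint
# change.  The gain (a frequency-bias style constant) is illustrative.

def agc_adjustment_mw(measured_hz, nominal_hz=50.0, gain_mw_per_hz=100.0):
    """Return the requested change in generation output (MW).
    Under-frequency (load exceeds generation) yields a positive value,
    i.e. a request to raise output; over-frequency yields a negative
    value."""
    return gain_mw_per_hz * (nominal_hz - measured_hz)

# Frequency sagging to 49.9 Hz requests roughly +10 MW of additional
# generation; frequency above nominal requests a reduction.
```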
   +---------------------------------------------------+---------------+
   | FCAG (Frequency Control Automatic Generation)     | Attribute     |
   | Requirement                                       |               |
   +---------------------------------------------------+---------------+
   | One way maximum delay                             | 500 ms        |
   | Asymmetric delay required                         | No            |
   | Maximum jitter                                    | Not critical  |
   | Topology                                          | Point to      |
   |                                                   | point         |
   | Bandwidth                                         | 20 Kbps       |
   | Availability                                      | 99.999        |
   | Precise timing required                           | Yes           |
   | Recovery time on node failure                     | N/A           |
   | Performance management                            | Yes,          |
   |                                                   | Mandatory     |
   | Redundancy                                        | Yes           |
   | Packet loss                                       | 1%            |
   +---------------------------------------------------+---------------+

              Table 9: FCAG Communication Requirements

3.1.2.2.  Control of the Generation Infrastructure

   The control of the generation infrastructure combines requirements
   from industrial automation systems and energy generation systems.
   This section considers the use case of the control of the
   generation infrastructure of a wind turbine.

                     |
                     |
                     |   +-----------------+
                     |   |   +----+        |
                     |   |   |WTRM| WGEN   |
                WROT x==|===|    |         |
                     |   |   +----+   WCNV |
                     |   |WNAC             |
                     |   +---+---WYAW---+--+
                     |       |          |
                     |       |          |   +----+
                     |WTRF   |          |   |WMET|
                     |       |          |   |    |
        Wind Turbine |       +--+-+     |   +--+-+
          Controller |          |       |      |
        WTUR         |          |       |      |
        WREP         |          |       |      |
        WSLG         |          |       |      |
        WALG         |  WTOW    |       |      |

                Figure 1: Wind Turbine Control Network

   Figure 1 presents the subsystems that operate a wind turbine.
   These subsystems include:

   o  WROT (Rotor Control)

   o  WNAC (Nacelle Control) (nacelle: housing containing the
      generator)

   o  WTRM (Transmission Control)

   o  WGEN (Generator)

   o  WYAW (Yaw Controller) (of the tower head)

   o  WCNV (In-Turbine Power Converter)

   o  WMET (External Meteorological Station providing real-time
      information to the controllers of the tower)

   Traffic characteristics relevant for the network planning and
   dimensioning process in a wind turbine scenario are listed below.
   The values in this section are based mainly on the relevant
   references [Ahm14] and [Spe09].  Each logical node (Figure 1) is a
   part of the metering network and produces analog measurements and
   status information which must comply with their respective data
   rate constraints.

   +-----------+--------+--------+-------------+---------+-------------+
   | Subsystem | Sensor | Analog | Data Rate   | Status  | Data Rate   |
   |           | Count  | Sample | (bytes/sec) | Sample  | (bytes/sec) |
   |           |        | Count  |             | Count   |             |
   +-----------+--------+--------+-------------+---------+-------------+
   | WROT      | 14     | 9      | 642         | 5       | 10          |
   | WTRM      | 18     | 10     | 2828        | 8       | 16          |
   | WGEN      | 14     | 12     | 73764       | 2       | 4           |
   | WCNV      | 14     | 12     | 74060       | 2       | 4           |
   | WTRF      | 12     | 5      | 73740       | 2       | 4           |
   | WNAC      | 12     | 9      | 112         | 3       | 6           |
   | WYAW      | 7      | 8      | 220         | 4       | 8           |
   | WTOW      | 4      | 1      | 8           | 3       | 6           |
   | WMET      | 7      | 7      | 228         | -       | -           |
   +-----------+--------+--------+-------------+---------+-------------+

            Table 10: Wind Turbine Data Rate Constraints

   Quality of Service (QoS) constraints for different services are
   presented in Table 11.  These constraints are defined by the IEEE
   1646 standard [IEEE1646] and the IEC 61400 standard [IEC61400].
   +---------------------+---------+-------------+---------------------+
   | Service             | Latency | Reliability | Packet Loss Rate    |
   +---------------------+---------+-------------+---------------------+
   | Analogue measure    | 16 ms   | 99.99%      | < 10^-6             |
   | Status information  | 16 ms   | 99.99%      | < 10^-6             |
   | Protection traffic  | 4 ms    | 100.00%     | < 10^-9             |
   | Reporting and       | 1 s     | 99.99%      | < 10^-6             |
   | logging             |         |             |                     |
   | Video surveillance  | 1 s     | 99.00%      | No specific         |
   |                     |         |             | requirement         |
   | Internet connection | 60 min  | 99.00%      | No specific         |
   |                     |         |             | requirement         |
   | Control traffic     | 16 ms   | 100.00%     | < 10^-9             |
   | Data polling        | 16 ms   | 99.99%      | < 10^-6             |
   +---------------------+---------+-------------+---------------------+

      Table 11: Wind Turbine Reliability and Latency Constraints

3.1.2.2.1.  Intra-Domain Network Considerations

   A wind turbine is composed of a large set of subsystems, including
   sensors and actuators which require time-critical operation.  The
   reliability and latency constraints of these different subsystems
   are shown in Table 11.  These subsystems are connected to an
   intra-domain network which is used to monitor and control the
   operation of the turbine and connect it to the SCADA subsystems.
   The different components are interconnected using fiber optics,
   industrial buses, industrial Ethernet, EtherCAT, or a combination
   of them.  Industrial signaling and control protocols such as
   Modbus, Profibus, Profinet, and EtherCAT are used directly on top
   of the Layer 2 transport or encapsulated over TCP/IP.

   The data collected from the sensors and condition monitoring
   systems is multiplexed onto fiber cables for transmission to the
   base of the tower, and to remote control centers.  The turbine
   controller continuously monitors the condition of the wind turbine
   and collects statistics on its operation.
   This controller also manages a large number of switches, hydraulic
   pumps, valves, and motors within the wind turbine.

   There is usually a controller both at the bottom of the tower and
   in the nacelle.  The communication between these two controllers
   usually takes place using fiber optics instead of copper links.
   Sometimes, a third controller is installed in the hub of the rotor
   to manage the pitch of the blades.  That unit usually communicates
   with the nacelle unit using serial communications.

3.1.2.2.2.  Inter-Domain Network Considerations

   A remote control center belonging to a grid operator regulates the
   power output, enables remote actuation, and monitors the health of
   one or more wind parks in tandem.  It connects to the local control
   center in a wind park over the Internet (Figure 2) via firewalls at
   both ends.  The AS path between the remote control center and the
   wind park typically involves several ISPs at different tiers.  For
   example, a remote control center in Denmark can regulate a wind
   park in Greece over the normal public AS path between the two
   locations.

   The remote control center is part of the SCADA system, setting the
   desired power output to the wind park and reading back the result
   once the new power output level has been set.  Traffic between the
   remote control center and the wind park typically consists of
   protocols like IEC 60870-5-104 [IEC-60870-5-104], OPC XML-DA
   [OPCXML], Modbus [MODBUS], and SNMP [RFC3411].  Currently, traffic
   flows between the wind farm and the remote control center are best
   effort.  QoS requirements are not strict, so no SLAs or service
   provisioning mechanisms (e.g., VPN) are employed.  In case of
   events like equipment failure, tolerance for alarm delay is on the
   order of minutes, due to redundant systems already in place.
   +--------------+
   |              |
   |              |
   | Wind Park #1 +----+
   |              |    |       XXXXXX
   |              |    |   X XXXXXXXX       +----------------+
   +--------------+    | XXXX   X    XXXXX  |                |
                       +---+            XXX | Remote Control |
                        XXX   Internet  +---+     Center     |
                       +---+X        XXX    |                |
   +--------------+    |     XXXXXXX     XX |                |
   |              |    |  XX       XXXXXXX  +----------------+
   |              |    |      XXXXX
   | Wind Park #2 +----+
   |              |
   |              |
   +--------------+

             Figure 2: Wind Turbine Control via Internet

   Future use cases will require bounded latency, bounded jitter, and
   extraordinarily low packet loss for inter-domain traffic flows, due
   to the softwarization and virtualization of core wind farm
   equipment (e.g. switches, firewalls, and SCADA server components).
   These factors will create opportunities for service providers to
   install new services and dynamically manage them from remote
   locations.  For example, to enable fail-over of a local SCADA
   server, a SCADA server in another wind farm site (under the
   administrative control of the same operator) could be utilized
   temporarily (Figure 3).  In that case local traffic would be
   forwarded to the remote SCADA server, and existing intra-domain QoS
   and timing parameters would have to be met for inter-domain traffic
   flows.

   +--------------+
   |              |
   |              |
   | Wind Park #1 +----+
   |              |    |       XXXXXX
   |              |    |   X XXXXXXXX       +----------------+
   +--------------+    | XXXX   X    XXXXX  |                |
                       +---+  Operator  XXX | Remote Control |
                        XXX Administered+---+     Center     |
                       +---+X   WAN  XXX    |                |
   +--------------+    |     XXXXXXX     XX |                |
   |              |    |  XX       XXXXXXX  +----------------+
   |              |    |      XXXXX
   | Wind Park #2 +----+
   |              |
   |              |
   +--------------+

    Figure 3: Wind Turbine Control via Operator Administered WAN

3.1.3.  Distribution Use Case

3.1.3.1.
Fault Location, Isolation, and Service Restoration (FLISR)

   Fault Location, Isolation, and Service Restoration (FLISR) refers
   to the ability to automatically locate a fault, isolate it, and
   restore service in the distribution network.  This will likely be
   the first widespread application of distributed intelligence in the
   grid.

   Static power switch status (open/closed) in the network dictates
   the power flow to secondary substations.  Reconfiguring the network
   in the event of a fault is typically done manually on site to
   energize/de-energize alternate paths.  Automating the operation of
   substation switchgear allows the flow of power to be altered
   automatically under fault conditions.

   FLISR can be managed centrally from a Distribution Management
   System (DMS) or executed locally through distributed control via
   intelligent switches and fault sensors.

   +----------------------+--------------------------------------------+
   | FLISR Requirement    | Attribute                                  |
   +----------------------+--------------------------------------------+
   | One way maximum      | 80 ms                                      |
   | delay                |                                            |
   | Asymmetric delay     | No                                         |
   | required             |                                            |
   | Maximum jitter       | 40 ms                                      |
   | Topology             | Point to point, point to Multi-point,      |
   |                      | Multi-point to Multi-point                 |
   | Bandwidth            | 64 Kbps                                    |
   | Availability         | 99.9999                                    |
   | Precise timing       | Yes                                        |
   | required             |                                            |
   | Recovery time on     | Depends on customer impact                 |
   | node failure         |                                            |
   | Performance          | Yes, Mandatory                             |
   | management           |                                            |
   | Redundancy           | Yes                                        |
   | Packet loss          | 0.1%                                       |
   +----------------------+--------------------------------------------+

            Table 12: FLISR Communication Requirements

3.2.  Electrical Utilities Today

   Many utilities still rely on complex environments formed of
   multiple application-specific proprietary networks, including TDM
   networks.
   In this kind of environment there is no mixing of OT and IT
   applications on the same network, and information is siloed between
   operational areas.

   Specific calibration of the full chain is required, which is
   costly.

   This kind of environment prevents utility operations from realizing
   the operational efficiency benefits, visibility, and functional
   integration of operational information across grid applications and
   data networks.

   In addition, there are many security-related issues, as discussed
   in the following section.

3.2.1.  Security Current Practices and Limitations

   Grid monitoring and control devices are already targets for cyber
   attacks, and legacy telecommunications protocols have many
   intrinsic network-related vulnerabilities.  For example, DNP3,
   Modbus, PROFIBUS/PROFINET, and other protocols are designed around
   a common paradigm of request and respond.  Each protocol is
   designed for a master device such as an HMI (Human Machine
   Interface) system to send commands to subordinate slave devices to
   retrieve data (reading inputs) or control (writing to outputs).
   Because many of these protocols lack authentication, encryption, or
   other basic security measures, they are prone to network-based
   attacks, allowing a malicious actor or attacker to utilize the
   request-and-respond system as a mechanism for
   command-and-control-like functionality.  Specific security concerns
   common to most industrial control protocols, including utility
   telecommunication protocols, include the following:

   o  Network or transport errors (e.g. malformed packets or excessive
      latency) can cause protocol failure.

   o  Protocol commands may be available that are capable of forcing
      slave devices into inoperable states, including powering-off
      devices, forcing them into a listen-only state, or disabling
      alarming.
   o  Protocol commands may be available that are capable of
      restarting communications and otherwise interrupting processes.

   o  Protocol commands may be available that are capable of clearing,
      erasing, or resetting diagnostic information such as counters
      and diagnostic registers.

   o  Protocol commands may be available that are capable of
      requesting sensitive information about the controllers, their
      configurations, or other need-to-know information.

   o  Most protocols are application layer protocols transported over
      TCP; therefore it is easy to transport commands over
      non-standard ports or inject commands into authorized traffic
      flows.

   o  Protocol commands may be available that are capable of
      broadcasting messages to many devices at once (i.e. a potential
      DoS).

   o  Protocol commands may be available to query the device network
      to obtain defined points and their values (i.e. a configuration
      scan).

   o  Protocol commands may be available that will list all available
      function codes (i.e. a function scan).

   These inherent vulnerabilities, along with increasing connectivity
   between IT and OT networks, make network-based attacks very
   feasible.

   Simple injection of malicious protocol commands provides control
   over the target process.  Altering legitimate protocol traffic can
   also alter information about a process and disrupt the legitimate
   controls that are in place over that process.  A man-in-the-middle
   attack could provide both control over a process and
   misrepresentation of data back to operator consoles.

3.3.  Electrical Utilities Future

   The business and technology trends that are sweeping the utility
   industry will drastically transform the utility business from the
   way it has been for many decades.
   At the core of many of these changes is a drive to modernize the
   electrical grid with an integrated telecommunications
   infrastructure.  However, interoperability concerns, legacy
   networks, disparate tools, and stringent security requirements all
   add complexity to the grid transformation.  Given the range and
   diversity of the requirements that should be addressed by the next
   generation telecommunications infrastructure, utilities need to
   adopt a holistic architectural approach to integrate the electrical
   grid with digital telecommunications across the entire power
   delivery chain.

   The key to modernizing grid telecommunications is to provide a
   common, adaptable, multi-service network infrastructure for the
   entire utility organization.  Such a network serves as the platform
   for current capabilities while enabling future expansion of the
   network to accommodate new applications and services.

   To meet this diverse set of requirements, both today and in the
   future, the next generation utility telecommunications network will
   be based on an open-standards-based IP architecture.  An end-to-end
   IP architecture takes advantage of nearly three decades of IP
   technology development, facilitating interoperability and device
   management across disparate networks and devices, as has already
   been demonstrated in many mission-critical and highly secure
   networks.

   IPv6 is seen as a future telecommunications technology for the
   Smart Grid; the IEC (International Electrotechnical Commission) and
   different National Committees have mandated a specific ad hoc group
   (AHG8) to define the migration strategy to IPv6 for all the IEC
   TC57 power automation standards.
The AHG8 has recently finalised the work 1340 on the migration strategy and the following Technical Report has been 1341 issued: IEC TR 62357-200:2015: Guidelines for migration from Internet 1342 Protocol version 4 (IPv4) to Internet Protocol version 6 (IPv6). 1344 Cloud-based SCADA systems will control and monitor the critical and 1345 non-critical subsystems of generation systems, for example wind 1346 farms. 1348 3.3.1. Migration to Packet-Switched Network 1350 Throughout the world, utilities are increasingly planning for a 1351 future based on smart grid applications requiring advanced 1352 telecommunications systems. Many of these applications utilize 1353 packet connectivity for communicating information and control signals 1354 across the utility's Wide Area Network (WAN), made possible by 1355 technologies such as multiprotocol label switching (MPLS). The data 1356 that traverses the utility WAN includes: 1358 o Grid monitoring, control, and protection data 1360 o Non-control grid data (e.g. asset data for condition-based 1361 monitoring) 1363 o Physical safety and security data (e.g. voice and video) 1365 o Remote worker access to corporate applications (voice, maps, 1366 schematics, etc.) 1368 o Field area network backhaul for smart metering, and distribution 1369 grid management 1371 o Enterprise traffic (email, collaboration tools, business 1372 applications) 1374 WANs support this wide variety of traffic to and from substations, 1375 the transmission and distribution grid, generation sites, between 1376 control centers, and between work locations and data centers. To 1377 maintain this rapidly expanding set of applications, many utilities 1378 are taking steps to evolve present time-division multiplexing (TDM) 1379 based and frame relay infrastructures to packet systems. 
Packet- 1380 based networks are designed to provide greater functionality and 1381 higher levels of service for applications, while continuing to 1382 deliver reliability and deterministic (real-time) traffic support. 1384 3.3.2. Telecommunications Trends 1386 These general telecommunications topics are in addition to the use 1387 cases that have been addressed so far. These include both current 1388 and future telecommunications related topics that should be factored 1389 into the network architecture and design. 1391 3.3.2.1. General Telecommunications Requirements 1393 o IP Connectivity everywhere 1395 o Monitoring services everywhere and from different remote centers 1396 o Move services to a virtual data center 1398 o Unify access to applications / information from the corporate 1399 network 1401 o Unify services 1403 o Unified Communications Solutions 1405 o Mix of fiber and microwave technologies - obsolescence of SONET/ 1406 SDH or TDM 1408 o Standardize grid telecommunications protocol to an open standard to 1409 ensure interoperability 1411 o Reliable Telecommunications for Transmission and Distribution 1412 Substations 1414 o IEEE 1588 time synchronization Client / Server Capabilities 1416 o Integration of Multicast Design 1418 o QoS Requirements Mapping 1420 o Enable Future Network Expansion 1422 o Substation Network Resilience 1424 o Fast Convergence Design 1426 o Scalable Headend Design 1428 o Define Service Level Agreements (SLA) and Enable SLA Monitoring 1430 o Integration of 3G/4G Technologies and future technologies 1432 o Ethernet Connectivity for Station Bus Architecture 1434 o Ethernet Connectivity for Process Bus Architecture 1436 o Protection, teleprotection and PMU (Phasor Measurement Unit) on IP 1438 3.3.2.2. Specific Network Topologies of Smart Grid Applications 1440 Utilities often have very large private telecommunications networks, 1441 covering an entire territory or country.
The main purpose of the 1442 network, until now, has been to support transmission network 1443 monitoring, control, and automation, remote control of generation 1444 sites, and providing FCAPS (Fault, Configuration, Accounting, 1445 Performance, Security) services from centralized network operation 1446 centers. 1448 Going forward, one network will support operation and maintenance of 1449 electrical networks (generation, transmission, and distribution), 1450 voice and data services for tens of thousands of employees and for 1451 exchange with neighboring interconnections, and administrative 1452 services. To meet those requirements, a utility may deploy several 1453 physical networks leveraging different technologies across the 1454 country: an optical network and a microwave network for instance. 1455 Each protection and automation system between two points has two 1456 telecommunications circuits, one on each network. Path diversity 1457 between two substations is key. Regardless of the event type 1458 (hurricane, ice storm, etc.), one path shall stay available so the 1459 system can still operate. 1461 In the optical network, signals are transmitted over tens of 1462 thousands of circuits using fiber optic links, microwave and 1463 telephone cables. This network is the nervous system of the 1464 utility's power transmission operations. The optical network 1465 represents tens of thousands of km of cable deployed along the power 1466 lines, with individual runs as long as 280 km. 1468 3.3.2.3. Precision Time Protocol 1470 Some utilities do not use GPS clocks in generation substations. One 1471 of the main reasons is that some of the generation plants are 30 to 1472 50 meters deep under ground and the GPS signal can be weak and 1473 unreliable. Instead, atomic clocks are used. Clocks are 1474 synchronized amongst each other. Rubidium clocks provide the clock 1475 signal and 1 ms timestamps for IRIG-B.
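To put the atomic-clock approach above in perspective, a rough holdover estimate can be computed from the clock's frequency stability. The fractional frequency offset used below (1e-11) is an assumed, typical figure for rubidium oscillators, not a value taken from this document:

```python
# Illustrative holdover estimate for a free-running rubidium clock.
# The fractional frequency offset (1e-11) is an assumed, typical value
# for rubidium oscillators, not a figure from this document.
def holdover_error_seconds(fractional_offset: float, days: float) -> float:
    """Accumulated time error after running open-loop for `days`."""
    return fractional_offset * days * 86400.0

# With a 1e-11 offset, a month of holdover accumulates roughly 26
# microseconds of error, well below the 1 ms IRIG-B timestamp
# granularity mentioned above.
error = holdover_error_seconds(1e-11, 30)
print(f"{error * 1e6:.1f} us")
```

This back-of-the-envelope calculation suggests why free-running atomic clocks, periodically synchronized among themselves, can be adequate where GPS reception is unreliable.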
1477 Some companies plan to transition to the Precision Time Protocol 1478 (PTP, [IEEE1588]), distributing the synchronization signal over the 1479 IP/MPLS network. PTP provides a mechanism for synchronizing the 1480 clocks of participating nodes to a high degree of accuracy and 1481 precision. 1483 PTP operates based on the following assumptions: 1485 It is assumed that the network eliminates cyclic forwarding of PTP 1486 messages within each communication path (e.g. by using a spanning 1487 tree protocol). 1489 PTP is tolerant of an occasional missed message, duplicated 1490 message, or message that arrives out of order. However, PTP 1491 assumes that such impairments are relatively rare. 1493 PTP was designed assuming a multicast communication model; however, 1494 PTP also supports a unicast communication model as long as the 1495 behavior of the protocol is preserved. 1497 Like all message-based time transfer protocols, PTP time accuracy 1498 is degraded by delay asymmetry in the paths taken by event 1499 messages. PTP itself cannot detect asymmetry; however, if such 1500 delays are known a priori, PTP can correct for it. 1502 IEC 61850 defines the use of IEC/IEEE 61850-9-3:2016, "Precision 1503 time protocol profile for power utility automation". It is 1504 based on Annex B/IEC 62439, which offers support for redundant 1505 attachment of clocks to Parallel Redundancy Protocol (PRP) and High- 1506 availability Seamless Redundancy (HSR) networks. 1508 3.3.3. Security Trends in Utility Networks 1510 Although advanced telecommunications networks can assist in 1511 transforming the energy industry by playing a critical role in 1512 maintaining high levels of reliability, performance, and 1513 manageability, they also introduce the need for an integrated 1514 security infrastructure.
Many of the technologies being deployed to 1515 support smart grid projects such as smart meters and sensors can 1516 increase the vulnerability of the grid to attack. Top security 1517 concerns for utilities migrating to an intelligent smart grid 1518 telecommunications platform center on the following trends: 1520 o Integration of distributed energy resources 1522 o Proliferation of digital devices to enable management, automation, 1523 protection, and control 1525 o Regulatory mandates to comply with standards for critical 1526 infrastructure protection 1528 o Migration to new systems for outage management, distribution 1529 automation, condition-based maintenance, load forecasting, and 1530 smart metering 1532 o Demand for new levels of customer service and energy management 1534 This development of a diverse set of networks to support the 1535 integration of microgrids, open-access energy competition, and the 1536 use of network-controlled devices is driving the need for a converged 1537 security infrastructure for all participants in the smart grid, 1538 including utilities, energy service providers, and large commercial, 1539 industrial, and residential customers. Securing the assets of 1540 electric power delivery systems (from the control center to the 1541 substation, to the feeders and down to customer meters) requires an 1542 end-to-end security infrastructure that protects the myriad of 1543 telecommunications assets used to operate, monitor, and control power 1544 flow and measurement. 1546 "Cyber security" refers to all the security issues in automation and 1547 telecommunications that affect any functions related to the operation 1548 of the electric power systems.
Specifically, it involves the 1549 concepts of: 1551 o Integrity: data cannot be altered undetectably 1553 o Authenticity: the telecommunications parties involved must be 1554 validated as genuine 1556 o Authorization: only requests and commands from authorized 1557 users can be accepted by the system 1559 o Confidentiality: data must not be accessible to any 1560 unauthenticated users 1562 When designing and deploying new smart grid devices and 1563 telecommunications systems, it is imperative to understand the 1564 various impacts of these new components under a variety of attack 1565 situations on the power grid. Consequences of a cyber attack on the 1566 grid telecommunications network can be catastrophic. This is why 1567 security for the smart grid is not just an ad hoc feature or product; 1568 it is a complete framework integrating both physical and cyber 1569 security requirements and covering the entire smart grid network 1570 from generation to distribution. Security has therefore become one 1571 of the main foundations of the utility telecom network architecture 1572 and must be considered at every layer with a defense-in-depth 1573 approach. Migrating to IP-based protocols is key to addressing these 1574 challenges for two reasons: 1576 o IP enables a rich set of features and capabilities to enhance the 1577 security posture 1579 o IP is based on open standards, which allows interoperability 1580 between different vendors and products, driving down the costs 1581 associated with implementing security solutions in OT networks.
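As an illustrative sketch of the integrity and authenticity properties listed above, a keyed message authentication code (MAC) lets a receiver detect altered or forged commands. The command format and key handling here are hypothetical, not taken from any standard cited in this document:

```python
import hmac
import hashlib

# Hypothetical example: protect a SCADA-style command with HMAC-SHA256
# so that alteration or forgery is detectable (integrity + authenticity).
SECRET_KEY = b"example-shared-key"  # in practice, securely provisioned

def protect(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Append a keyed MAC to the command."""
    tag = hmac.new(key, command, hashlib.sha256).hexdigest().encode()
    return command + b"|" + tag

def verify(message: bytes, key: bytes = SECRET_KEY):
    """Return the command if the MAC checks out, else None."""
    command, _, tag = message.rpartition(b"|")
    expected = hmac.new(key, command, hashlib.sha256).hexdigest().encode()
    # constant-time comparison avoids leaking information via timing
    return command if hmac.compare_digest(tag, expected) else None

msg = protect(b"OPEN breaker=12")
assert verify(msg) == b"OPEN breaker=12"            # genuine command accepted
tampered = msg.replace(b"breaker=12", b"breaker=13")
assert verify(tampered) is None                      # alteration detected
```

Note that a MAC alone provides integrity and data origin authentication but not confidentiality; the latter would additionally require encryption.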
1583 Securing OT (Operational Technology) telecommunications over packet- 1584 switched IP networks follows the same principles that are foundational 1585 for securing the IT infrastructure, i.e., consideration must be given 1586 to enforcing electronic access control for both person-to-machine and 1587 machine-to-machine communications, and providing the appropriate 1588 levels of data privacy, device and platform integrity, and threat 1589 detection and mitigation. 1591 3.4. Electrical Utilities Asks 1593 o Mixed L2 and L3 topologies 1595 o Deterministic behavior 1597 o Bounded latency and jitter 1599 o Tight feedback intervals 1601 o High availability, low recovery time 1603 o Redundancy, low packet loss 1605 o Precise timing 1607 o Centralized computing of deterministic paths 1609 o Distributed configuration may also be useful 1611 4. Building Automation Systems 1613 4.1. Use Case Description 1615 A Building Automation System (BAS) manages equipment and sensors in a 1616 building for improving residents' comfort, reducing energy 1617 consumption, and responding to failures and emergencies. For 1618 example, the BAS measures the temperature of a room using sensors and 1619 then controls the HVAC (heating, ventilating, and air conditioning) 1620 to maintain a set temperature and minimize energy consumption. 1622 A BAS primarily performs the following functions: 1624 o Periodically measures states of devices, for example humidity and 1625 illuminance of rooms, open/close state of doors, fan speed, etc. 1627 o Stores the measured data. 1629 o Provides the measured data to BAS systems and operators. 1631 o Generates alarms for abnormal state of devices. 1633 o Controls devices (e.g. turn off room lights at 10:00 PM). 1635 4.2. Building Automation Systems Today 1637 4.2.1. BAS Architecture 1639 A typical BAS architecture of today is shown in Figure 4.
1641 +----------------------------+ 1642 | | 1643 | BMS HMI | 1644 | | | | 1645 | +----------------------+ | 1646 | | Management Network | | 1647 | +----------------------+ | 1648 | | | | 1649 | LC LC | 1650 | | | | 1651 | +----------------------+ | 1652 | | Field Network | | 1653 | +----------------------+ | 1654 | | | | | | 1655 | Dev Dev Dev Dev | 1656 | | 1657 +----------------------------+ 1659 BMS := Building Management Server 1660 HMI := Human Machine Interface 1661 LC := Local Controller 1663 Figure 4: BAS architecture 1665 There are typically two layers of network in a BAS. The upper one is 1666 called the Management Network and the lower one is called the Field 1667 Network. In management networks an IP-based communication protocol 1668 is used, while in field networks non-IP-based communication protocols 1669 ("field protocols") are mainly used. Field networks have specific 1670 timing requirements, whereas management networks can be best-effort. 1672 A Human Machine Interface (HMI) is typically a desktop PC used by 1673 operators to monitor and display device states, send device control 1674 commands to Local Controllers (LCs), and configure building schedules 1675 (for example "turn off all room lights in the building at 10:00 PM"). 1677 A Building Management Server (BMS) performs the following operations. 1679 o Collect and store device states from LCs at regular intervals. 1681 o Send control values to LCs according to a building schedule. 1683 o Send an alarm signal to operators if it detects abnormal device 1684 states. 1686 The BMS and HMI communicate with LCs via IP-based "management 1687 protocols" (see standards [bacnetip], [knx]). 1689 An LC is typically a Programmable Logic Controller (PLC) which is 1690 connected to several tens or hundreds of devices using "field 1691 protocols". An LC performs the following kinds of operations: 1693 o Measure device states and provide the information to BMS or HMI.
1695 o Send control values to devices, unilaterally or as part of a 1696 feedback control loop. 1698 There are many field protocols used today; some are standards-based 1699 and others are proprietary (see standards [lontalk], [modbus], 1700 [profibus] and [flnet]). The result is that BASs have multiple MAC/ 1701 PHY modules and interfaces. This makes BASs more expensive, slower 1702 to develop, and can result in "vendor lock-in" with multiple types of 1703 management applications. 1705 4.2.2. BAS Deployment Model 1707 An example BAS for medium or large buildings is shown in Figure 5. 1708 The physical layout spans multiple floors, and there is a monitoring 1709 room where the BAS management entities are located. Each floor will 1710 have one or more LCs depending upon the number of devices connected 1711 to the field network. 1713 +--------------------------------------------------+ 1714 | Floor 3 | 1715 | +----LC~~~~+~~~~~+~~~~~+ | 1716 | | | | | | 1717 | | Dev Dev Dev | 1718 | | | 1719 |--- | ------------------------------------------| 1720 | | Floor 2 | 1721 | +----LC~~~~+~~~~~+~~~~~+ Field Network | 1722 | | | | | | 1723 | | Dev Dev Dev | 1724 | | | 1725 |--- | ------------------------------------------| 1726 | | Floor 1 | 1727 | +----LC~~~~+~~~~~+~~~~~+ +-----------------| 1728 | | | | | | Monitoring Room | 1729 | | Dev Dev Dev | | 1730 | | | BMS HMI | 1731 | | Management Network | | | | 1732 | +--------------------------------+-----+ | 1733 | | | 1734 +--------------------------------------------------+ 1736 Figure 5: BAS Deployment model for Medium/Large Buildings 1738 Each LC is connected to the monitoring room via the Management 1739 network, and the management functions are performed within the 1740 building. In most cases, fast Ethernet (e.g. 100BASE-T) is used for 1741 the management network. Since the management network is non- 1742 realtime, use of Ethernet without quality of service is sufficient 1743 for today's deployment. 
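The BMS collection cycle described in Section 4.2.1 (polling each LC at regular intervals and storing the returned states) can be sketched as follows. The `LocalController` interface and its `read_states` method are illustrative assumptions, not an API from any cited standard:

```python
import time

# Illustrative sketch of a BMS measurement cycle: poll each Local
# Controller (LC) once per interval and store the timestamped states.
# The LC API below is an assumption made for this example.
POLL_INTERVAL_S = 0.1  # e.g. a 100 ms measurement interval

class LocalController:
    def __init__(self, name, sensors):
        self.name = name
        self.sensors = sensors  # sensor name -> current value

    def read_states(self):
        """Return a snapshot of all device states managed by this LC."""
        return dict(self.sensors)

def poll_once(controllers, store):
    """One BMS measurement cycle: collect and store states from all LCs."""
    timestamp = time.time()
    for lc in controllers:
        store.setdefault(lc.name, []).append((timestamp, lc.read_states()))

lcs = [LocalController("floor1", {"temp": 21.5, "door": "closed"})]
history = {}
poll_once(lcs, history)
# history now holds one timestamped snapshot per LC; a real BMS would
# repeat this every POLL_INTERVAL_S and raise alarms on abnormal states.
```

In a real deployment the loop would also have to complete every LC's sensor scan within the measurement interval, which is what drives the field network timing requirements discussed below.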
1745 In the field network a variety of physical interfaces such as RS232C 1746 and RS485 are used, which have specific timing requirements. Thus if 1747 a field network is to be replaced with an Ethernet or wireless 1748 network, such networks must support time-critical deterministic 1749 flows. 1751 In Figure 6, another deployment model is presented in which the 1752 management system is hosted remotely. This is becoming popular for 1753 small office and residential buildings in which a standalone 1754 monitoring system is not cost-effective. 1756 +---------------+ 1757 | Remote Center | 1758 | | 1759 | BMS HMI | 1760 +------------------------------------+ | | | | 1761 | Floor 2 | | +---+---+ | 1762 | +----LC~~~~+~~~~~+ Field Network| | | | 1763 | | | | | | Router | 1764 | | Dev Dev | +-------|-------+ 1765 | | | | 1766 |--- | ------------------------------| | 1767 | | Floor 1 | | 1768 | +----LC~~~~+~~~~~+ | | 1769 | | | | | | 1770 | | Dev Dev | | 1771 | | | | 1772 | | Management Network | WAN | 1773 | +------------------------Router-------------+ 1774 | | 1775 +------------------------------------+ 1777 Figure 6: Deployment model for Small Buildings 1779 Some interoperability is possible today in the Management Network, 1780 but not in today's field networks due to their non-IP-based design. 1782 4.2.3. Use Cases for Field Networks 1784 Below are use cases for Environmental Monitoring, Fire Detection, and 1785 Feedback Control, and their implications for field network 1786 performance. 1788 4.2.3.1. Environmental Monitoring 1790 The BMS polls each LC at a maximum measurement interval of 100ms (for 1791 example to draw a historical chart of 1 second granularity with a 10x 1792 sampling interval) and then performs the operations as specified by 1793 the operator. Each LC needs to measure each of its several hundred 1794 sensors once per measurement interval. 
Latency is not critical in 1795 this scenario as long as all sensor values are collected within the 1796 measurement interval. Availability is expected to be 99.999%. 1798 4.2.3.2. Fire Detection 1800 On detection of a fire, the BMS must stop the HVAC, close the fire 1801 shutters, turn on the fire sprinklers, send an alarm, etc. There are 1802 typically ~10s of sensors per LC that the BMS needs to manage. In this 1803 scenario the measurement interval is 10-50ms, the communication delay 1804 is 10ms, and the availability must be 99.9999%. 1806 4.2.3.3. Feedback Control 1808 BAS systems utilize feedback control in various ways; the most time- 1809 critical is control of DC motors, which require a short feedback 1810 interval (1-5ms) with low communication delay (10ms) and jitter 1811 (1ms). The feedback interval depends on the characteristics of the 1812 device and a target quality-of-control value. There are typically 1813 ~10s of such devices per LC. 1815 Communication delay is expected to be less than 10ms, jitter less 1816 than 1ms, while the availability must be 99.9999%. 1818 4.2.4. Security Considerations 1820 When BAS field networks were developed it was assumed that the field 1821 networks would always be physically isolated from external networks 1822 and therefore security was not a concern. In today's world many BASs 1823 are managed remotely and are thus connected to shared IP networks, so 1824 security is definitely a concern, yet security features are not 1825 available in the majority of BAS field network deployments. 1827 The management network, being an IP-based network, has the protocols 1828 available to enable network security, but in practice many BAS 1829 systems do not implement even the available security features such as 1830 device authentication or encryption for data in transit. 1832 4.3.
BAS Future 1834 In the future, demands for more fine-grained environmental monitoring 1835 and lower energy consumption will require more sensors and 1836 devices, and thus larger and more complex building networks. 1838 Building networks will be connected to or converged with other 1839 networks (Enterprise network, Home network, and Internet). 1841 Therefore better facilities for network management, control, 1842 reliability and security are critical in order to improve resident 1843 and operator convenience and comfort. For example, the ability to 1844 monitor and control building devices via the Internet would enable 1845 control of room lights or HVAC from a resident's 1846 desktop PC or phone application. 1848 4.4. BAS Asks 1850 The community would like to see an interoperable protocol 1851 specification that can satisfy the timing, security, availability and 1852 QoS constraints described above, such that the resulting converged 1853 network can replace the disparate field networks. Ideally this 1854 connectivity could extend to the open Internet. 1856 This would imply an architecture that can guarantee 1858 o Low communication delays (from <10ms to 100ms in a network of 1859 several hundred devices) 1861 o Low jitter (< 1 ms) 1863 o Tight feedback intervals (1ms - 10ms) 1865 o High network availability (up to 99.9999%) 1867 o Availability of network data in disaster scenarios 1869 o Authentication between management and field devices (both local 1870 and remote) 1872 o Integrity and data origin authentication of communication data 1873 between field and management devices 1875 o Confidentiality of data when communicated to a remote device 1877 5. Wireless for Industrial 1879 5.1. Use Case Description 1881 Wireless networks are useful for industrial applications, for example 1882 when portable, fast-moving or rotating objects are involved, and for 1883 the resource-constrained devices found in the Internet of Things 1884 (IoT).
1886 Such network-connected sensors, actuators, control loops (etc.) 1887 typically require that the underlying network support real-time 1888 quality of service (QoS), as well as specific classes of other 1889 network properties such as reliability, redundancy, and security. 1891 These networks may also contain very large numbers of devices, for 1892 example for factories, "big data" acquisition, and the IoT. Given 1893 the large numbers of devices installed, and the potential 1894 pervasiveness of the IoT, this is a huge and very cost-sensitive 1895 market. For example, a 1% cost reduction in some areas could save 1896 $100B. 1898 5.1.1. Network Convergence using 6TiSCH 1900 Some wireless network technologies support real-time QoS, and are 1901 thus useful for these kinds of networks, but others do not. For 1902 example WiFi is pervasive but does not provide guaranteed timing or 1903 delivery of packets, and thus is not useful in this context. 1905 This use case focuses on one specific wireless network technology 1906 which provides the required deterministic QoS, which is "IPv6 over 1907 the TSCH mode of IEEE 802.15.4e" (6TiSCH, where TSCH stands for 1908 "Time-Slotted Channel Hopping", see [I-D.ietf-6tisch-architecture], 1909 [IEEE802154], [IEEE802154e], and [RFC7554]). 1911 There are other deterministic wireless buses and networks available 1912 today, however they are incompatible with each other, and 1913 incompatible with IP traffic (for example [ISA100], [WirelessHART]). 1915 Thus the primary goal of this use case is to apply 6TiSCH as a 1916 converged IP- and standards-based wireless network for industrial 1917 applications, i.e. to replace multiple proprietary and/or 1918 incompatible wireless networking and wireless network management 1919 standards. 1921 5.1.2.
Common Protocol Development for 6TiSCH 1923 Today there are a number of protocols required by 6TiSCH which are 1924 still in development, and a second intent of this use case is to 1925 highlight the ways in which these "missing" protocols share goals in 1926 common with DetNet. Thus it is possible that some of the protocol 1927 technology developed for DetNet will also be applicable to 6TiSCH. 1929 These protocol goals are identified here, along with their 1930 relationship to DetNet. It is likely that ultimately the resulting 1931 protocols will not be identical, but will share design principles 1932 which contribute to the efficiency of enabling both DetNet and 6TiSCH. 1934 One such commonality is that, although at different time scales, in 1935 both TSN [IEEE802.1TSNTG] and TSCH a packet crossing the network from 1936 node to node follows a precise schedule, like a train that leaves 1937 intermediate stations at precise times along its path. This kind of 1938 operation reduces collisions, saves energy, and enables engineering 1939 the network for deterministic properties. 1941 Another commonality is remote monitoring and scheduling management of 1942 a TSCH network by a Path Computation Element (PCE) and Network 1943 Management Entity (NME). The PCE/NME manage timeslots and device 1944 resources in a manner that minimizes the interaction with and the 1945 load placed on resource-constrained devices. For example, a tiny IoT 1946 device may have just enough buffers to store one or a few IPv6 1947 packets, and will have limited bandwidth between peers such that it 1948 can maintain only a small amount of peer information, and will not be 1949 able to store many packets waiting to be forwarded. It is 1950 advantageous then for it to only be required to carry out the 1951 specific behavior assigned to it by the PCE/NME (as opposed to 1952 maintaining its own IP stack, for example).
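The PCE's timeslot management described above can be illustrated with a toy slotframe model. This data structure is a simplification for illustration only; it is not the 6TiSCH data model defined by the 6TiSCH working group:

```python
# Toy model of PCE-style slotframe scheduling: a "cell" is a
# (timeslot, channel_offset) pair assigned to one transmitter/receiver
# link. Simplified for illustration; not the actual 6TiSCH data model.
class Slotframe:
    def __init__(self, num_slots, num_channels):
        self.num_slots = num_slots
        self.num_channels = num_channels
        self.cells = {}  # (slot, channel_offset) -> (tx_node, rx_node)

    def schedule(self, tx, rx):
        """Assign the first free cell in a slot where neither node is busy."""
        for slot in range(self.num_slots):
            busy = {n for (s, _), link in self.cells.items()
                    if s == slot for n in link}
            if tx in busy or rx in busy:
                continue  # a node cannot be in two cells in the same slot
            for ch in range(self.num_channels):
                if (slot, ch) not in self.cells:
                    self.cells[(slot, ch)] = (tx, rx)
                    return (slot, ch)
        raise RuntimeError("slotframe full")

sf = Slotframe(num_slots=101, num_channels=16)
hop1 = sf.schedule("sensor", "router1")
hop2 = sf.schedule("router1", "gateway")
# The two hops land in different timeslots because router1 appears in
# both links; the packet thus crosses the network on a fixed timetable.
assert hop1[0] != hop2[0]
```

In a real deployment the PCE would compute such assignments centrally and push only each node's own cells to it, which is what keeps the burden on constrained devices small.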
1954 Note: Current WG discussion indicates that some peer-to-peer 1955 communication must be assumed, i.e. the PCE may communicate only 1956 indirectly with any given device, enabling hierarchical configuration 1957 of the system. 1959 6TiSCH depends on [PCE] and [I-D.ietf-detnet-architecture]. 1961 6TiSCH also depends on the fact that DetNet will maintain consistency 1962 with [IEEE802.1TSNTG]. 1964 5.2. Wireless Industrial Today 1966 Today industrial wireless is accomplished using multiple 1967 deterministic wireless networks which are incompatible with each 1968 other and with IP traffic. 1970 6TiSCH is not yet fully specified, so it cannot be used in today's 1971 applications. 1973 5.3. Wireless Industrial Future 1975 5.3.1. Unified Wireless Network and Management 1977 DetNet and 6TiSCH together can enable converged transport of 1978 deterministic and best-effort traffic flows between real-time 1979 industrial devices and wide area networks via IP routing. A high 1980 level view of a basic such network is shown in Figure 7. 1982 ---+-------- ............ ------------ 1983 | External Network | 1984 | +-----+ 1985 +-----+ | NME | 1986 | | LLN Border | | 1987 | | router +-----+ 1988 +-----+ 1989 o o o 1990 o o o o 1991 o o LLN o o o 1992 o o o o 1993 o 1995 Figure 7: Basic 6TiSCH Network 1997 Figure 8 shows a backbone router federating multiple synchronized 1998 6TiSCH subnets into a single subnet connected to the external 1999 network. 2001 ---+-------- ............ 
------------ 2002 | External Network | 2003 | +-----+ 2004 | +-----+ | NME | 2005 +-----+ | +-----+ | | 2006 | | Router | | PCE | +-----+ 2007 | | +--| | 2008 +-----+ +-----+ 2009 | | 2010 | Subnet Backbone | 2011 +--------------------+------------------+ 2012 | | | 2013 +-----+ +-----+ +-----+ 2014 | | Backbone | | Backbone | | Backbone 2015 o | | router | | router | | router 2016 +-----+ +-----+ +-----+ 2017 o o o o o 2018 o o o o o o o o o o o 2019 o o o LLN o o o o 2020 o o o o o o o o o o o o 2022 Figure 8: Extended 6TiSCH Network 2024 The backbone router must ensure end-to-end deterministic behavior 2025 between the LLN and the backbone. This should be accomplished in 2026 conformance with the work done in [I-D.ietf-detnet-architecture] with 2027 respect to Layer-3 aspects of deterministic networks that span 2028 multiple Layer-2 domains. 2030 The PCE must compute a deterministic path end-to-end across the TSCH 2031 network and IEEE802.1 TSN Ethernet backbone, and DetNet protocols are 2032 expected to enable end-to-end deterministic forwarding. 2034 +-----+ 2035 | IoT | 2036 | G/W | 2037 +-----+ 2038 ^ <---- Elimination 2039 | | 2040 Track branch | | 2041 +-------+ +--------+ Subnet Backbone 2042 | | 2043 +--|--+ +--|--+ 2044 | | | Backbone | | | Backbone 2045 o | | | router | | | router 2046 +--/--+ +--|--+ 2047 o / o o---o----/ o 2048 o o---o--/ o o o o o 2049 o \ / o o LLN o 2050 o v <---- Replication 2051 o 2053 Figure 9: 6TiSCH Network with PRE 2055 5.3.1.1. PCE and 6TiSCH ARQ Retries 2057 Note: The possible use of ARQ techniques in DetNet is currently 2058 considered a possible design alternative. 2060 6TiSCH uses the IEEE802.15.4 Automatic Repeat-reQuest (ARQ) mechanism 2061 to provide higher reliability of packet delivery. 
ARQ is related to 2062 packet replication and elimination because there are two independent 2063 paths for packets to arrive at the destination, and if an expected 2064 packet does not arrive on one path then the destination checks for the 2065 packet on the second path. 2067 Although to date this mechanism is only used by wireless networks, 2068 this may be a technique that would be appropriate for DetNet and so 2069 aspects of the enabling protocol could be co-developed. 2071 For example, in Figure 9, a Track is laid out from a field device in 2072 a 6TiSCH network to an IoT gateway that is located on an IEEE802.1 TSN 2073 backbone. 2075 In ARQ the Replication function in the field device sends a copy of 2076 each packet over two different branches, and the PCE schedules each 2077 hop of both branches so that the two copies arrive in due time at the 2078 gateway. In case of a loss on one branch, the other copy 2079 of the packet can still arrive within the allocated time. If two copies 2080 make it to the IoT gateway, the Elimination function in the gateway 2081 ignores the extra packet and presents only one copy to upper layers. 2083 At each 6TiSCH hop along the Track, the PCE may schedule more than 2084 one timeSlot for a packet, so as to support Layer-2 retries (ARQ). 2086 In current deployments, a TSCH Track does not necessarily support PRE 2087 but is systematically multi-path. This means that a Track is 2088 scheduled so as to ensure that each hop has at least two forwarding 2089 solutions, and the forwarding decision is to try the preferred one 2090 and use the other in case of Layer-2 transmission failure as detected 2091 by ARQ. 2093 5.3.2. Schedule Management by a PCE 2095 A common feature of 6TiSCH and DetNet is the action of a PCE to 2096 configure paths through the network.
Specifically, what is needed is 2097 a protocol and data model that the PCE will use to get/set the 2098 relevant configuration from/to the devices, as well as perform 2099 operations on the devices. This protocol should be developed by 2100 DetNet with consideration for its reuse by 6TiSCH. The remainder of 2101 this section provides a bit more context from the 6TiSCH side. 2103 5.3.2.1. PCE Commands and 6TiSCH CoAP Requests 2105 A 6TiSCH device is not expected to place requests for bandwidth 2106 between itself and another device in the network. Rather, an 2107 operation control system invoked through a human interface specifies 2108 the end nodes and the required traffic characteristics (in terms of 2109 latency and reliability). Based on this information, the PCE must 2110 compute a path between the end nodes and provision the network with 2111 per-flow state that describes the per-hop operation for a given 2112 packet, the corresponding timeslots, and the flow identification that 2113 enables recognizing that a certain packet belongs to a certain path, 2114 etc. 2116 For a static configuration that serves a certain purpose for a long 2117 period of time, it is expected that a node will be provisioned in one 2118 shot with a full schedule, which incorporates the aggregation of its 2119 behavior for multiple paths. 6TiSCH expects that the programming of 2120 the schedule will be done over CoAP as discussed in 2121 [I-D.ietf-6tisch-coap]. 2123 6TiSCH expects that the PCE commands will be mapped back and forth 2124 into CoAP by a gateway function at the edge of the 6TiSCH network. 2125 For instance, it is possible that a mapping entity on the backbone 2126 transforms a non-CoAP protocol such as PCEP into the RESTful 2127 interfaces that the 6TiSCH devices support. This architecture will 2128 be refined to comply with DetNet [I-D.ietf-detnet-architecture] when 2129 the work is formalized.
Related information about 6TiSCH can be found at [I-D.ietf-6tisch-6top-interface] and RPL [RFC6550].

A protocol may be used to update the state in the devices during runtime, for example if it appears that a path through the network has ceased to perform as expected, but in 6TiSCH that flow was not designed and no protocol was selected.  DetNet should define the appropriate end-to-end protocols to be used in that case.  The implication is that these state updates take place once the system is configured and running, i.e. they are not limited to the initial communication of the configuration of the system.

A "slotFrame" is the base object that a PCE would manipulate to program a schedule into an LLN node ([I-D.ietf-6tisch-architecture]).

The PCE should read energy data from devices and compute paths that will implement policies on how energy in devices is consumed, for instance to ensure that the spent energy does not exceed the available energy over a period of time.  Note: this statement implies that an extensible protocol for communicating device info to the PCE and enabling the PCE to act on it will be part of the DetNet architecture; however, for subnets with specific protocols (e.g. CoAP) a gateway may be required.

6TiSCH devices can discover their neighbors over the radio using a mechanism such as beacons, but even though the neighbor information is available in the 6TiSCH interface data model, 6TiSCH does not describe a protocol to proactively push the neighborhood information to a PCE.  DetNet should define such a protocol; one possible design alternative is that it could operate over CoAP, or alternatively it could be converted to/from CoAP by a gateway.  Such a protocol could carry multiple metrics, for example similar to those used for RPL operations [RFC6551].

5.3.2.2.  6TiSCH IP Interface

"6top" ([I-D.wang-6tisch-6top-sublayer]) is a logical link control sitting between the IP layer and the TSCH MAC layer which provides the link abstraction that is required for IP operations.  The 6top data model and management interfaces are further discussed in [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].

An IP packet that is sent along a 6TiSCH path uses the Differentiated Services Per-Hop-Behavior Group called Deterministic Forwarding, as described in [I-D.svshah-tsvwg-deterministic-forwarding].

5.3.3.  6TiSCH Security Considerations

On top of the classical requirements for protection of control signaling, it must be noted that 6TiSCH networks operate on limited resources that can be depleted rapidly in a DoS attack on the system, for instance by placing a rogue device in the network, or by obtaining management control and setting up unexpected additional paths.

5.4.  Wireless Industrial Asks

6TiSCH depends on DetNet to define:

o  Configuration (state) and operations for deterministic paths

o  End-to-end protocols for deterministic forwarding (tagging, IP)

o  Protocol for packet replication and elimination

6.  Cellular Radio

6.1.  Use Case Description

This use case describes the application of deterministic networking in the context of cellular telecom transport networks.  Important elements include time synchronization, clock distribution, and ways of establishing time-sensitive streams for both Layer-2 and Layer-3 user plane traffic.

6.1.1.  Network Architecture

Figure 10 illustrates a typical 3GPP-defined cellular network architecture, which includes "Fronthaul", "Midhaul" and "Backhaul" network segments.  The "Fronthaul" is the network connecting base stations (baseband processing units) to the remote radio heads (antennas).
The "Midhaul" is the network inter-connecting base stations (or small cell sites).  The "Backhaul" is the network or links connecting the radio base station sites to the network controller/gateway sites (i.e. the core of the 3GPP cellular network).

In Figure 10 "eNB" ("E-UTRAN Node B") is the hardware that is connected to the mobile phone network and communicates directly with mobile handsets ([TS36300]).

      Y (remote radio heads (antennas))
       \
      Y__  \.--.               .--.       +------+
         \_(    `.    +---+  _(Back`.     | 3GPP |
     Y------( Front )----|eNB|----( Haul )----| core |
           ( ` .Haul )   +---+    ( ` . )  )  | netw |
           /`--(___.-'    \       `--(___.-'  +------+
        Y_/   /          \.--.         \
       Y_/             _(  Mid`.        \
                      (    Haul )        \
                      ( ` .  )   )        \
                       `--(___.-'\_____+---+   (small cell sites)
                                    \  |SCe|__Y
                            +---+      +---+
                        Y__|eNB|__Y
                            +---+
                          Y_/ \_Y  ("local" radios)

     Figure 10: Generic 3GPP-based Cellular Network Architecture

6.1.2.  Delay Constraints

The available processing time for Fronthaul networking overhead is limited to the available time after the baseband processing of the radio frame has completed.  For example, in Long Term Evolution (LTE) radio, processing of a radio frame is allocated 3ms, but typically the processing uses most of it, allowing only a small fraction to be used by the Fronthaul network (e.g. up to 250us one-way delay, though the existing spec ([NGMN-fronth]) supports delay only up to 100us).  This ultimately determines the distance the remote radio heads can be located from the base stations (e.g., 100us equals roughly 20 km of optical fiber-based transport).  Allocation options of the available time budget between processing and transport are under heavy discussion in the mobile industry.

For packet-based transport the allocated transport time (e.g. CPRI would allow for 100us delay [CPRI]) is consumed by all nodes and buffering between the remote radio head and the baseband processing unit, plus the distance-incurred delay.

The baseband processing time and the available "delay budget" for the fronthaul is likely to change in the forthcoming "5G" due to reduced radio round trip times and other architectural and service requirements [NGMN].

The transport time budget, as noted above, places limitations on the distance that remote radio heads can be located from base stations (i.e. the link length).  In the above analysis, the entire transport time budget is assumed to be available for link propagation delay.  However, the transport time budget can be broken down into three components: scheduling/queuing delay, transmission delay, and link propagation delay.  Using today's Fronthaul networking technology, the queuing, scheduling and transmission components might become the dominant factors in the total transport time rather than the link propagation delay.  This is especially true in cases where the Fronthaul link is relatively short and is shared among multiple Fronthaul flows, for example in indoor and small cell networks, massive MIMO antenna networks, and split Fronthaul architectures.

DetNet technology can improve this application by controlling and reducing the time required for the queuing, scheduling and transmission operations by properly assigning the network resources, thus leaving more of the transport time budget available for link propagation, and thus enabling longer link lengths.  However, link length is usually a given parameter and is not a controllable network parameter, since RRH and BBU sites are usually located in predetermined locations.
However, the number of antennas at an RRH site might increase, for example by adding more antennas, increasing the MIMO capability of the network, or supporting massive MIMO.  This means increasing the number of fronthaul flows sharing the same fronthaul link.  DetNet can then control the bandwidth assignment of the fronthaul link and the scheduling of fronthaul packets over this link and provide adequate buffer provisioning for each flow to reduce the packet loss rate.

Another way in which DetNet technology can aid Fronthaul networks is by providing effective isolation from best-effort (and other classes of) traffic, which can arise as a result of network slicing in 5G networks where Fronthaul traffic generated in different network slices might have differing performance requirements.  DetNet technology can also dynamically control the bandwidth assignment, scheduling and packet forwarding decisions and the buffer provisioning of the Fronthaul flows to guarantee the end-to-end delay of the Fronthaul packets and minimize the packet loss rate.

[METIS] documents the fundamental challenges as well as overall technical goals of the future 5G mobile and wireless system as the starting point.  These future systems should support much higher data volumes and rates and significantly lower end-to-end latency for 100x more connected devices (at similar cost and energy consumption levels as today's system).

For Midhaul connections, delay constraints are driven by Inter-Site radio functions like Coordinated Multipoint Processing (CoMP, see [CoMP]).  CoMP reception and transmission is a framework in which multiple geographically distributed antenna nodes cooperate to improve the performance of the users served in the common cooperation area.  The design principle of CoMP is to extend the current single-cell to multi-UE (User Equipment) transmission to a multi-cell-to-multi-UE transmission by base station cooperation.

CoMP has delay-sensitive performance parameters, which are "midhaul latency" and "CSI (Channel State Information) reporting and accuracy".  The essential feature of CoMP is signaling between eNBs, so Midhaul latency is the dominating limitation of CoMP performance.  Generally, CoMP can benefit from coordinated scheduling (either distributed or centralized) of different cells if the signaling delay between eNBs is within 1-10ms.  This delay requirement is both rigid and absolute because any uncertainty in delay will degrade the performance significantly.

Inter-site CoMP is one of the key requirements for 5G and is also a near-term goal for the current 4.5G network architecture.

6.1.3.  Time Synchronization Constraints

Fronthaul time synchronization requirements are given by [TS25104], [TS36104], [TS36211], and [TS36133].  These can be summarized for the current 3GPP LTE-based networks as:

Delay Accuracy:
   +-8ns (i.e. +-1/32 Tc, where Tc is the UMTS Chip time of 1/3.84 MHz) resulting in a round trip accuracy of +-16ns.  The value is this low in order to meet the 3GPP Timing Alignment Error (TAE) measurement requirements.  Note: performance guarantees of low nanosecond values such as these are considered to be below the DetNet layer - it is assumed that the underlying implementation, e.g. the hardware, will provide sufficient support (e.g. buffering) to enable this level of accuracy.  These values are maintained in the use case to give an indication of the overall application.

Timing Alignment Error:
   Timing Alignment Error (TAE) is problematic to Fronthaul networks and must be minimized.
If the transport network cannot guarantee low enough TAE then additional buffering has to be introduced at the edges of the network to buffer out the jitter.  Buffering is not desirable as it reduces the total available delay budget.  Packet Delay Variation (PDV) requirements can be derived from TAE for packet-based Fronthaul networks.

   *  For multiple input multiple output (MIMO) or TX diversity transmissions, at each carrier frequency, TAE shall not exceed 65 ns (i.e. 1/4 Tc).

   *  For intra-band contiguous carrier aggregation, with or without MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2 Tc).

   *  For intra-band non-contiguous carrier aggregation, with or without MIMO or TX diversity, TAE shall not exceed 260 ns (i.e. one Tc).

   *  For inter-band carrier aggregation, with or without MIMO or TX diversity, TAE shall not exceed 260 ns.

Transport link contribution to radio frequency error:
   +-2 PPB.  This value is considered to be "available" for the Fronthaul link out of the total 50 PPB budget reserved for the radio interface.  Note: the transport link contributes to radio frequency error because the current way of doing Fronthaul is from the radio unit to the remote radio head directly.  The remote radio head is essentially a passive device (without buffering, etc.); the transport drives the antenna directly by feeding it with samples, and everything the transport adds is introduced to the radio as-is.  So if the transport causes additional frequency error, that error shows up immediately on the radio as well.  Note: performance guarantees of low nanosecond values such as these are considered to be below the DetNet layer - it is assumed that the underlying implementation, e.g. the hardware, will provide sufficient support to enable this level of performance.  These values are maintained in the use case to give an indication of the overall application.

The above listed time synchronization requirements are difficult to meet with point-to-point connected networks, and more difficult when the network includes multiple hops.  It is expected that networks must include buffering at the ends of the connections as imposed by the jitter requirements, since trying to meet the jitter requirements in every intermediate node is likely to be too costly.  However, every measure to reduce jitter and delay on the path makes it easier to meet the end-to-end requirements.

In order to meet the timing requirements both senders and receivers must remain time synchronized, demanding very accurate clock distribution, for example support for IEEE 1588 transparent clocks or boundary clocks in every intermediate node.

In cellular networks from the LTE radio era onward, phase synchronization is needed in addition to frequency synchronization ([TS36300], [TS23401]).  Time constraints are also important due to their impact on packet loss.  If a packet is delivered too late, then the packet may be dropped by the host.

6.1.4.  Transport Loss Constraints

Fronthaul and Midhaul networks assume almost error-free transport.  Errors can result in a reset of the radio interfaces, which can cause reduced throughput or broken radio connectivity for mobile customers.

For packetized Fronthaul and Midhaul connections packet loss may be caused by BER, congestion, or network failure scenarios.  Different fronthaul functional splits are being considered by 3GPP, requiring strict frame loss ratio (FLR) guarantees.  As one example (referring to the legacy CPRI split which is option 8 in 3GPP), lower layer splits may imply an FLR of less than 10E-7 for data traffic and less than 10E-6 for control and management traffic.
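The relationship between residual bit errors and frame loss can be illustrated with a short calculation.  The sketch below is a hedged illustration assuming independent bit errors and no FEC; the 1500-byte frame size and the BER values are illustrative assumptions, not taken from 3GPP.

```python
def flr_from_ber(ber, frame_bytes):
    """Frame loss ratio under independent bit errors and no FEC:
    probability that at least one bit of the frame is corrupted.
    Illustrative assumption, not a 3GPP-specified model."""
    bits = 8 * frame_bytes
    return 1.0 - (1.0 - ber) ** bits

# A residual BER of 1e-12 on a 1500-byte frame gives an FLR of about
# 1.2e-8, within the example 10E-7 data-traffic target; a BER of 1e-10
# already exceeds it, which is why congestion and bit errors both matter.
print(flr_from_ber(1e-12, 1500))
print(flr_from_ber(1e-10, 1500))
```

For small BER the result is close to BER times the frame length in bits, so the FLR target effectively caps the tolerable residual BER for a given frame size.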
Current tools for eliminating packet loss for Fronthaul and Midhaul networks have serious challenges; for example, retransmitting lost packets and/or using forward error correction (FEC) to circumvent bit errors is practically impossible due to the additional delay incurred.  Using redundant streams for better delivery guarantees is also practically impossible in many cases due to the high bandwidth requirements of Fronthaul and Midhaul networks.  Protection switching is also a candidate, but current technologies for the path switch are too slow to avoid reset of mobile interfaces.

Fronthaul links are assumed to be symmetric, and all Fronthaul streams (i.e. those carrying radio data) have equal priority and cannot delay or pre-empt each other.  This implies that the network must guarantee that each time-sensitive flow meets its schedule.

6.1.5.  Security Considerations

Establishing time-sensitive streams in the network entails reserving networking resources for long periods of time.  It is important that these reservation requests be authenticated to prevent malicious reservation attempts from hostile nodes (or accidental misconfiguration).  This is particularly important in the case where the reservation requests span administrative domains.  Furthermore, the reservation information itself should be digitally signed to reduce the risk of a legitimate node pushing a stale or hostile configuration into another networking node.

Note: This is considered important for the security policy of the network, but does not affect the core DetNet architecture and design.

6.2.  Cellular Radio Networks Today

6.2.1.  Fronthaul

Today's Fronthaul networks typically consist of:

o  Dedicated point-to-point fiber connections

o  Proprietary protocols and framings

o  Custom equipment and no real networking

Current solutions for Fronthaul are direct optical cables or Wavelength-Division Multiplexing (WDM) connections.

6.2.2.  Midhaul and Backhaul

Today's Midhaul and Backhaul networks typically consist of:

o  Mostly normal IP networks, MPLS-TP, etc.

o  Clock distribution and sync using 1588 and SyncE

Telecommunication networks in the Mid- and Backhaul are already heading towards transport networks where precise time synchronization support is one of the basic building blocks.  While the transport networks themselves have practically transitioned to all-IP packet-based networks to meet the bandwidth and cost requirements, highly accurate clock distribution has become a challenge.

In the past, Mid- and Backhaul connections were typically based on Time Division Multiplexing (TDM) and provided frequency synchronization capabilities as a part of the transport media.  Alternatively, other technologies such as the Global Positioning System (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].

Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985] for legacy transport support) have become popular tools to build and manage new all-IP Radio Access Networks (RANs) [I-D.kh-spring-ip-ran-use-case].  Although various timing and synchronization optimizations have already been proposed and implemented, including 1588 PTP enhancements [I-D.ietf-tictoc-1588overmpls] and [RFC8169], these solutions are not necessarily sufficient for the forthcoming RAN architectures nor do they guarantee the more stringent time-synchronization requirements such as [CPRI].
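The clock-distribution challenge above rests on the two-way time-transfer arithmetic that underlies IEEE 1588 PTP.  The sketch below is a minimal illustration of that arithmetic, not an implementation of the PTP protocol; the timestamp values are made-up.  It also shows why the symmetric-link assumption noted in Section 6.1.4 matters: any forward/reverse path asymmetry biases the offset estimate by half the asymmetry.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic two-way time-transfer estimate used by IEEE 1588 PTP.
    t1: master send time (Sync), t2: slave receive time,
    t3: slave send time (Delay_Req), t4: master receive time.
    Assumes the forward and reverse path delays are equal."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Symmetric case (made-up values, ns): true offset 100, both path
# delays 50000 -> the estimate recovers offset and delay exactly.
print(ptp_offset_and_delay(0.0, 50100.0, 60000.0, 109900.0))  # (100.0, 50000.0)

# Asymmetric case: reverse path is 10000 ns shorter, so the offset
# estimate is biased by half the asymmetry (5000 ns).
print(ptp_offset_and_delay(0.0, 50100.0, 60000.0, 99900.0))   # (5100.0, 45000.0)
```

This is why the draft emphasizes transparent or boundary clocks in every intermediate node: they remove queuing-induced, per-packet asymmetry that this simple estimate cannot distinguish from clock offset.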
There are also existing solutions for TDM over IP such as [RFC4553], [RFC5086], and [RFC5087], as well as TDM over Ethernet transports such as [MEF8].

6.3.  Cellular Radio Networks Future

Future Cellular Radio Networks will be based on a mix of different xHaul networks (xHaul = front-, mid- and backhaul), and future transport networks should be able to support all of them simultaneously.  It is already envisioned today that:

o  Not all "cellular radio network" traffic will be IP, for example some will remain at Layer 2 (e.g. Ethernet based).  DetNet solutions must address all traffic types (Layer 2, Layer 3) with the same tools and allow their transport simultaneously.

o  All forms of xHaul networks will need some form of DetNet solutions.  For example, with the advent of 5G some Backhaul traffic will also have DetNet requirements, for example traffic belonging to time-critical 5G applications.

o  Different splits of the functionality run on the base stations and the on-site units could co-exist on the same Fronthaul and Backhaul network.

Future Cellular Radio networks should contain the following:

o  Unified standards-based transport protocols and standard networking equipment that can make use of underlying deterministic link-layer services

o  Unified and standards-based network management systems and protocols in all parts of the network (including Fronthaul)

New radio access network deployment models and architectures may require time-sensitive networking services with strict requirements on other parts of the network that previously were not considered to be packetized at all.  Time and synchronization support are already topical for Backhaul and Midhaul packet networks [MEF22.1.1] and are becoming a real issue for Fronthaul networks also.  Specifically, in Fronthaul networks the timing and synchronization requirements can be extreme for packet-based technologies, for example on the order of sub +-20 ns packet delay variation (PDV) and frequency accuracy of +0.002 PPM [Fronthaul].

The actual transport protocols and/or solutions to establish the required transport "circuits" (pinned-down paths) for Fronthaul traffic are still undefined.  Those are likely to include (but are not limited to) solutions directly over Ethernet, over IP, and using MPLS/PseudoWire transport.

Even the current time-sensitive networking features may not be sufficient for Fronthaul traffic.  Therefore, having specific profiles that take the requirements of Fronthaul into account is desirable [IEEE8021CM].

Interesting and important work for time-sensitive networking has been done for Ethernet [TSNTG], which specifies the use of IEEE 1588 time precision protocol (PTP) [IEEE1588] in the context of IEEE 802.1D and IEEE 802.1Q.  [IEEE8021AS] specifies a Layer 2 time synchronizing service, and other specifications such as IEEE 1722 [IEEE1722] specify Ethernet-based Layer-2 transport for time-sensitive streams.

New promising work seeks to enable the transport of time-sensitive fronthaul streams in Ethernet bridged networks [IEEE8021CM].  Analogous to IEEE 1722, there is an ongoing standardization effort to define the Layer-2 transport encapsulation format for transporting radio over Ethernet (RoE) in the IEEE 1904.3 Task Force [IEEE19143].

As mentioned in Section 6.1.2, 5G communications will provide one of the most challenging cases for delay-sensitive networking.
In order to meet the challenges of ultra-low latency and ultra-high throughput, 3GPP has studied various "functional splits" for 5G, i.e., physical decomposition of the gNodeB base station and deployment of its functional blocks in different locations [TR38801].

These splits are numbered from split option 1 (Dual Connectivity, a split in which the radio resource control is centralized and other radio stack layers are in distributed units) to split option 8 (a PHY-RF split in which RF functionality is in a distributed unit and the rest of the radio stack is in the centralized unit), with each intermediate split having its own data rate and delay requirements.  Packetized versions of different splits have recently been proposed, including eCPRI [eCPRI] and RoE (as previously noted).  Both provide Ethernet encapsulations, and eCPRI is also capable of IP encapsulation.

All-IP RANs and xHaul networks would benefit from time synchronization and time-sensitive transport services.  Although Ethernet appears to be the unifying technology for the transport, there is still a disconnect in providing Layer 3 services.  The protocol stack typically has a number of layers below the Ethernet Layer 2 that shows up to the Layer 3 IP transport.  It is not uncommon that on top of the lowest layer (optical) transport there is a first layer of Ethernet followed by one or more layers of MPLS, PseudoWires and/or other tunneling protocols, finally carrying the Ethernet layer visible to the user plane IP traffic.
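The transport time budget decomposition described in Section 6.1.2 (scheduling/queuing delay, transmission delay, and link propagation delay) can be sketched as a short calculation.  This is a hedged illustration; the per-hop queuing figure, frame size and link rate below are assumed values, not taken from any specification.

```python
# ~5 us/km one-way in optical fiber, consistent with the draft's
# "100us equals roughly 20 km" rule of thumb (Section 6.1.2).
FIBER_DELAY_US_PER_KM = 5.0

def max_link_length_km(budget_us, hops, per_hop_queuing_us,
                       frame_bits, link_rate_bps):
    """Fiber length reachable with whatever propagation delay is left
    after per-hop queuing/scheduling and store-and-forward transmission
    are subtracted from the one-way transport budget (a sketch)."""
    tx_us = hops * (frame_bits / link_rate_bps) * 1e6
    propagation_us = budget_us - tx_us - hops * per_hop_queuing_us
    return max(propagation_us, 0.0) / FIBER_DELAY_US_PER_KM

# With no intermediate hops, the whole 100us budget is propagation:
print(max_link_length_km(100.0, 0, 0.0, 0, 1e9))            # 20.0 km

# Two switching hops, each adding an assumed 10us of queuing plus the
# serialization of an assumed 12000-bit frame at 10 Gbps, shrink the
# reachable distance noticeably.
print(max_link_length_km(100.0, 2, 10.0, 12000, 1e10))
```

This is the arithmetic behind the draft's observation that reducing queuing and scheduling delay (as DetNet aims to) leaves more of a fixed budget for propagation, and hence permits longer links.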
While there are existing technologies to establish circuits through the routed and switched networks (especially in the MPLS/PWE space), there is still no way to signal the time synchronization and time-sensitive stream requirements/reservations for Layer-3 flows in a way that addresses the entire transport stack, including the Ethernet layers that need to be configured.

Furthermore, not all "user plane" traffic will be IP.  Therefore, the same solution must also address the use cases where the user plane traffic is at a different layer, for example Ethernet frames.

There is existing work describing the problem statement [I-D.ietf-detnet-problem-statement] and the architecture [I-D.ietf-detnet-architecture] for deterministic networking (DetNet) that targets solutions for time-sensitive (IP/transport) streams with deterministic properties over Ethernet-based switched networks.

6.4.  Cellular Radio Networks Asks

A standard for data plane transport specification which is:

o  Unified among all xHauls (meaning that different flows with diverse DetNet requirements can coexist in the same network and traverse the same nodes without interfering with each other)

o  Deployed in a highly deterministic network environment

o  Capable of supporting multiple functional splits simultaneously, including existing Backhaul and CPRI Fronthaul and potentially new modes as defined for example in 3GPP; these goals can be supported by the existing DetNet Use Case Common Themes, notably "Mix of Deterministic and Best-Effort Traffic", "Bounded Latency", "Low Latency", "Symmetrical Path Delays", and "Deterministic Flows".

o  Capable of supporting Network Slicing and Multi-tenancy; these goals can be supported by the same DetNet themes noted above.

o  Capable of transporting both in-band and out-of-band control traffic (OAM info, ...).
o  Deployable over multiple data link technologies (e.g., IEEE 802.3, mmWave, etc.).

A standard for data flow information models that are:

o  Aware of the time sensitivity and constraints of the target networking environment

o  Aware of underlying deterministic networking services (e.g., on the Ethernet layer)

7.  Industrial M2M

7.1.  Use Case Description

Industrial Automation in general refers to automation of manufacturing, quality control and material processing.  This "machine to machine" (M2M) use case considers machine units in a plant floor which periodically exchange data with upstream or downstream machine modules and/or a supervisory controller within a local area network.

The actors of M2M communication are Programmable Logic Controllers (PLCs).  Communication between PLCs, and between PLCs and the supervisory PLC (S-PLC), is achieved via critical control/data streams (Figure 11).

     S (Sensor)
      \                                    +-----+
     PLC__   \.--.             .--.     ---| MES |
          \_(    `.         _(    `./      +-----+
     A------( Local )-------------(  L2  )
           (   Net  )            (  Net  )     +-------+
           /`--(___.-'           `--(___.-' ----| S-PLC |
        S_/  /  PLC     .--.  /                +-------+
       A_/        \_(       `.
    (Actuator)    (  Local    )
                  (   Net     )
                  /`--(___.-'\
                 /            \ A
                S               A

     Figure 11: Current Generic Industrial M2M Network Architecture

This use case focuses on PLC-related communications; communication to Manufacturing-Execution-Systems (MESs) is not addressed.

This use case covers only critical control/data streams; non-critical traffic between industrial automation applications (such as communication of state, configuration, set-up, and database communication) is adequately served by currently available prioritizing techniques.  Such traffic can use up to 80% of the total bandwidth required.
There is also a subset of non-time-critical 2694 traffic that must be reliable even though it is not time sensitive. 2696 In this use case the primary need for deterministic networking is to 2697 provide end-to-end delivery of M2M messages within specific timing 2698 constraints, for example in closed loop automation control. Today 2699 this level of determinism is provided by proprietary networking 2700 technologies. In addition, standard networking technologies are used 2701 to connect the local network to remote industrial automation sites, 2702 e.g. over an enterprise or metro network which also carries other 2703 types of traffic. Therefore, flows that should be forwarded with 2704 deterministic guarantees need to be sustained regardless of the 2705 amount of other flows in those networks. 2707 7.2. Industrial M2M Communication Today 2709 Today, proprietary networks fulfill the needed timing and 2710 availability for M2M networks. 2712 The network topologies used today by industrial automation are 2713 similar to those used by telecom networks: Daisy Chain, Ring, Hub and 2714 Spoke, and Comb (a subset of Daisy Chain). 2716 PLC-related control/data streams are transmitted periodically and 2717 carry either a pre-configured payload or a payload configured during 2718 runtime. 2720 Some industrial applications require time synchronization at the end 2721 nodes. For such time-coordinated PLCs, accuracy of 1 microsecond is 2722 required. Even in the case of "non-time-coordinated" PLCs time sync 2723 may be needed e.g. for timestamping of sensor data. 2725 Industrial network scenarios require advanced security solutions. 2726 Many of the current industrial production networks are physically 2727 separated. Preventing critical flows from be leaked outside a domain 2728 is handled today by filtering policies that are typically enforced in 2729 firewalls. 2731 7.2.1. 
Transport Parameters 2733 The Cycle Time defines the frequency of message(s) between industrial 2734 actors. The Cycle Time is application dependent, in the range of 1ms 2735 - 100ms for critical control/data streams. 2737 Because industrial applications assume deterministic transport for 2738 critical Control-Data-Stream parameters (instead of defining latency 2739 and delay variation parameters) it is sufficient to fulfill the upper 2740 bound of latency (maximum latency). The underlying networking 2741 infrastructure must ensure a maximum end-to-end delivery time of 2742 messages in the range of 100 microseconds to 50 milliseconds 2743 depending on the control loop application. 2745 The bandwidth requirements of control/data streams are usually 2746 calculated directly from the bytes-per-cycle parameter of the control 2747 loop. For PLC-to-PLC communication one can expect 2 - 32 streams 2748 with packet size in the range of 100 - 700 bytes. For S-PLC to PLCs 2749 the number of streams is higher - up to 256 streams. Usually no more 2750 than 20% of available bandwidth is used for critical control/data 2751 streams. In today's networks 1Gbps links are commonly used. 2753 Most PLC control loops are rather tolerant of packet loss, however 2754 critical control/data streams accept no more than 1 packet loss per 2755 consecutive communication cycle (i.e. if a packet gets lost in cycle 2756 "n", then the next cycle ("n+1") must be lossless). After two or 2757 more consecutive packet losses the network may be considered to be 2758 "down" by the Application. 2760 As network downtime may impact the whole production system the 2761 required network availability is rather high (99,999%). 2763 Based on the above parameters some form of redundancy will be 2764 required for M2M communications, however any individual solution 2765 depends on several parameters including cycle time, delivery time, 2766 etc. 2768 7.2.2. 
Stream Creation and Destruction 2770 In an industrial environment, critical control/data streams are 2771 created rather infrequently, on the order of ~10 times per day / week 2772 / month. Most of these critical control/data streams get created at 2773 machine startup, however flexibility is also needed during runtime, 2774 for example when adding or removing a machine. Going forward as 2775 production systems become more flexible, there will be a significant 2776 increase in the rate at which streams are created, changed and 2777 destroyed. 2779 7.3. Industrial M2M Future 2781 The future of industrial M2M communication is a converged 2782 IP-standards-based network with deterministic properties that can 2783 satisfy the timing, security and reliability constraints described 2784 above. Today's proprietary networks could then be interfaced to such 2785 a network via gateways or, in the case of new installations, devices 2786 could be connected directly to the converged network. 2788 For this use case time synchronization accuracy on the order of 1us 2789 is expected. 2791 7.4. Industrial M2M Asks 2793 o Converged IP-based network 2795 o Deterministic behavior (bounded latency and jitter) 2797 o High availability (presumably through redundancy) (99.999 %) 2799 o Low message delivery time (100us - 50ms) 2801 o Low packet loss (burstless, 0.1-1 %) 2803 o Security (e.g. prevent critical flows from being leaked between 2804 physically separated networks) 2806 8. Mining Industry 2808 8.1. Use Case Description 2810 The mining industry is highly dependent on networks to monitor and 2811 control their systems both in open-pit and underground extraction, 2812 transport and refining processes. In order to reduce risks and 2813 increase operational efficiency in mining operations, a number of 2814 processes have migrated the operators from the extraction site to 2815 remote control and monitoring.
2817 In the case of open pit mining, autonomous trucks are used to 2818 transport the raw materials from the open pit to the refining factory 2819 where the final product (e.g. Copper) is obtained. Although the 2820 operation is autonomous, the trucks are remotely monitored from a 2821 central facility. 2823 In pit mines, the monitoring of the tailings or mine dumps is 2824 critical in order to avoid any environmental pollution. In the past, 2825 monitoring has been conducted through manual inspection of 2826 pre-installed dataloggers. Cabling is not usually used in such 2827 scenarios due to its cost and complex deployment requirements. 2828 Currently, wireless technologies are being employed to monitor these 2829 sites continuously. Slopes are also monitored in order to anticipate 2830 possible mine collapse. Due to the unstable terrain, cable 2831 maintenance is costly and complex and hence wireless technologies are 2832 employed. 2834 In the underground monitoring case, autonomous vehicles with 2835 extraction tools travel autonomously through the tunnels, but their 2836 operational tasks (such as excavation, stone breaking and transport) 2837 are controlled remotely from a central facility. This generates 2838 video and feedback upstream traffic plus downstream actuator control 2839 traffic. 2841 8.2. Mining Industry Today 2843 Currently the mining industry uses a packet switched architecture 2844 supported by high speed ethernet. However in order to achieve the 2845 delay and packet loss requirements the network bandwidth is 2846 overestimated, thus providing very low efficiency in terms of 2847 resource usage. 2849 QoS is implemented at the Routers to separate video, management, 2850 monitoring and process control traffic for each stream. 2852 Since mobility is involved in this process, the connection between 2853 the backbone and the mobile devices (e.g. trucks, trains and 2854 excavators) is solved using a wireless link.
These links are based 2855 on 802.11 for open-pit mining and leaky feeder for underground 2856 mining. 2858 Lately in pit mines the use of LPWAN technologies has been extended: 2859 Tailings, slopes and mine dumps are monitored by battery-powered 2860 dataloggers that make use of robust long range radio technologies. 2861 Reliability is usually ensured through retransmissions at L2. 2862 Gateways or concentrators act as bridges forwarding the data to the 2863 backbone ethernet network. Deterministic requirements are biased 2864 towards reliability rather than latency as events are slowly 2865 triggered or can be anticipated in advance. 2867 At the mineral processing stage, conveyor belts and refining 2868 processes are controlled by a SCADA system, which provides the in- 2869 factory delay-constrained networking requirements. 2871 Voice communications are currently served by a redundant trunking 2872 infrastructure, independent from current data networks. 2874 8.3. Mining Industry Future 2876 Mining operations and management are currently converging towards a 2877 combination of autonomous operation and teleoperation of transport 2878 and extraction machines. This means that video, audio, monitoring 2879 and process control traffic will increase dramatically. Ideally, all 2880 activities on the mine will rely on network infrastructure. 2882 Wireless for open-pit mining is already a reality with LPWAN 2883 technologies and it is expected to evolve to more advanced LPWAN 2884 technologies such as those based on LTE to increase last hop 2885 reliability or novel LPWAN flavours with deterministic access. 2887 One area in which DetNet can improve this use case is in the wired 2888 networks that make up the "backbone network" of the system, which 2889 connect together many wireless access points (APs). The mobile 2890 machines (which are connected to the network via wireless) transition 2891 from one AP to the next as they move about. 
A deterministic, 2892 reliable, low latency backbone can enable these transitions to be 2893 more reliable. 2895 Connections which extend all the way from the base stations to the 2896 machinery via a mix of wired and wireless hops would also be 2897 beneficial, for example to improve remote control responsiveness of 2898 digging machines. However to guarantee deterministic performance of 2899 a DetNet, the end-to-end underlying network must be deterministic. 2900 Thus for this use case if a deterministic wireless transport is 2901 integrated with a wire-based DetNet network, it could create the 2902 desired wired plus wireless end-to-end deterministic network. 2904 8.4. Mining Industry Asks 2906 o Improved bandwidth efficiency 2908 o Very low delay to enable machine teleoperation 2910 o Dedicated bandwidth usage for high resolution video streams 2912 o Predictable delay to enable realtime monitoring 2914 o Potential to construct a unified DetNet network over a combination 2915 of wired and deterministic wireless links 2917 9. Private Blockchain 2919 9.1. Use Case Description 2921 Blockchain was created with bitcoin, as a 'public' blockchain on the 2922 open Internet; however, blockchain has also spread far beyond its 2923 original host into various industries such as smart manufacturing, 2924 logistics, security, legal rights and others. In these industries 2925 blockchain runs in a designated and carefully managed network in 2926 which deterministic networking requirements could be addressed by 2927 DetNet. Such implementations are referred to as 'private' 2928 blockchain. 2929 The sole distinction between public and private blockchain is related 2930 to who is allowed to participate in the network, execute the 2931 consensus protocol and maintain the shared ledger.
2933 Today's networks treat the traffic from blockchain on a best-effort 2934 basis, but blockchain operation could be made much more efficient if 2935 deterministic networking service were available to minimize latency 2936 and packet loss in the network. 2938 9.1.1. Blockchain Operation 2940 A 'block' acts as a container for a batch of primary items such as 2941 transactions, property records etc. The blocks are chained in such a 2942 way that the hash of the previous block works as the pointer header 2943 of the new block, where confirmation of each block requires a 2944 consensus mechanism. When an item arrives at a blockchain node, the 2945 node broadcasts the item to the rest of the nodes, which receive it, 2946 verify it, and put it in the ongoing block. The block confirmation 2947 process begins when the number of items reaches the predefined block 2948 capacity; the node then broadcasts its proved block to the rest of 2949 the nodes to be verified and chained. 2951 9.1.2. Blockchain Network Architecture 2953 Blockchain node communication and coordination is achieved mainly 2954 through frequent point to multi-point communication, however 2955 persistent point-to-point connections are used to transport both the 2956 items and the blocks to the other nodes. 2958 When a node initializes, it first requests the other nodes' addresses 2959 from a specific entity such as DNS, then it creates a persistent 2960 connection with each of the other nodes. If node A confirms an item, 2961 it sends the item to the other nodes via the persistent connections. 2963 As a new block in a node completes and gets proved among the nodes, 2964 the node starts propagating this block towards its neighbor nodes. 2965 Assume node A receives a block; after verification it sends an invite 2966 message to its neighbor B. B checks whether the designated block is 2967 already available; if it is unavailable, B responds to A with a get 2968 message, and A sends the complete block to B.
B then repeats the process, as A did, to start the next 2969 round of block propagation. 2971 The challenge of blockchain network operation is not overall data 2972 rates, since the volume from both blocks and items stays between 2973 hundreds of bytes and a couple of megabytes per second, but is in 2974 transporting the blocks with minimum latency to maximize efficiency 2975 of the blockchain consensus process. 2977 9.1.3. Security Considerations 2979 Security is crucial to blockchain applications, and today blockchain 2980 addresses its security issues mainly at the application level, where 2981 cryptography as well as hash-based consensus play a leading role in 2982 preventing both double-spending and malicious service attacks. 2983 However, there is concern that in the proposed use case of a private 2984 blockchain network which is dependent on deterministic properties, 2985 the network could be vulnerable to delays and other specific attacks 2986 against determinism which could interrupt service. 2988 9.2. Private Blockchain Today 2990 Today a private blockchain runs in an L2 or L3 VPN, in general 2991 without guaranteed determinism. The industry players are starting to 2992 realize that improving determinism in their blockchain networks could 2993 improve the performance of their service, but as of today these goals 2994 are not being met. 2996 9.3. Private Blockchain Future 2998 Blockchain system performance can be greatly improved through 2999 deterministic networking service primarily because it would 3000 accelerate the consensus process. It would be valuable to be able to 3001 design a private blockchain network with the following properties: 3003 o Transport of point to multi-point traffic in a coordinated network 3004 architecture rather than at the application layer (which typically 3005 uses point-to-point connections) 3007 o Guaranteed transport latency 3009 o Reduced packet loss (to the point where packet 3010 retransmission-incurred delay would be negligible.)
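The invite/get block propagation exchange described in Section 9.1.2 can be sketched in executable form. The following Python outline is purely illustrative (the Node class, the in-memory "connections" modeled as method calls, and the block identifiers are assumptions for this sketch, not part of any actual blockchain implementation):

```python
# Sketch of Section 9.1.2 block propagation: a node that receives a
# new block verifies it, chains it, and "invites" each neighbor; a
# neighbor that lacks the block answers with "get" and receives it.

class Node:
    def __init__(self, name):
        self.name = name
        self.blocks = set()    # block IDs this node has verified and chained
        self.neighbors = []    # persistent point-to-point connections

    def wants(self, block_id):
        """Neighbor's answer to an invite: 'get' only if block is missing."""
        return block_id not in self.blocks

    def receive_block(self, block_id):
        """Verify and chain a block, then propagate it to neighbors."""
        if block_id in self.blocks:
            return                    # already chained; nothing to do
        self.blocks.add(block_id)     # verification assumed to succeed
        for neighbor in self.neighbors:
            if neighbor.wants(block_id):          # invite -> get
                neighbor.receive_block(block_id)  # send the complete block

# Three nodes connected in a line: A - B - C
a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]

a.receive_block("block-1")   # A proves a block and starts propagation
```

Note that the duplicate check (a node that already holds the block declines the invite) is what stops each round of propagation from looping back through the persistent connections.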
3012 9.4. Private Blockchain Asks 3014 o Layer 2 and Layer 3 multicast of blockchain traffic 3016 o Item and block delivery with bounded, low latency and negligible 3017 packet loss 3019 o Coexistence in a single network of blockchain and IT traffic. 3021 o Ability to scale the network by distributing the centralized 3022 control of the network across multiple control entities. 3024 10. Network Slicing 3026 10.1. Use Case Description 3028 Network Slicing divides one physical network infrastructure into 3029 multiple logical networks. Each slice, corresponding to a logical 3030 network, uses resources and network functions independently from each 3031 other. Network Slicing provides flexibility of resource allocation 3032 and service quality customization. 3034 Future services will demand network performance with a wide variety 3035 of characteristics such as high data rate, low latency, low loss 3036 rate, security and many other parameters. Ideally every service 3037 would have its own physical network satisfying its particular 3038 performance requirements, however that would be prohibitively 3039 expensive. Network Slicing can provide a customized slice for a 3040 single service, and multiple slices can share the same physical 3041 network. This method can optimize the performance for the service at 3042 lower cost, and the flexibility of setting up and releasing slices 3043 also allows the user to allocate the network resources dynamically. 3045 Unlike the other use cases presented here, Network Slicing is not a 3046 specific application that depends on specific deterministic 3047 properties; rather it is introduced as an area of networking to which 3048 DetNet might be applicable. 3050 10.2. DetNet Applied to Network Slicing 3052 10.2.1. Resource Isolation Across Slices 3054 One of the requirements discussed for Network Slicing is the "hard" 3055 separation of various users' deterministic performance.
That is, it 3056 should be impossible for activity, lack of activity, or changes in 3057 activity of one or more users to have any appreciable effect on the 3058 deterministic performance parameters of any other slices. Typical 3059 techniques used today, which share a physical network among users, do 3060 not offer this level of isolation. DetNet can supply point-to-point 3061 or point-to-multipoint paths that offer bandwidth and latency 3062 guarantees to a user that cannot be affected by other users' data 3063 traffic. Thus DetNet is a powerful tool when latency and reliability 3064 are required in Network Slicing. 3066 10.2.2. Deterministic Services Within Slices 3068 Slices may need to provide services with DetNet-type performance 3069 guarantees, however note that a system can be implemented to provide 3070 such services in more than one way. For example the slice itself 3071 might be implemented using DetNet, and thus the slice can provide 3072 service guarantees and isolation to its users without any particular 3073 DetNet awareness on the part of the users' applications. 3074 Alternatively, a "non-DetNet-aware" slice may host an application 3075 that itself implements DetNet services and thus can enjoy similar 3076 service guarantees. 3078 10.3. A Network Slicing Use Case Example - 5G Bearer Network 3080 Network Slicing is a core feature of 5G defined in 3GPP, which is 3081 currently under development. A network slice in a mobile network is 3082 a complete logical network including Radio Access Network (RAN) and 3083 Core Network (CN). It provides telecommunication services and 3084 network capabilities, which may vary from slice to slice. A 5G 3085 bearer network is a typical use case of Network Slicing; for example 3086 consider three 5G service scenarios: eMBB, URLLC, and mMTC.
3088 o eMBB (Enhanced Mobile Broadband) focuses on services characterized 3089 by high data rates, such as high definition videos, virtual 3090 reality, augmented reality, and fixed mobile convergence. 3092 o URLLC (Ultra-Reliable and Low Latency Communications) focuses on 3093 latency-sensitive services, such as self-driving vehicles, remote 3094 surgery, or drone control. 3096 o mMTC (massive Machine Type Communications) focuses on services 3097 that have high requirements for connection density, such as those 3098 typical for smart city and smart agriculture use cases. 3100 A 5G bearer network could use DetNet to provide hard resource 3101 isolation across slices and within the slice. For example consider 3102 Slice-A and Slice-B, with DetNet used to transit services URLLC-A and 3103 URLLC-B over them. Without DetNet, URLLC-A and URLLC-B would compete 3104 for bandwidth resource, and latency and reliability would not be 3105 guaranteed. With DetNet, URLLC-A and URLLC-B have separate bandwidth 3106 reservation and there is no resource conflict between them, as though 3107 they were in different logical networks. 3109 10.4. Non-5G Applications of Network Slicing 3111 Although operation of services not related to 5G is not part of the 3112 5G Network Slicing definition and scope, Network Slicing is likely to 3113 become a preferred approach to providing various services across a 3114 shared physical infrastructure. Examples include providing 3115 electrical utilities services and pro audio services via slices. Use 3116 cases like these could become more common once the work for the 5G 3117 core network evolves to include wired as well as wireless access. 3119 10.5. Limitations of DetNet in Network Slicing 3121 DetNet cannot cover every Network Slicing use case. One issue is 3122 that DetNet is a point-to-point or point-to-multipoint technology, 3123 however Network Slicing ultimately needs multi-point to multi-point 3124 guarantees. 
Another issue is that the number of flows that can be 3125 carried by DetNet is limited by DetNet scalability; flow aggregation 3126 and queuing management modification may help address this. 3127 Additional work and discussion are needed to address these topics. 3129 10.6. Network Slicing Today and Future 3131 Network Slicing has the promise to satisfy many requirements of 3132 future network deployment scenarios, but it is still a collection of 3133 ideas and analysis, without a specific technical solution. DetNet is 3134 one of various technologies that have potential to be used in Network 3135 Slicing, along with for example Flex-E and Segment Routing. For more 3136 information please see the IETF99 Network Slicing BOF session agenda 3137 and materials. 3139 10.7. Network Slicing Asks 3141 o Isolation from other flows through Queuing Management 3143 o Service Quality Customization and Guarantee 3145 o Security 3147 11. Use Case Common Themes 3149 This section summarizes the expected properties of a DetNet network, 3150 based on the use cases as described in this draft. 3152 11.1. Unified, standards-based network 3154 11.1.1. Extensions to Ethernet 3156 A DetNet network is not "a new kind of network" - it is based on 3157 extensions to existing Ethernet standards, including elements of IEEE 3158 802.1 AVB/TSN and related standards. Presumably it will be possible 3159 to run DetNet over other underlying transports besides Ethernet, but 3160 Ethernet is explicitly supported. 3162 11.1.2. Centrally Administered 3164 In general a DetNet network is not expected to be "plug and play" - 3165 it is expected that there is some centralized network configuration 3166 and control system. Such a system may be in a single central 3167 location, or it may be distributed across multiple control entities 3168 that function together as a unified control system for the network. 3169 However, the ability to "hot swap" components (e.g.
due to 3170 malfunction) is similar enough to "plug and play" that this kind of 3171 behavior may be expected in DetNet networks, depending on the 3172 implementation. 3174 11.1.3. Standardized Data Flow Information Models 3176 Data Flow Information Models to be used with DetNet networks are to 3177 be specified by DetNet. 3179 11.1.4. L2 and L3 Integration 3181 A DetNet network is intended to integrate between Layer 2 (bridged) 3182 network(s) (e.g. AVB/TSN LAN) and Layer 3 (routed) network(s) (e.g. 3183 using IP-based protocols). One example of this is "making 3184 AVB/TSN-type deterministic performance available from Layer 3 3185 applications, e.g. using RTP". Another example is "connecting two 3186 AVB/TSN LANs ("islands") together through a standard router". 3188 11.1.5. Consideration for IPv4 3190 This Use Cases draft explicitly does not specify any particular 3191 implementation or protocol, however it has been observed that several 3192 of the use cases described (and their associated industries) are 3193 explicitly based on IPv4 (as opposed to IPv6) and it is not 3194 considered practical to expect them to migrate to IPv6 in order to 3195 use DetNet. Thus the expectation is that even if not every feature 3196 of DetNet is available in an IPv4 context, at least some of the 3197 significant benefits (such as guaranteed end-to-end delivery and low 3198 latency) are expected to be available. 3200 11.1.6. Guaranteed End-to-End Delivery 3202 Packets sent over DetNet are guaranteed not to be dropped by the 3203 network due to congestion. However, the network may drop packets for 3204 intended reasons, e.g. per security measures. Also note that this 3205 guarantee applies to the actions of DetNet protocol software, and 3206 does not provide any guarantee against lower level errors such as 3207 media errors or checksum errors. 3209 11.1.7.
Replacement for Multiple Proprietary Deterministic Networks 3211 There are many proprietary non-interoperable deterministic 3212 Ethernet-based networks currently available; DetNet is intended to 3213 provide an open-standards-based alternative to such networks. 3215 11.1.8. Mix of Deterministic and Best-Effort Traffic 3217 DetNet is intended to support coexistence of time-sensitive 3218 operational (OT) traffic and information (IT) traffic on the same 3219 ("unified") network. 3221 11.1.9. Unused Reserved BW to be Available to Best Effort Traffic 3223 If bandwidth reservations are made for a stream but the associated 3224 bandwidth is not used at any point in time, that bandwidth is made 3225 available on the network for best-effort traffic. If the owner of 3226 the reserved stream then starts transmitting again, the bandwidth is 3227 no longer available for best-effort traffic, on a moment-to-moment 3228 basis. Note that such "temporarily available" bandwidth is not 3229 available for time-sensitive traffic, which must have its own 3230 reservation. 3232 11.1.10. Lower Cost, Multi-Vendor Solutions 3234 The DetNet network specifications are intended to enable an ecosystem 3235 in which multiple vendors can create interoperable products, thus 3236 promoting device diversity and potentially higher numbers of each 3237 device manufactured, driving cost reduction and cost competition 3238 among vendors. The intent is that DetNet networks should be able to 3239 be created at lower cost and with greater diversity of available 3240 devices than existing proprietary networks. 3242 11.2. Scalable Size 3244 DetNet networks range in size from very small, e.g. inside a single 3245 industrial machine, to very large, for example a Utility Grid network 3246 spanning a whole country, and involving many "hops" over various 3247 kinds of links, for example radio repeaters, microwave links, fiber 3248 optic links, etc.
However recall that the scope of DetNet is 3249 confined to networks that are centrally administered, and explicitly 3250 excludes unbounded decentralized networks such as the Internet. 3252 11.2.1. Scalable Number of Flows 3254 The number of flows in a given network application can potentially be 3255 large, and can potentially grow faster than the number of nodes and 3256 hops. So the network should provide a sufficient (perhaps 3257 configurable) maximum number of flows for any given application. 3259 11.3. Scalable Timing Parameters and Accuracy 3261 11.3.1. Bounded Latency 3263 The DetNet Data Flow Information Model is expected to provide means 3264 to configure the network that include parameters for querying network 3265 path latency, requesting bounded latency for a given stream, 3266 requesting worst case maximum and/or minimum latency for a given path 3267 or stream, and so on. It is an expected case that the network may 3268 not be able to provide a given requested service level, and if so the 3269 network control system should reply that the requested service is 3270 not available (as opposed to accepting the parameter but then not 3271 delivering the desired behavior). 3273 11.3.2. Low Latency 3275 Applications may require "extremely low latency", however depending 3276 on the application this may mean very different latency values; for 3277 example "low latency" across a Utility grid network is on a different 3278 time scale than "low latency" in a motor control loop in a small 3279 machine. The intent is that the mechanisms for specifying desired 3280 latency include wide ranges, and that architecturally there is 3281 nothing to prevent arbitrarily low latencies from being implemented 3282 in a given network. 3284 11.3.3. Bounded Jitter (Latency Variation) 3286 As with the other Latency-related elements noted above, parameters 3287 should be available to determine or request the allowed variation in 3288 latency. 3290 11.3.4.
Symmetrical Path Delays 3292 Some applications would like to specify that the transit delay time 3293 values be equal for both the transmit and return paths. 3295 11.4. High Reliability and Availability 3297 Reliability is of critical importance to many DetNet applications, in 3298 which consequences of failure can be extraordinarily high in terms of 3299 cost and even human life. DetNet based systems are expected to be 3300 implemented with essentially arbitrarily high availability (for 3301 example 99.9999% up time, or even 12 nines). The intent is that the 3302 DetNet designs should not make any assumptions about the level of 3303 reliability and availability that may be required of a given system, 3304 and should define parameters for communicating these kinds of metrics 3305 within the network. 3307 A strategy used by DetNet for providing such extraordinarily high 3308 levels of reliability is to provide redundant paths that can be 3309 seamlessly switched between, while maintaining the required 3310 performance of that system. 3312 11.5. Security 3314 Security is of critical importance to many DetNet applications. A 3315 DetNet network must be able to be made secure against device 3316 failures, attackers, misbehaving devices, and so on. In a DetNet 3317 network the data traffic is expected to be time-sensitive, thus in 3318 addition to arriving with the data content as intended, the data must 3319 also arrive at the expected time. This may present "new" security 3320 challenges to implementers, and must be addressed accordingly. There 3321 are other security implications, including (but not limited to) the 3322 change in attack surface presented by packet replication and 3323 elimination. 3325 11.6.
Deterministic Flows 3327 Reserved bandwidth data flows must be isolated from each other and 3328 from best-effort traffic, so that even if the network is saturated 3329 with best-effort (and/or reserved bandwidth) traffic, the configured 3330 flows are not adversely affected. 3332 12. Use Cases Explicitly Out of Scope for DetNet 3334 This section contains use case text that has been determined to be 3335 outside of the scope of the present DetNet work. 3337 12.1. DetNet Scope Limitations 3339 The scope of DetNet is deliberately limited to specific use cases 3340 that are consistent with the WG charter, subject to the 3341 interpretation of the WG. At the time the DetNet Use Cases were 3342 solicited and provided by the authors the scope of DetNet was not 3343 clearly defined, and as that clarity has emerged, certain of the use 3344 cases have been determined to be outside the scope of the present 3345 DetNet work. Such text has been moved into this section to clarify 3346 that these use cases will not be supported by the DetNet work. 3348 The text in this section was moved here based on the following 3349 "exclusion" principles; alternatively, some draft text has been 3350 modified in situ to reflect these same principles rather than being 3351 moved to this section. 3353 The following principles have been established to clarify the scope 3354 of the present DetNet work. 3356 o The scope of the networks addressed by DetNet is limited to those 3357 that can be centrally controlled, i.e. an "enterprise" aka 3358 "corporate" network. This explicitly excludes "the open 3359 Internet". 3361 o Maintaining synchronized time across a DetNet network is crucial 3362 to its operation, however DetNet assumes that time is to be 3363 maintained using other means, for example (but not limited to) 3364 Precision Time Protocol ([IEEE1588]).
A use case may state the 3365 accuracy and reliability that it expects from the DetNet network 3366 as part of a whole system, however it is understood that such 3367 timing properties are not guaranteed by DetNet itself. It is 3368 currently an open question as to whether DetNet protocols will 3369 include a way for an application to communicate such timing 3370 expectations to the network, and if so whether they would be 3371 expected to materially affect the performance they would receive 3372 from the network as a result. 3374 12.2. Internet-based Applications 3376 There are many applications that communicate over the open Internet 3377 that could benefit from guaranteed delivery and bounded latency. 3378 However as noted above, all such applications when run over the open 3379 Internet are out of scope for DetNet. These same applications may be 3380 in-scope when run in constrained environments, i.e. within a 3381 centrally controlled DetNet network. The following are some examples 3382 of such applications. 3384 12.2.1. Use Case Description 3386 12.2.1.1. Media Content Delivery 3388 Media content delivery continues to be an important use of the 3389 Internet, yet users often experience poor quality audio and video due 3390 to the delay and jitter inherent in today's Internet. 3392 12.2.1.2. Online Gaming 3394 Online gaming is a significant part of the gaming market, however 3395 latency can degrade the end user experience. For example "First 3396 Person Shooter" games are highly delay-sensitive. 3398 12.2.1.3. Virtual Reality 3400 Virtual reality has many commercial applications including real 3401 estate presentations, remote medical procedures, and so on. Low 3402 latency is critical to interacting with the virtual world because 3403 perceptual delays can cause motion sickness. 3405 12.2.2. Internet-Based Applications Today 3407 Internet service today is by definition "best effort", with no 3408 guarantees on delivery or bandwidth. 3410 12.2.3. 
Internet-Based Applications Future 3412 The goal is an Internet on which one can play a video without 3413 glitches and play games without lag. 3415 For online gaming, the maximum round-trip delay can be 100ms, and 3416 stricter for FPS gaming, 10-50ms. Transport delay is the 3417 dominant part, with a 5-20ms budget. 3419 For VR, 1-10ms maximum delay is needed, and the total network budget 3420 is 1-5ms if doing remote VR. 3422 Flow identification can be used for gaming and VR, i.e. it can 3423 recognize a critical flow and provide appropriate latency bounds. 3425 12.2.4. Internet-Based Applications Asks 3427 o Unified control and management protocols to handle time-critical 3428 data flow 3430 o Application-aware flow filtering mechanism to recognize the timing 3431 critical flow without doing 5-tuple matching 3433 o Unified control plane to provide low latency service on Layer-3 3434 without changing the data plane 3436 o OAM system and protocols which can help to provide E2E-delay 3437 sensitive service provisioning 3439 12.3. Pro Audio and Video - Digital Rights Management (DRM) 3441 This section was moved here because this is considered a Link layer 3442 topic, not a direct responsibility of DetNet. 3444 Digital Rights Management (DRM) is very important to the audio and 3445 video industries. Any time protected content is introduced into a 3446 network there are DRM concerns that must be maintained (see 3447 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of 3448 network technology, however there are cases when a secure link 3449 supporting authentication and encryption is required by content 3450 owners to carry their audio or video content when it is outside their 3451 own secure environment (for example see [DCI]). 3453 As an example, two techniques are Digital Transmission Content 3454 Protection (DTCP) and High-Bandwidth Digital Content Protection 3455 (HDCP).
HDCP content is not approved for retransmission within any 3456 other type of DRM, while DTCP may be retransmitted under HDCP. 3457 Therefore, if the source of a stream is outside the network and 3458 uses HDCP protection, it may only be placed on the network 3459 with that same HDCP protection. 3461 12.4. Pro Audio and Video - Link Aggregation 3463 Note: The term "Link Aggregation" is used here as defined by the text 3464 in the following paragraph, i.e., not following the more common 3465 networking-industry definition. Current WG consensus is that this item won't be 3466 directly supported by the DetNet architecture, for example because it 3467 implies a guarantee of in-order delivery of packets, which conflicts 3468 with the core goal of achieving the lowest possible latency. 3470 For transmitting streams that require more bandwidth than a single 3471 link in the target network can support, link aggregation is a 3472 technique for combining (aggregating) the bandwidth available on 3473 multiple physical links to create a single logical link of the 3474 required bandwidth. However, if aggregation is to be used, the 3475 network controller (or equivalent) must be able to determine the 3476 maximum latency of any path through the aggregate link. 3478 12.5. Pro Audio and Video - Deterministic Time to Establish Streaming 3480 The DetNet Working Group has decided that guidelines for achieving 3481 a deterministic stream startup time are not within the scope 3482 of DetNet. If bounded timing for establishing or re-establishing streams 3483 is required in a given use case, it is up to the application/system 3484 to achieve this. 3486 13. Security Considerations 3488 This document covers a number of representative applications and 3489 network scenarios that are expected to make use of DetNet 3490 technologies. Each of the potential DetNet use cases will have 3491 security considerations from both the use-specific and DetNet 3492 technology perspectives. 
While some use-specific security 3493 considerations are discussed above, a more comprehensive discussion 3494 of such considerations is captured in DetNet Security Considerations 3495 [I-D.ietf-detnet-security]. Readers are encouraged to review that 3496 document to gain a more complete understanding of DetNet-related 3497 security considerations. 3499 14. Contributors 3501 RFC 7322 limits the number of authors listed on the front page of a 3502 draft to a maximum of 5, far fewer than the 20 individuals below who 3503 made important contributions to this draft. The editor wishes to 3504 thank and acknowledge each of the following authors for contributing 3505 text to this draft. See also Section 15. 3507 Craig Gunther (Harman International) 3508 10653 South River Front Parkway, South Jordan, UT 84095 3509 phone +1 801 568-7675, email craig.gunther@harman.com 3511 Pascal Thubert (Cisco Systems, Inc) 3512 Building D, 45 Allee des Ormes - BP1200, MOUGINS 3513 Sophia Antipolis 06254 FRANCE 3514 phone +33 497 23 26 34, email pthubert@cisco.com 3516 Patrick Wetterwald (Cisco Systems) 3517 45 Allees des Ormes, Mougins, 06250 FRANCE 3518 phone +33 4 97 23 26 36, email pwetterw@cisco.com 3520 Jean Raymond (Hydro-Quebec) 3521 1500 University, Montreal, H3A3S7, Canada 3522 phone +1 514 840 3000, email raymond.jean@hydro.qc.ca 3524 Jouni Korhonen (Broadcom Corporation) 3525 3151 Zanker Road, San Jose, 95134, CA, USA 3526 email jouni.nospam@gmail.com 3528 Yu Kaneko (Toshiba) 3529 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi, Kanagawa, Japan 3530 email yu1.kaneko@toshiba.co.jp 3532 Subir Das (Vencore Labs) 3533 150 Mount Airy Road, Basking Ridge, New Jersey, 07920, USA 3534 email sdas@appcomsci.com 3536 Balazs Varga (Ericsson) 3537 Konyves Kalman krt. 11/B, Budapest, Hungary, 1097 3538 email balazs.a.varga@ericsson.com 3540 Janos Farkas (Ericsson) 3541 Konyves Kalman krt. 
11/B, Budapest, Hungary, 1097 3542 email janos.farkas@ericsson.com 3544 Franz-Josef Goetz (Siemens) 3545 Gleiwitzerstr. 555, Nurnberg, Germany, 90475 3546 email franz-josef.goetz@siemens.com 3547 Juergen Schmitt (Siemens) 3548 Gleiwitzerstr. 555, Nurnberg, Germany, 90475 3549 email juergen.jues.schmitt@siemens.com 3551 Xavier Vilajosana (Worldsensing) 3552 483 Arago, Barcelona, Catalonia, 08013, Spain 3553 email xvilajosana@worldsensing.com 3555 Toktam Mahmoodi (King's College London) 3556 Strand, London WC2R 2LS, United Kingdom 3557 email toktam.mahmoodi@kcl.ac.uk 3559 Spiros Spirou (Intracom Telecom) 3560 19.7 km Markopoulou Ave., Peania, Attiki, 19002, Greece 3561 email spiros.spirou@gmail.com 3563 Petra Vizarreta (Technical University of Munich) 3564 Maxvorstadt, Arcisstrasse 21, Munich, 80333, Germany 3565 email petra.stojsavljevic@tum.de 3567 Daniel Huang (ZTE Corporation, Inc.) 3568 No. 50 Software Avenue, Nanjing, Jiangsu, 210012, P.R. China 3569 email huang.guangping@zte.com.cn 3571 Xuesong Geng (Huawei Technologies) 3572 email gengxuesong@huawei.com 3574 Diego Dujovne (Universidad Diego Portales) 3575 email diego.dujovne@mail.udp.cl 3577 Maik Seewald (Cisco Systems) 3578 email maseewal@cisco.com 3580 15. Acknowledgments 3582 15.1. Pro Audio 3584 This section was derived from draft-gunther-detnet-proaudio-req-01. 3586 The editors would like to acknowledge the help of the following 3587 individuals and the companies they represent: 3589 Jeff Koftinoff, Meyer Sound 3591 Jouni Korhonen, Associate Technical Director, Broadcom 3593 Pascal Thubert, CTAO, Cisco 3594 Kieran Tyrrell, Sienda New Media Technologies GmbH 3596 15.2. Utility Telecom 3598 This section was derived from draft-wetterwald-detnet-utilities-reqs- 3599 02. 3601 Faramarz Maghsoodlou, Ph. D. 
IoT Connected Industries and Energy 3602 Practice, Cisco 3604 Pascal Thubert, CTAO, Cisco 3606 The wind power generation use case has been extracted from the study 3607 of Wind Farms conducted within the 5GPPP Virtuwind Project. The 3608 project is funded by the European Union's Horizon 2020 research and 3609 innovation programme under grant agreement No 671648 (VirtuWind). 3611 15.3. Building Automation Systems 3613 This section was derived from draft-bas-usecase-detnet-00. 3615 15.4. Wireless for Industrial 3617 This section was derived from draft-thubert-6tisch-4detnet-01. 3619 This specification derives from the 6TiSCH architecture, which is the 3620 result of multiple interactions, in particular during the 6TiSCH 3621 (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at 3622 the IETF. 3624 The authors wish to thank Kris Pister, Thomas Watteyne, Xavier 3625 Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael 3626 Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon, 3627 Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey, 3628 Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria 3629 Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation 3630 and various contributions. 3632 15.5. Cellular Radio 3634 This section was derived from draft-korhonen-detnet-telreq-00. 3636 15.6. Industrial M2M 3638 The authors would like to thank Feng Chen and Marcel Kiessling for 3639 their comments and suggestions. 3641 15.7. Internet Applications and CoMP 3643 This section was derived from draft-zha-detnet-use-case-00 by Yiyong 3644 Zha. 3646 This document has benefited from reviews, suggestions, comments, and 3647 proposed text provided by the following members, listed in 3648 alphabetical order: Jing Huang, Junru Lin, Lehong Niu, and Oliver 3649 Huang. 3651 15.8. 
Network Slicing 3653 This section was written by Xuesong Geng, who would like to 3654 acknowledge Norm Finn and Mach Chen for their useful comments. 3656 15.9. Mining 3658 This section was written by Diego Dujovne in conjunction with Xavier 3659 Vilajosana. 3661 15.10. Private Blockchain 3663 This section was written by Daniel Huang. 3665 16. IANA Considerations 3667 This memo includes no requests from IANA. 3669 17. Informative References 3671 [Ahm14] Ahmed, M. and R. Kim, "Communication network architectures 3672 for smart-wind power farms", Energies, pp. 3900-3921, 3673 June 2014. 3675 [bacnetip] 3676 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP", 3677 January 1999. 3679 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND 3680 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_ 3681 and_Enhancement_v2.0, March 2015, 3682 . 3685 [CONTENT_PROTECTION] 3686 Olsen, D., "1722a Content Protection", 2012, 3687 . 3690 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI); 3691 Interface Specification", CPRI Specification V6.1, July 3692 2014, . 3695 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification, 3696 Version 1.2", 2012, . 3698 [eCPRI] IEEE Standards Association, "Common Public Radio 3699 Interface: eCPRI Interface 3700 Specification V1.0", 2017, . 3702 [ESPN_DC2] 3703 Daley, D., "ESPN's DC2 Scales AVB Large", 2014, 3704 . 3707 [flnet] Japan Electrical Manufacturers Association, "JEMA 1479 - 3708 English Edition", September 2012. 3710 [Fronthaul] 3711 Chen, D. and T. Mustala, "Ethernet Fronthaul 3712 Considerations", IEEE 1904.3, February 2015, 3713 . 3716 [I-D.ietf-6tisch-6top-interface] 3717 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer 3718 (6top) Interface", draft-ietf-6tisch-6top-interface-04 3719 (work in progress), July 2015. 
3721 [I-D.ietf-6tisch-architecture] 3722 Thubert, P., "An Architecture for IPv6 over the TSCH mode 3723 of IEEE 802.15.4", draft-ietf-6tisch-architecture-14 (work 3724 in progress), April 2018. 3726 [I-D.ietf-6tisch-coap] 3727 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and 3728 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work 3729 in progress), March 2015. 3731 [I-D.ietf-detnet-architecture] 3732 Finn, N., Thubert, P., Varga, B., and J. Farkas, 3733 "Deterministic Networking Architecture", draft-ietf- 3734 detnet-architecture-08 (work in progress), September 2018. 3736 [I-D.ietf-detnet-problem-statement] 3737 Finn, N. and P. Thubert, "Deterministic Networking Problem 3738 Statement", draft-ietf-detnet-problem-statement-07 (work 3739 in progress), October 2018. 3741 [I-D.ietf-detnet-security] 3742 Mizrahi, T., Grossman, E., Hacker, A., Das, S., Dowdell, 3743 J., Austad, H., Stanton, K., and N. Finn, "Deterministic 3744 Networking (DetNet) Security Considerations", draft-ietf- 3745 detnet-security-02 (work in progress), April 2018. 3747 [I-D.ietf-tictoc-1588overmpls] 3748 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L. 3749 Montini, "Transporting Timing messages over MPLS 3750 Networks", draft-ietf-tictoc-1588overmpls-07 (work in 3751 progress), October 2015. 3753 [I-D.kh-spring-ip-ran-use-case] 3754 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing 3755 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02 3756 (work in progress), November 2014. 3758 [I-D.svshah-tsvwg-deterministic-forwarding] 3759 Shah, S. and P. Thubert, "Deterministic Forwarding PHB", 3760 draft-svshah-tsvwg-deterministic-forwarding-04 (work in 3761 progress), August 2015. 3763 [I-D.wang-6tisch-6top-sublayer] 3764 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer 3765 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in 3766 progress), November 2015. 
3768 [IEC-60870-5-104] 3769 International Electrotechnical Commission, "International 3770 Standard IEC 60870-5-104: Network access for IEC 3771 60870-5-101 using standard transport profiles", June 2006. 3773 [IEC61400] 3774 "International standard 61400-25: Communications for 3775 monitoring and control of wind power plants", June 2013. 3777 [IEEE1588] 3778 IEEE, "IEEE Standard for a Precision Clock Synchronization 3779 Protocol for Networked Measurement and Control Systems", 3780 IEEE Std 1588-2008, 2008, 3781 . 3784 [IEEE1646] 3785 "Communication Delivery Time Performance Requirements for 3786 Electric Power Substation Automation", IEEE Standard 3787 1646-2004, Apr 2004. 3789 [IEEE1722] 3790 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport 3791 Protocol for Time Sensitive Applications in a Bridged 3792 Local Area Network", IEEE Std 1722-2011, 2011, 3793 . 3796 [IEEE19143] 3797 IEEE Standards Association, "P1914.3/D3.1 Draft Standard 3798 for Radio over Ethernet Encapsulations and Mappings", 3799 IEEE 1914.3, 2018, 3800 . 3802 [IEEE802.1TSNTG] 3803 IEEE Standards Association, "IEEE 802.1 Time-Sensitive 3804 Networks Task Group", March 2013, 3805 . 3807 [IEEE802154] 3808 IEEE standard for Information Technology, "IEEE std. 3809 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC) 3810 and Physical Layer (PHY) Specifications for Low-Rate 3811 Wireless Personal Area Networks". 3813 [IEEE802154e] 3814 IEEE standard for Information Technology, "IEEE std. 3815 802.15.4, Part. 3816 15.4: Wireless Medium Access Control (MAC) and Physical 3817 Layer (PHY) Specifications for Low-Rate Wireless Personal 3818 Area Networks, June 2011 as amended by IEEE std. 3819 802.15.4e, Part. 15.4: Low-Rate Wireless Personal Area 3820 Networks (LR-WPANs) Amendment 1: MAC sublayer", April 3821 2012. 3823 [IEEE8021AS] 3824 IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)", 3825 IEEE 802.1AS-2011, 2011, 3826 . 
3829 [IEEE8021CM] 3830 Farkas, J., "Time-Sensitive Networking for Fronthaul", 3831 Unapproved PAR, PAR for a New IEEE Standard; 3832 IEEE P802.1CM, April 2015, 3833 . 3836 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation", 3837 . 3839 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006. 3841 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0", 3842 1994. 3844 [MEF22.1.1] 3845 MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells", 3846 MEF 22.1.1, July 2014, 3847 . 3850 [MEF8] MEF, "Implementation Agreement for the Emulation of PDH 3851 Circuits over Metro Ethernet Networks", MEF 8, October 3852 2004, 3853 . 3856 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and 3857 wireless system", ICT-317669-METIS/D1.1 ICT- 3858 317669-METIS/D1.1, April 2013, . 3861 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL 3862 SPECIFICATION V1.1b", December 2006. 3864 [MODBUS] Modbus Organization, Inc., "MODBUS Application Protocol 3865 Specification", Apr 2012. 3867 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0, 3868 February 2015, . 3871 [NGMN-fronth] 3872 NGMN Alliance, "Fronthaul Requirements for C-RAN", March 3873 2015, . 3876 [OPCXML] OPC Foundation, "OPC XML-Data Access Specification", Dec 3877 2004. 3879 [PCE] IETF, "Path Computation Element", 3880 . 3882 [profibus] 3883 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001. 3885 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 3886 Label Switching Architecture", RFC 3031, 3887 DOI 10.17487/RFC3031, January 2001, 3888 . 3890 [RFC3411] Harrington, D., Presuhn, R., and B. Wijnen, "An 3891 Architecture for Describing Simple Network Management 3892 Protocol (SNMP) Management Frameworks", STD 62, RFC 3411, 3893 DOI 10.17487/RFC3411, December 2002, 3894 . 3896 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation 3897 Edge-to-Edge (PWE3) Architecture", RFC 3985, 3898 DOI 10.17487/RFC3985, March 2005, 3899 . 
3901 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure- 3902 Agnostic Time Division Multiplexing (TDM) over Packet 3903 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006, 3904 . 3906 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and 3907 P. Pate, "Structure-Aware Time Division Multiplexed (TDM) 3908 Circuit Emulation Service over Packet Switched Network 3909 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007, 3910 . 3912 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi, 3913 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087, 3914 DOI 10.17487/RFC5087, December 2007, 3915 . 3917 [RFC5905] Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch, 3918 "Network Time Protocol Version 4: Protocol and Algorithms 3919 Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010, 3920 . 3922 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J., 3923 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur, 3924 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for 3925 Low-Power and Lossy Networks", RFC 6550, 3926 DOI 10.17487/RFC6550, March 2012, 3927 . 3929 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N., 3930 and D. Barthel, "Routing Metrics Used for Path Calculation 3931 in Low-Power and Lossy Networks", RFC 6551, 3932 DOI 10.17487/RFC6551, March 2012, 3933 . 3935 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using 3936 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the 3937 Internet of Things (IoT): Problem Statement", RFC 7554, 3938 DOI 10.17487/RFC7554, May 2015, 3939 . 3941 [RFC8169] Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S., 3942 and A. Vainshtein, "Residence Time Measurement in MPLS 3943 Networks", RFC 8169, DOI 10.17487/RFC8169, May 2017, 3944 . 3946 [Spe09] Sperotto, A., Sadre, R., Vliet, F., and A. Pras, "A First 3947 Look into SCADA Network Traffic", IP Operations and 3948 Management, p. 518-521. , June 2009. 
3950 [SRP_LATENCY] 3951 Gunther, C., "Specifying SRP Latency", 2014, 3952 . 3955 [SyncE] ITU-T, "G.8261: Timing and synchronization aspects in 3956 packet networks", Recommendation G.8261, August 2013, 3957 . 3959 [TR38801] 3GPP, "3GPP TR 38.801, Technical 3960 Specification Group Radio Access Network; Study on new 3961 radio access technology: Radio access architecture and 3962 interfaces (Release 14)", 2017, 3963 . 3966 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements 3967 for Evolved Universal Terrestrial Radio Access Network 3968 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013. 3970 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception 3971 (FDD)", 3GPP TS 25.104 3.14.0, March 2007. 3973 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access 3974 (E-UTRA); Base Station (BS) radio transmission and 3975 reception", 3GPP TS 36.104 10.11.0, July 2013. 3977 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access 3978 (E-UTRA); Requirements for support of radio resource 3979 management", 3GPP TS 36.133 12.7.0, April 2015. 3981 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access 3982 (E-UTRA); Physical channels and modulation", 3GPP 3983 TS 36.211 10.7.0, March 2013. 3985 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA) 3986 and Evolved Universal Terrestrial Radio Access Network 3987 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300 3988 10.11.0, September 2013. 3990 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive 3991 Networks Task Group", 2013, 3992 . 3994 [WirelessHART] 3995 www.hartcomm.org, "Industrial Communication Networks - 3996 Wireless Communication Network and Communication Profiles 3997 - WirelessHART - IEC 62591", 2010. 3999 Author's Address 4001 Ethan Grossman (editor) 4002 Dolby Laboratories, Inc. 
4003 1275 Market Street 4004 San Francisco, CA 94103 4005 USA 4007 Phone: +1 415 645 4726 4008 Email: ethan.grossman@dolby.com 4009 URI: http://www.dolby.com