Internet Engineering Task Force                         E. Grossman, Ed.
Internet-Draft                                                     DOLBY
Intended status: Informational                                C. Gunther
Expires: April 6, 2017                                            HARMAN
                                                              P. Thubert
                                                           P. Wetterwald
                                                                   CISCO
                                                              J. Raymond
                                                            HYDRO-QUEBEC
                                                             J. Korhonen
                                                                BROADCOM
                                                               Y. Kaneko
                                                                 Toshiba
                                                                  S. Das
                                          Applied Communication Sciences
                                                                  Y. Zha
                                                                  HUAWEI
                                                                B. Varga
                                                               J. Farkas
                                                                Ericsson
                                                                F. Goetz
                                                              J. Schmitt
                                                                 Siemens
                                                           X. Vilajosana
                                                            Worldsensing
                                                             T. Mahmoodi
                                                   King's College London
                                                               S. Spirou
                                                        Intracom Telecom
                                                            P. Vizarreta
                                    Technical University of Munich, TUM
                                                         October 3, 2016

                   Deterministic Networking Use Cases
                      draft-ietf-detnet-use-cases-11

Abstract

This draft documents requirements in several diverse industries to
establish multi-hop paths for characterized flows with deterministic
properties.  In this context, "deterministic" implies streams that
provide guaranteed bandwidth and bounded latency, that can be
established from either a Layer 2 or Layer 3 (IP) interface, and that
can co-exist with best-effort traffic on an IP network.

Additional requirements include optional redundant paths, very high
reliability paths, time synchronization, and clock distribution.

Industries considered include wireless for industrial applications,
professional audio, electrical utilities, building automation systems,
radio/mobile access networks, automotive, and gaming.

For each use case, this document identifies the application,
representative solutions used today, and the new capabilities an IETF
DetNet solution may enable.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF).  Note that other groups may also distribute working
documents as Internet-Drafts.  The list of current Internet-Drafts is
at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.
It is inappropriate to use Internet-Drafts as reference material or to
cite them other than as "work in progress."

This Internet-Draft will expire on April 6, 2017.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of the
Trust Legal Provisions and are provided without warranty as described
in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Pro Audio and Video
     2.1.  Use Case Description
       2.1.1.  Uninterrupted Stream Playback
       2.1.2.  Synchronized Stream Playback
       2.1.3.  Sound Reinforcement
       2.1.4.  Deterministic Time to Establish Streaming
       2.1.5.  Secure Transmission
         2.1.5.1.  Safety
     2.2.  Pro Audio Today
     2.3.  Pro Audio Future
       2.3.1.  Layer 3 Interconnecting Layer 2 Islands
       2.3.2.  High Reliability Stream Paths
       2.3.3.  Integration of Reserved Streams into IT Networks
       2.3.4.  Use of Unused Reservations by Best-Effort Traffic
       2.3.5.  Traffic Segregation
         2.3.5.1.  Packet Forwarding Rules, VLANs and Subnets
         2.3.5.2.  Multicast Addressing (IPv4 and IPv6)
       2.3.6.  Latency Optimization by a Central Controller
       2.3.7.  Reduced Device Cost Due To Reduced Buffer Memory
     2.4.  Pro Audio Asks
   3.  Electrical Utilities
     3.1.  Use Case Description
       3.1.1.  Transmission Use Cases
         3.1.1.1.  Protection
         3.1.1.2.  Intra-Substation Process Bus Communications
         3.1.1.3.  Wide Area Monitoring and Control Systems
         3.1.1.4.  IEC 61850 WAN engineering guidelines requirement
                   classification
       3.1.2.  Generation Use Case
         3.1.2.1.  Control of the Generated Power
         3.1.2.2.  Control of the Generation Infrastructure
       3.1.3.  Distribution use case
         3.1.3.1.  Fault Location Isolation and Service Restoration
                   (FLISR)
     3.2.  Electrical Utilities Today
       3.2.1.  Security Current Practices and Limitations
     3.3.  Electrical Utilities Future
       3.3.1.  Migration to Packet-Switched Network
       3.3.2.  Telecommunications Trends
         3.3.2.1.  General Telecommunications Requirements
         3.3.2.2.  Specific Network topologies of Smart Grid
                   Applications
         3.3.2.3.  Precision Time Protocol
       3.3.3.  Security Trends in Utility Networks
     3.4.  Electrical Utilities Asks
   4.  Building Automation Systems
     4.1.  Use Case Description
     4.2.  Building Automation Systems Today
       4.2.1.  BAS Architecture
       4.2.2.  BAS Deployment Model
       4.2.3.  Use Cases for Field Networks
         4.2.3.1.  Environmental Monitoring
         4.2.3.2.  Fire Detection
         4.2.3.3.  Feedback Control
       4.2.4.  Security Considerations
     4.3.  BAS Future
     4.4.  BAS Asks
   5.  Wireless for Industrial
     5.1.  Use Case Description
       5.1.1.  Network Convergence using 6TiSCH
       5.1.2.  Common Protocol Development for 6TiSCH
     5.2.  Wireless Industrial Today
     5.3.  Wireless Industrial Future
       5.3.1.  Unified Wireless Network and Management
         5.3.1.1.  PCE and 6TiSCH ARQ Retries
       5.3.2.  Schedule Management by a PCE
         5.3.2.1.  PCE Commands and 6TiSCH CoAP Requests
         5.3.2.2.  6TiSCH IP Interface
       5.3.3.  6TiSCH Security Considerations
     5.4.  Wireless Industrial Asks
   6.  Cellular Radio
     6.1.  Use Case Description
       6.1.1.  Network Architecture
       6.1.2.  Delay Constraints
       6.1.3.  Time Synchronization Constraints
       6.1.4.  Transport Loss Constraints
       6.1.5.  Security Considerations
     6.2.  Cellular Radio Networks Today
       6.2.1.  Fronthaul
       6.2.2.  Midhaul and Backhaul
     6.3.  Cellular Radio Networks Future
     6.4.  Cellular Radio Networks Asks
   7.  Industrial M2M
     7.1.  Use Case Description
     7.2.  Industrial M2M Communication Today
       7.2.1.  Transport Parameters
       7.2.2.  Stream Creation and Destruction
     7.3.  Industrial M2M Future
     7.4.  Industrial M2M Asks
   8.  Use Case Common Elements
   9.  Use Cases Explicitly Out of Scope for DetNet
     9.1.  DetNet Scope Limitations
     9.2.  Internet-based Applications
       9.2.1.  Use Case Description
         9.2.1.1.  Media Content Delivery
         9.2.1.2.  Online Gaming
         9.2.1.3.  Virtual Reality
       9.2.2.  Internet-Based Applications Today
       9.2.3.  Internet-Based Applications Future
       9.2.4.  Internet-Based Applications Asks
     9.3.  Pro Audio and Video - Digital Rights Management (DRM)
     9.4.  Pro Audio and Video - Link Aggregation
   10.  Acknowledgments
     10.1.  Pro Audio
     10.2.  Utility Telecom
     10.3.  Building Automation Systems
     10.4.  Wireless for Industrial
     10.5.  Cellular Radio
     10.6.  Industrial M2M
     10.7.  Internet Applications and CoMP
     10.8.  Electrical Utilities
   11.  Informative References
   Authors' Addresses

1.  Introduction

This draft presents use cases from diverse industries which have in
common a need for deterministic streams, but which also differ notably
in their network topologies and specific desired behavior.  Together,
they provide broad industry context for DetNet and a yardstick against
which proposed DetNet designs can be measured (to what extent does a
proposed design satisfy these various use cases?).

For DetNet, use cases explicitly do not define requirements; the
DetNet WG will consider the use cases, decide which elements are in
scope for DetNet, and incorporate the results into future drafts.
Similarly, this use case draft explicitly does not suggest any
specific design, architecture, or protocols; those will be topics of
future drafts.

We present for each use case the answers to the following questions:

o  What is the use case?

o  How is it addressed today?

o  How would you like it to be addressed in the future?

o  What do you want the IETF to deliver?

The level of detail in each use case should be sufficient to express
the relevant elements of the use case, but not more.

At the end we consider the use cases collectively, and examine the
most significant goals they have in common.

2.  Pro Audio and Video

2.1.  Use Case Description

The professional audio and video industry ("ProAV") includes:

o  Music and film content creation

o  Broadcast

o  Cinema

o  Live sound

o  Public address, media and emergency systems at large venues
   (airports, stadiums, churches, theme parks).

These industries have already transitioned audio and video signals
from analog to digital.  However, the digital interconnect systems
remain primarily point-to-point, with a single signal (or a small
number of signals) per link, interconnected with purpose-built
hardware.

These industries are now transitioning to packet-based infrastructure
to reduce cost, increase routing flexibility, and integrate with
existing IT infrastructure.

Today ProAV applications have no way to establish deterministic
streams from a standards-based Layer 3 (IP) interface, which is a
fundamental limitation to the use cases described here.  Deterministic
streams can be created today within standards-based Layer 2 LANs (e.g.
using IEEE 802.1 AVB); however, these are not routable via IP and thus
are not effective for distribution over wider areas (for example
broadcast events that span wide geographical areas).

It would be highly desirable if such streams could be routed over the
open Internet; however, solutions with more limited scope (e.g.
enterprise networks) would still provide a substantial improvement.

The following sections describe specific ProAV use cases.

2.1.1.  Uninterrupted Stream Playback

Transmitting audio and video streams for live playback is unlike
common file transfer because uninterrupted stream playback in the
presence of network errors cannot be achieved by re-trying the
transmission; by the time the missing or corrupt packet has been
identified it is too late to execute a re-try operation.
Buffering can be used to provide enough delay to allow time for one or
more retries; however, this is not an effective solution in
applications where large delays (latencies) are not acceptable (as
discussed below).

Streams with guaranteed bandwidth can eliminate network congestion as
a cause of transmission errors that would lead to playback
interruption.  Use of redundant paths can further mitigate
transmission errors and provide greater stream reliability.

2.1.2.  Synchronized Stream Playback

Latency in this context is the time between when a signal is initially
sent over a stream and when it is received.  A common example in ProAV
is time-synchronizing audio and video when they take separate paths
through the playback system.  In this case the latency of both the
audio and video streams must be bounded and consistent if the sound is
to remain matched to the movement in the video.  A common tolerance
for audio/video sync is one NTSC video frame (about 33ms), and to
maintain the audience perception of correct lip sync the latency needs
to be consistent within some reasonable tolerance, for example 10%.

A common architecture for synchronizing multiple streams that have
different paths through the network (and thus potentially different
latencies) is to enable measurement of the latency of each path, and
have the data sinks (for example speakers) delay (buffer) all packets
on all but the slowest path.  Each packet of each stream is assigned a
presentation time based on the longest required delay.  This implies
that all sinks must maintain a common time reference of sufficient
accuracy, which can be achieved by any of various techniques.

This type of architecture is commonly implemented using a central
controller that determines path delays and arbitrates buffering
delays.

2.1.3.  Sound Reinforcement

Consider the latency (delay) from when a person speaks into a
microphone to when their voice emerges from the speaker.  If this
delay is longer than about 10-15 milliseconds it is noticeable and can
make a sound reinforcement system unusable (see slide 6 of
[SRP_LATENCY]).  (Anyone who has tried to speak while hearing a
delayed echo of their own voice knows this experience.)

Note that the 15ms latency bound includes all parts of the signal
path, not just the network, so the network latency must be
significantly less than 15ms.

In some cases local performers must perform in synchrony with a remote
broadcast.  In such cases the latencies of the broadcast stream and
the local performer must be adjusted to match each other, with a worst
case of one video frame (33ms for NTSC video).

In cases where audio phase is a consideration, for example beam-
forming using multiple speakers, latency requirements can be in the 10
microsecond range (1 audio sample at 96kHz).

2.1.4.  Deterministic Time to Establish Streaming

Note: It is still under WG discussion whether this topic (stream
startup time) is within the scope of DetNet.

Some audio systems installed in public environments (airports,
hospitals) have unique requirements with regard to health, safety and
fire concerns.  One such requirement is a maximum of 3 seconds for a
system to respond to an emergency detection and begin sending
appropriate warning signals and alarms without human intervention.
For this requirement to be met, the system must support a bounded and
acceptable time from a notification signal to specific stream
establishment.  For further details see [ISO7240-16].

Similar requirements apply when the system is restarted after a power
cycle, cable re-connection, or system reconfiguration.
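The tolerances quoted in Sections 2.1.2 and 2.1.3 follow directly from video frame and audio sample periods.  The short sketch below is only an illustrative check of that arithmetic; the exact 30000/1001 NTSC frame rate is an assumption behind the "about 33ms" figure in the text.

```python
# Illustrative arithmetic behind the ProAV latency figures in
# Sections 2.1.2 and 2.1.3 (values taken from the text; the exact
# NTSC rate of 30000/1001 frames/s is an assumption).

NTSC_FPS = 30000 / 1001            # ~29.97 frames per second
frame_time_ms = 1000 / NTSC_FPS    # one video frame: ~33.4 ms

# Lip-sync tolerance: about one frame, held consistent to within ~10%
sync_jitter_ms = 0.10 * frame_time_ms   # ~3.3 ms

# Phase-critical audio (beam-forming): one sample period at 96 kHz
sample_period_us = 1_000_000 / 96_000   # ~10.4 microseconds

print(f"NTSC frame time:      {frame_time_ms:.1f} ms")
print(f"sync jitter budget:   {sync_jitter_ms:.1f} ms")
print(f"96 kHz sample period: {sample_period_us:.1f} us")
```

The spread between these figures, from tens of milliseconds down to tens of microseconds, is why the use cases above treat lip sync, sound reinforcement, and phase-coherent audio as distinct latency classes.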
In many cases such re-establishment of streaming state must be
achieved by the peer devices themselves, i.e. without a central
controller (since such a controller may only be present during initial
network configuration).

Video systems introduce related requirements, for example when
transitioning from one camera feed (video stream) to another (see
[STUDIO_IP] and [ESPN_DC2]).

2.1.5.  Secure Transmission

2.1.5.1.  Safety

Professional audio systems can include amplifiers capable of
generating hundreds or thousands of watts of audio power, which if
used incorrectly can cause hearing damage to those in the vicinity.
Apart from the usual care required of system operators to prevent such
incidents, the network traffic that controls these devices must be
secured (as with any sensitive application traffic).

2.2.  Pro Audio Today

Some proprietary systems have been created which enable deterministic
streams at Layer 3; however, they are "engineered networks" which
require careful configuration to operate, often require that the
system be over-provisioned, and implicitly assume that all devices on
the network voluntarily play by the rules of that network.  To enable
these industries to successfully transition to an interoperable multi-
vendor packet-based infrastructure requires effective open standards,
and we believe that establishing relevant IETF standards is a crucial
factor.

2.3.  Pro Audio Future

2.3.1.  Layer 3 Interconnecting Layer 2 Islands

It would be valuable to enable IP to connect multiple Layer 2 LANs.

As an example, ESPN recently constructed a state-of-the-art 194,000 sq
ft, $125 million broadcast studio called DC2.  The DC2 network is
capable of handling 46 Tbps of throughput with 60,000 simultaneous
signals.  Inside the facility are 1,100 miles of fiber feeding four
audio control rooms (see [ESPN_DC2]).

In designing DC2 they replaced as much point-to-point technology as
they could with packet-based technology.  They constructed seven
individual studios using Layer 2 LANs (using IEEE 802.1 AVB) that were
entirely effective at routing audio within the LANs.  However, to
interconnect these Layer 2 LAN islands they ended up using dedicated
paths in a custom SDN (Software Defined Networking) router, because no
standards-based routing solution was available.

2.3.2.  High Reliability Stream Paths

On-air and other live media streams are often backed up with redundant
links that seamlessly act to deliver the content when the primary link
fails for any reason.  In point-to-point systems this is provided by
an additional point-to-point link; the analogous requirement in a
packet-based system is to provide an alternate path through the
network such that no individual link can bring down the system.

2.3.3.  Integration of Reserved Streams into IT Networks

A commonly cited goal of moving to a packet-based media infrastructure
is that costs can be reduced by using off-the-shelf commodity network
hardware.  In addition, economies of scale can be realized by
combining media infrastructure with IT infrastructure.

In keeping with these goals, stream reservation technology should be
compatible with existing protocols, and not compromise use of the
network for best-effort (non-time-sensitive) traffic.

2.3.4.  Use of Unused Reservations by Best-Effort Traffic

In cases where stream bandwidth is reserved but not currently used (or
is under-utilized), that bandwidth must be available to best-effort
(i.e. non-time-sensitive) traffic.
For example, a single stream may be nailed up (reserved) for specific
media content that needs to be presented at different times of the
day, ensuring timely delivery of that content, yet in between those
times the full bandwidth of the network can be utilized for best-
effort tasks such as file transfers.

This also addresses a concern of IT network administrators considering
the addition of reserved-bandwidth traffic to their networks: that
users will reserve large quantities of bandwidth and then never un-
reserve it even though they are not using it, until the network has no
bandwidth left.

2.3.5.  Traffic Segregation

Note: It is still under WG discussion whether this topic will be
addressed by DetNet.

Sink devices may be low-cost devices with limited processing power.
In order to not overwhelm the CPUs in these devices it is important to
limit the amount of traffic that these devices must process.

As an example, consider the use of individual seat speakers in a
cinema.  These speakers are typically required to be cost-reduced
since the quantities in a single theater can reach hundreds of seats.
Discovery protocols alone in a one-thousand-seat theater can generate
enough broadcast traffic to overwhelm a low-powered CPU.  Thus an
installation like this will benefit greatly from some type of traffic
segregation that can define groups of seats to reduce traffic within
each group.  All seats in the theater must still be able to
communicate with a central controller.

There are many techniques that can be used to support this
requirement, including (but not limited to) the following examples.

2.3.5.1.  Packet Forwarding Rules, VLANs and Subnets

Packet forwarding rules can be used to eliminate some extraneous
streaming traffic from reaching potentially low-powered sink devices;
however, there may be other types of broadcast traffic that should be
eliminated by other means, for example VLANs or IP subnets.

2.3.5.2.  Multicast Addressing (IPv4 and IPv6)

Multicast addressing is commonly used to keep bandwidth utilization of
shared links to a minimum.

Because of the MAC address forwarding nature of Layer 2 bridges, it is
important that a multicast MAC address is only associated with one
stream.  This will prevent reservations from forwarding packets from
one stream down a path that has no interested sinks, simply because
there is another stream on that same path that shares the same
multicast MAC address.

Since each multicast MAC address can represent 32 different IPv4
multicast addresses, there must be a process in place to make sure
this does not occur.  Requiring the use of IPv6 addresses can achieve
this; however, due to their continued prevalence, solutions that are
effective for IPv4 installations are also required.

2.3.6.  Latency Optimization by a Central Controller

A central network controller might also perform optimizations based on
the individual path delays.  For example, sinks that are closer to the
source can inform the controller that they can accept greater latency,
since they will be buffering packets to match the presentation times
of farther-away sinks.  The controller might then move a stream
reservation on a short path to a longer path in order to free up
bandwidth for other critical streams on that short path.  See slides
3-5 of [SRP_LATENCY].
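The buffering architecture of Section 2.1.2 and the controller optimization just described can be sketched as follows.  This is a minimal illustration, not a protocol proposal; the sink names and delay values are hypothetical, and a real controller would measure path delays rather than take them as constants.

```python
# Sketch of a central controller's per-sink buffering computation
# (hypothetical sinks and delays; a real controller measures these).

path_delay_ms = {            # network delay from source to each sink
    "speaker-A": 2.0,
    "speaker-B": 5.5,
    "recorder":  12.0,
}

# All sinks present at the time of the slowest path: the presentation
# offset is the longest delay, and each sink buffers the difference.
presentation_offset = max(path_delay_ms.values())
buffer_ms = {sink: presentation_offset - d
             for sink, d in path_delay_ms.items()}

# Sinks with slack (a non-zero buffer) are candidates the controller
# could move to a longer path, freeing bandwidth on the short path
# for streams with stricter latency requirements.
movable = [sink for sink, slack in buffer_ms.items() if slack > 0]

print(buffer_ms)   # {'speaker-A': 10.0, 'speaker-B': 6.5, 'recorder': 0.0}
print(movable)     # ['speaker-A', 'speaker-B']
```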
Additional optimization can be achieved in cases where sinks have
differing latency requirements; for example, in a live outdoor concert
the speaker sinks have stricter latency requirements than the
recording hardware sinks.  See slide 7 of [SRP_LATENCY].

2.3.7.  Reduced Device Cost Due To Reduced Buffer Memory

Device cost can be reduced in a system with guaranteed reservations
with a small bounded latency, due to the reduced requirements for
buffering (i.e. memory) on sink devices.  For example, a theme park
might broadcast a live event across the globe via a Layer 3 protocol;
in such cases the size of the buffers required is proportional to the
latency bounds and jitter caused by delivery, which depend on the
worst-case segment of the end-to-end network path.  For example, on
today's open Internet the latency is typically unacceptable for audio
and video streaming without many seconds of buffering.  In such
scenarios a single gateway device on the local network that receives
the feed from the remote site would provide the expensive buffering
required to mask the latency and jitter issues associated with long-
distance delivery.  Sink devices in the local location would have no
additional buffering requirements, and thus no additional costs,
beyond those required for delivery of local content.  The sink device
would receive packets identical to those sent by the source and would
be unaware of any latency or jitter issues along the path.

2.4.  Pro Audio Asks

o  Layer 3 routing on top of AVB (and/or other high-QoS networks)

o  Content delivery with bounded, lowest possible latency

o  IntServ and DiffServ integration with AVB (where practical)

o  Single network for A/V and IT traffic

o  Standards-based, interoperable, multi-vendor

o  IT department friendly

o  Enterprise-wide networks (e.g. the size of San Francisco, but not
   the whole Internet (yet...))

3.  Electrical Utilities

3.1.  Use Case Description

Many systems that an electrical utility deploys today rely on high
availability and deterministic behavior of the underlying networks.
Here we present use cases in Transmission, Generation and
Distribution, including key timing and reliability metrics.  We also
discuss security issues and industry trends which affect the
architecture of next-generation utility networks.

3.1.1.  Transmission Use Cases

3.1.1.1.  Protection

Protection means not only the protection of human operators but also
the protection of the electrical equipment and the preservation of the
stability and frequency of the grid.  If a fault occurs in the
transmission or distribution of electricity then severe damage can
occur to human operators, electrical equipment and the grid itself,
leading to blackouts.

Communication links in conjunction with protection relays are used to
selectively isolate faults on high voltage lines, transformers,
reactors and other important electrical equipment.  The role of the
teleprotection system is to selectively disconnect a faulty part by
transferring command signals within the shortest possible time.

3.1.1.1.1.  Key Criteria

The key criteria for measuring teleprotection performance are command
transmission time, dependability and security.  These criteria are
defined by IEC standard 60834 as follows:

o  Transmission time (Speed): The time between the moment when the
   state changes at the transmitter input and the moment of the
   corresponding change at the receiver output, including propagation
   delay.
Overall operating time for a teleprotection system 591 includes the time for initiating the command at the transmitting 592 end, the propagation delay over the network (including equipment) 593 and the selection and decision time at the receiving end, 594 including any additional delay due to a noisy environment. 596 o Dependability: The ability to issue and receive valid commands in 597 the presence of interference and/or noise, by minimizing the 598 probability of missing command (PMC). Dependability targets are 599 typically set for a specific bit error rate (BER) level. 601 o Security: The ability to prevent false tripping due to a noisy 602 environment, by minimizing the probability of unwanted commands 603 (PUC). Security targets are also set for a specific bit error 604 rate (BER) level. 606 Additional elements of the teleprotection system that impact its 607 performance include: 609 o Network bandwidth 611 o Failure recovery capacity (aka resiliency) 613 3.1.1.1.2. Fault Detection and Clearance Timing 615 Most power line equipment can tolerate short circuits or faults for 616 up to approximately five power cycles before sustaining irreversible 617 damage or affecting other segments in the network. This translates 618 to a total fault clearance time of 100ms. As a safety precaution, 619 however, the actual operation time of protection systems is limited to 620 70-80 percent of this period, including fault recognition time, 621 command transmission time and line breaker switching time. 623 Some system components, such as large electromechanical switches, 624 require a particularly long time to operate and take up the majority of 625 the total clearance time, leaving only a 10ms window for the 626 telecommunications part of the protection scheme, independent of the 627 distance to travel.
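The timing budget above can be checked with a short worked calculation (a sketch only; the figures are those quoted in the text, and a 50 Hz line is assumed for the five-cycle budget, since five 20 ms cycles match the stated 100 ms):

```python
# Worked check of the fault clearance timing budget described above,
# assuming a 50 Hz line (five 20 ms cycles give the 100 ms figure
# quoted in the text).

cycle_ms_50hz = 1000 / 50            # 20 ms per power cycle at 50 Hz
total_budget_ms = 5 * cycle_ms_50hz  # ~100 ms before irreversible damage

# Protection systems are limited to 70-80 percent of this period
protection_window_ms = (0.70 * total_budget_ms, 0.80 * total_budget_ms)

# IEC 61850: 1/4 to 1/2 cycle for the most critical messages (60 Hz lines)
cycle_ms_60hz = 1000 / 60
iec61850_window_ms = (cycle_ms_60hz / 4, cycle_ms_60hz / 2)

print(total_budget_ms)       # 100.0
print(protection_window_ms)  # (70.0, 80.0)
print(iec61850_window_ms)    # roughly 4 to 8 ms
```

The same arithmetic with a 60 Hz budget (five 16.7 ms cycles, ~83 ms) would tighten every window proportionally.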
Given the sensitivity of the issue, new networks 628 impose requirements that are even more stringent: IEC standard 61850 629 limits the transfer time for protection messages to 1/4 - 1/2 cycle 630 or 4 - 8ms (for 60Hz lines) for the most critical messages. 632 3.1.1.1.3. Symmetric Channel Delay 634 Note: It is currently under WG discussion whether symmetric path 635 delays are to be guaranteed by DetNet. 637 Teleprotection channels which are differential must be synchronous, 638 which means that any delays on the transmit and receive paths must 639 match each other. Teleprotection systems ideally support zero 640 asymmetric delay; typical legacy relays can tolerate delay 641 discrepancies of up to 750us. 643 Some tools available for lowering delay variation below this 644 threshold are: 646 o For legacy systems using Time Division Multiplexing (TDM), jitter 647 buffers at the multiplexers on each end of the line can be used to 648 offset delay variation by queuing sent and received packets. The 649 length of the queues must balance the need to regulate the rate of 650 transmission with the need to limit overall delay, as larger 651 buffers result in increased latency. 653 o For jitter-prone IP packet networks, traffic management tools can 654 ensure that the teleprotection signals receive the highest 655 transmission priority to minimize jitter. 657 o Standard packet-based synchronization technologies, such as 658 IEEE 1588-2008 Precision Time Protocol (PTP) and Synchronous Ethernet 659 (Sync-E), can help keep networks stable by maintaining a highly 660 accurate clock source on the various network devices. 662 3.1.1.1.4. Teleprotection Network Requirements (IEC 61850) 664 The following table captures the main network metrics based on the 665 IEC 61850 standard.
667 +-----------------------------+-------------------------------------+ 668 | Teleprotection Requirement | Attribute | 669 +-----------------------------+-------------------------------------+ 670 | One way maximum delay | 4-10 ms | 671 | Asymmetric delay required | Yes | 672 | Maximum jitter | less than 250 us (750 us for legacy | 673 | | IED) | 674 | Topology | Point to point, point to Multi- | 675 | | point | 676 | Availability | 99.9999 | 677 | precise timing required | Yes | 678 | Recovery time on node | less than 50ms - hitless | 679 | failure | | 680 | performance management | Yes, Mandatory | 681 | Redundancy | Yes | 682 | Packet loss | 0.1% to 1% | 683 +-----------------------------+-------------------------------------+ 685 Table 1: Teleprotection network requirements 687 3.1.1.1.5. Inter-Trip Protection Scheme 689 "Inter-tripping" is the signal-controlled tripping of a circuit 690 breaker to complete the isolation of a circuit or piece of apparatus 691 in concert with the tripping of other circuit breakers. 693 +--------------------------------+----------------------------------+ 694 | Inter-Trip protection | Attribute | 695 | Requirement | | 696 +--------------------------------+----------------------------------+ 697 | One way maximum delay | 5 ms | 698 | Asymmetric delay required | No | 699 | Maximum jitter | Not critical | 700 | Topology | Point to point, point to Multi- | 701 | | point | 702 | Bandwidth | 64 Kbps | 703 | Availability | 99.9999 | 704 | precise timing required | Yes | 705 | Recovery time on node failure | less than 50ms - hitless | 706 | performance management | Yes, Mandatory | 707 | Redundancy | Yes | 708 | Packet loss | 0.1% | 709 +--------------------------------+----------------------------------+ 711 Table 2: Inter-Trip protection network requirements 713 3.1.1.1.6. Current Differential Protection Scheme 715 Current differential protection is commonly used for line protection, 716 and is typical for protecting parallel circuits.
At both ends of the 717 lines the current is measured by the differential relays, and both 718 relays will trip the circuit breaker if the current going into the 719 line does not equal the current going out of the line. This type of 720 protection scheme assumes some form of communications being present 721 between the relays at both ends of the line, to allow both relays to 722 compare measured current values. Line differential protection 723 schemes assume a very low telecommunications delay between both 724 relays, often as low as 5ms. Moreover, as those systems are often 725 not time-synchronized, they also assume symmetric telecommunications 726 paths with constant delay, which allows comparing current measurement 727 values taken at the exact same time. 729 +----------------------------------+--------------------------------+ 730 | Current Differential protection | Attribute | 731 | Requirement | | 732 +----------------------------------+--------------------------------+ 733 | One way maximum delay | 5 ms | 734 | Asymmetric delay Required | Yes | 735 | Maximum jitter | less than 250 us (750us for | 736 | | legacy IED) | 737 | Topology | Point to point, point to | 738 | | Multi-point | 739 | Bandwidth | 64 Kbps | 740 | Availability | 99.9999 | 741 | precise timing required | Yes | 742 | Recovery time on node failure | less than 50ms - hitless | 743 | performance management | Yes, Mandatory | 744 | Redundancy | Yes | 745 | Packet loss | 0.1% | 746 +----------------------------------+--------------------------------+ 748 Table 3: Current Differential Protection metrics 750 3.1.1.1.7. Distance Protection Scheme 752 The Distance (Impedance Relay) protection scheme is based on voltage and 753 current measurements. The network metrics are similar (but not 754 identical) to those of Current Differential protection.
756 +-------------------------------+-----------------------------------+ 757 | Distance protection | Attribute | 758 | Requirement | | 759 +-------------------------------+-----------------------------------+ 760 | One way maximum delay | 5 ms | 761 | Asymmetric delay Required | No | 762 | Maximum jitter | Not critical | 763 | Topology | Point to point, point to Multi- | 764 | | point | 765 | Bandwidth | 64 Kbps | 766 | Availability | 99.9999 | 767 | precise timing required | Yes | 768 | Recovery time on node failure | less than 50ms - hitless | 769 | performance management | Yes, Mandatory | 770 | Redundancy | Yes | 771 | Packet loss | 0.1% | 772 +-------------------------------+-----------------------------------+ 774 Table 4: Distance Protection requirements 776 3.1.1.1.8. Inter-Substation Protection Signaling 778 This use case describes the exchange of Sampled Value and/or GOOSE 779 (Generic Object Oriented Substation Events) messages between 780 Intelligent Electronic Devices (IED) in two substations for 781 protection and tripping coordination. The two IEDs are in a master- 782 slave mode. 784 The Current Transformer or Voltage Transformer (CT/VT) in one 785 substation sends the sampled analog voltage or current value to the 786 Merging Unit (MU) over hard wire. The MU sends the time-synchronized 787 61850-9-2 sampled values to the slave IED. The slave IED forwards 788 the information to the Master IED in the other substation. The 789 master IED makes the determination (for example based on sampled 790 value differentials) to send a trip command to the originating IED. 791 Once the slave IED/Relay receives the GOOSE trip for breaker 792 tripping, it opens the breaker. It then sends a confirmation message 793 back to the master. All data exchanges between IEDs are either 794 through Sampled Value and/or GOOSE messages.
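The master IED's trip determination based on sampled value differentials can be sketched as a simple comparison (illustrative only; the function name, current values, and the pickup threshold are hypothetical, not taken from IEC 61850 or IEC 60834):

```python
# Minimal sketch of a current differential trip decision: the master
# IED compares time-aligned current samples from both ends of the line
# and issues a trip when the differential current exceeds a pickup
# threshold. All values here are hypothetical examples.

def should_trip(i_local_amps, i_remote_amps, pickup_amps=50.0):
    """Trip if the current entering the line differs from the
    current leaving it by more than the pickup threshold."""
    differential = abs(i_local_amps - i_remote_amps)
    return differential > pickup_amps

# Healthy line: current in roughly equals current out -> no trip
print(should_trip(400.0, 398.5))   # False

# Internal fault: part of the current flows into the fault -> trip
print(should_trip(400.0, 120.0))   # True
```

This comparison is only valid when both samples were taken at the same instant, which is why the schemes above assume symmetric, constant-delay paths (or explicit time synchronization).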
796 +----------------------------------+--------------------------------+ 797 | Inter-Substation protection | Attribute | 798 | Requirement | | 799 +----------------------------------+--------------------------------+ 800 | One way maximum delay | 5 ms | 801 | Asymmetric delay Required | No | 802 | Maximum jitter | Not critical | 803 | Topology | Point to point, point to | 804 | | Multi-point | 805 | Bandwidth | 64 Kbps | 806 | Availability | 99.9999 | 807 | precise timing required | Yes | 808 | Recovery time on node failure | less than 50ms - hitless | 809 | performance management | Yes, Mandatory | 810 | Redundancy | Yes | 811 | Packet loss | 1% | 812 +----------------------------------+--------------------------------+ 814 Table 5: Inter-Substation Protection requirements 816 3.1.1.2. Intra-Substation Process Bus Communications 818 This use case describes the data flow from the CT/VT to the IEDs in 819 the substation via the MU. The CT/VT in the substation sends the 820 sampled value (analog voltage or current) to the MU over hard wire. 821 The MU sends the time-synchronized 61850-9-2 sampled values to the 822 IEDs in the substation in GOOSE message format. The GPS Master Clock 823 can send 1PPS or IRIG-B format to the MU through a serial port or 824 IEEE 1588 protocol via a network. Process bus communication using 825 61850 simplifies connectivity within the substation, removes the 826 requirement for multiple serial connections, and removes the slow 827 serial bus architectures that are typically used. This also provides 828 increased flexibility and increased speed with the use of multicast 829 messaging between multiple devices.
831 +----------------------------------+--------------------------------+ 832 | Intra-Substation protection | Attribute | 833 | Requirement | | 834 +----------------------------------+--------------------------------+ 835 | One way maximum delay | 5 ms | 836 | Asymmetric delay Required | No | 837 | Maximum jitter | Not critical | 838 | Topology | Point to point, point to | 839 | | Multi-point | 840 | Bandwidth | 64 Kbps | 841 | Availability | 99.9999 | 842 | precise timing required | Yes | 843 | Recovery time on Node failure | less than 50ms - hitless | 844 | performance management | Yes, Mandatory | 845 | Redundancy | Yes - No | 846 | Packet loss | 0.1% | 847 +----------------------------------+--------------------------------+ 849 Table 6: Intra-Substation Protection requirements 851 3.1.1.3. Wide Area Monitoring and Control Systems 853 The application of synchrophasor measurement data from Phasor 854 Measurement Units (PMU) to Wide Area Monitoring and Control Systems 855 promises to provide important new capabilities for improving system 856 stability. Access to PMU data enables more timely situational 857 awareness over larger portions of the grid than has been 858 possible historically with normal SCADA (Supervisory Control and Data 859 Acquisition) data. Handling the volume and real-time nature of 860 synchrophasor data presents unique challenges for existing 861 application architectures. A Wide Area Management System (WAMS) makes 862 it possible for the condition of the bulk power system to be observed 863 and understood in real-time so that protective, preventative, or 864 corrective action can be taken.
Because of the very high sampling 865 rate of measurements and the strict requirement for time 866 synchronization of the samples, WAMS has stringent telecommunications 867 requirements in an IP network that are captured in the following 868 table: 870 +----------------------+--------------------------------------------+ 871 | WAMS Requirement | Attribute | 872 +----------------------+--------------------------------------------+ 873 | One way maximum | 50 ms | 874 | delay | | 875 | Asymmetric delay | No | 876 | Required | | 877 | Maximum jitter | Not critical | 878 | Topology | Point to point, point to Multi-point, | 879 | | Multi-point to Multi-point | 880 | Bandwidth | 100 Kbps | 881 | Availability | 99.9999 | 882 | precise timing | Yes | 883 | required | | 884 | Recovery time on | less than 50ms - hitless | 885 | Node failure | | 886 | performance | Yes, Mandatory | 887 | management | | 888 | Redundancy | Yes | 889 | Packet loss | 1% | 890 | Consecutive Packet | At least 1 packet per application cycle | 891 | Loss | must be received. | 892 +----------------------+--------------------------------------------+ 894 Table 7: WAMS Special Communication Requirements 896 3.1.1.4. IEC 61850 WAN engineering guidelines requirement 897 classification 899 The IEC (International Electrotechnical Commission) has recently 900 published a Technical Report which offers guidelines on how to define 901 and deploy Wide Area Networks for the interconnection of electric 902 substations, generation plants and SCADA operation centers. IEC 903 61850-90-12 provides a classification of WAN communication 904 requirements into four classes.
Table 8 summarizes these requirements: 906 +----------------+------------+------------+------------+-----------+ 907 | WAN | Class WA | Class WB | Class WC | Class WD | 908 | Requirement | | | | | 909 +----------------+------------+------------+------------+-----------+ 910 | Application | EHV (Extra | HV (High | MV (Medium | General | 911 | field | High | Voltage) | Voltage) | purpose | 912 | | Voltage) | | | | 913 | Latency | 5 ms | 10 ms | 100 ms | > 100 ms | 914 | Jitter | 10 us | 100 us | 1 ms | 10 ms | 915 | Latency | 100 us | 1 ms | 10 ms | 100 ms | 916 | Asymmetry | | | | | 917 | Time Accuracy | 1 us | 10 us | 100 us | 10 to 100 | 918 | | | | | ms | 919 | Bit Error rate | 10-7 to | 10-5 to | 10-3 | | 920 | | 10-6 | 10-4 | | | 921 | Unavailability | 10-7 to | 10-5 to | 10-3 | | 922 | | 10-6 | 10-4 | | | 923 | Recovery delay | Zero | 50 ms | 5 s | 50 s | 924 | Cyber security | extremely | High | Medium | Medium | 925 | | high | | | | 926 +----------------+------------+------------+------------+-----------+ 928 Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC 930 3.1.2. Generation Use Case 932 Energy generation systems are complex infrastructures that require 933 control of both the generated power and the generation 934 infrastructure. 936 3.1.2.1. Control of the Generated Power 938 The electrical power generation frequency must be maintained within a 939 very narrow band. Deviations from the acceptable frequency range are 940 detected and the required signals are sent to the power plants for 941 frequency regulation. 943 Automatic Generation Control (AGC) is a system for adjusting the 944 power output of generators at different power plants, in response to 945 changes in the load.
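The frequency regulation loop that AGC closes can be illustrated with a textbook droop calculation (a sketch only; the 5% droop, 50 Hz nominal frequency, and 200 MW rating are assumed for illustration and do not come from the text):

```python
# Illustrative governor droop calculation: a generator raises or
# lowers output in proportion to the frequency deviation. A 5% droop
# means a 5% frequency drop would call for 100% of rated output.
# All parameter values here are assumed examples.

def droop_adjustment_mw(f_actual_hz, f_nominal_hz=50.0,
                        droop=0.05, rated_mw=200.0):
    """Power output adjustment (MW) for a given frequency deviation."""
    deviation_pu = (f_nominal_hz - f_actual_hz) / f_nominal_hz
    return (deviation_pu / droop) * rated_mw

# Frequency sagging to 49.9 Hz on a 200 MW unit -> raise output ~8 MW
print(round(droop_adjustment_mw(49.9), 3))   # 8.0

# Frequency exactly nominal -> no adjustment
print(droop_adjustment_mw(50.0))             # 0.0
```

AGC supervises many such units, which is why the FCAG requirements below tolerate a relatively relaxed 500 ms delay but still demand high availability.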
947 +---------------------------------------------------+---------------+ 948 | FCAG (Frequency Control Automatic Generation) | Attribute | 949 | Requirement | | 950 +---------------------------------------------------+---------------+ 951 | One way maximum delay | 500 ms | 952 | Asymmetric delay Required | No | 953 | Maximum jitter | Not critical | 954 | Topology | Point to | 955 | | point | 956 | Bandwidth | 20 Kbps | 957 | Availability | 99.999 | 958 | precise timing required | Yes | 959 | Recovery time on Node failure | N/A | 960 | performance management | Yes, | 961 | | Mandatory | 962 | Redundancy | Yes | 963 | Packet loss | 1% | 964 +---------------------------------------------------+---------------+ 966 Table 9: FCAG Communication Requirements 968 3.1.2.2. Control of the Generation Infrastructure 970 The control of the generation infrastructure combines requirements 971 from industrial automation systems and energy generation systems. In 972 this section we present the use case of the control of the generation 973 infrastructure of a wind turbine. 975 | 976 | 977 | +-----------------+ 978 | | +----+ | 979 | | |WTRM| WGEN | 980 WROT x==|===| | | 981 | | +----+ WCNV| 982 | |WNAC | 983 | +---+---WYAW---+--+ 984 | | | 985 | | | +----+ 986 |WTRF | |WMET| 987 | | | | 988 Wind Turbine | +--+-+ 989 Controller | | 990 WTUR | | | 991 WREP | | | 992 WSLG | | | 993 WALG | WTOW | | 995 Figure 1: Wind Turbine Control Network 997 Figure 1 presents the subsystems that operate a wind turbine.
These 998 subsystems include 1000 o WROT (Rotor Control) 1002 o WNAC (Nacelle Control) (nacelle: housing containing the generator) 1004 o WTRM (Transmission Control) 1006 o WGEN (Generator) 1008 o WYAW (Yaw Controller) (of the tower head) 1010 o WCNV (In-Turbine Power Converter) 1012 o WMET (External Meteorological Station providing real time 1013 information to the controllers of the tower) 1015 Traffic characteristics relevant for the network planning and 1016 dimensioning process in a wind turbine scenario are listed below. 1017 The values in this section are based mainly on the relevant 1018 references [Ahm14] and [Spe09]. Each logical node (Figure 1) is a 1019 part of the metering network and produces analog measurements and 1020 status information which must comply with their respective data rate 1021 constraints. 1023 +-----------+--------+--------+-------------+---------+-------------+ 1024 | Subsystem | Sensor | Analog | Data Rate | Status | Data rate | 1025 | | Count | Sample | (bytes/sec) | Sample | (bytes/sec) | 1026 | | | Count | | Count | | 1027 +-----------+--------+--------+-------------+---------+-------------+ 1028 | WROT | 14 | 9 | 642 | 5 | 10 | 1029 | WTRM | 18 | 10 | 2828 | 8 | 16 | 1030 | WGEN | 14 | 12 | 73764 | 2 | 4 | 1031 | WCNV | 14 | 12 | 74060 | 2 | 4 | 1032 | WTRF | 12 | 5 | 73740 | 2 | 4 | 1033 | WNAC | 12 | 9 | 112 | 3 | 6 | 1034 | WYAW | 7 | 8 | 220 | 4 | 8 | 1035 | WTOW | 4 | 1 | 8 | 3 | 6 | 1036 | WMET | 7 | 7 | 228 | - | - | 1037 +-----------+--------+--------+-------------+---------+-------------+ 1039 Table 10: Wind Turbine Data Rate Constraints 1041 Quality of Service (QoS) constraints for different services are 1042 presented in Table 11. These constraints are defined by IEEE 1646 1043 standard [IEEE1646] and IEC 61400 standard [IEC61400]. 
1045 +---------------------+---------+-------------+---------------------+ 1046 | Service | Latency | Reliability | Packet Loss Rate | 1047 +---------------------+---------+-------------+---------------------+ 1048 | Analogue measure | 16 ms | 99.99% | < 10-6 | 1049 | Status information | 16 ms | 99.99% | < 10-6 | 1050 | Protection traffic | 4 ms | 100.00% | < 10-9 | 1051 | Reporting and | 1 s | 99.99% | < 10-6 | 1052 | logging | | | | 1053 | Video surveillance | 1 s | 99.00% | No specific | 1054 | | | | requirement | 1055 | Internet connection | 60 min | 99.00% | No specific | 1056 | | | | requirement | 1057 | Control traffic | 16 ms | 100.00% | < 10-9 | 1058 | Data polling | 16 ms | 99.99% | < 10-6 | 1059 +---------------------+---------+-------------+---------------------+ 1061 Table 11: Wind Turbine Reliability and Latency Constraints 1063 3.1.2.2.1. Intra-Domain Network Considerations 1065 A wind turbine is composed of a large set of subsystems including 1066 sensors and actuators which require time-critical operation. The 1067 reliability and latency constraints of these different subsystems are 1068 shown in Table 11. These subsystems are connected to an intra-domain 1069 network which is used to monitor and control the operation of the 1070 turbine and connect it to the SCADA subsystems. The different 1071 components are interconnected using fiber optics, industrial buses, 1072 industrial Ethernet, EtherCAT, or a combination of them. Industrial 1073 signaling and control protocols such as Modbus, Profibus, Profinet 1074 and EtherCAT are used directly on top of the Layer 2 transport or 1075 encapsulated over TCP/IP. 1077 The data collected from the sensors and condition monitoring systems 1078 is multiplexed onto fiber cables for transmission to the base of the 1079 tower, and to remote control centers. The turbine controller 1080 continuously monitors the condition of the wind turbine and collects 1081 statistics on its operation.
This controller also manages a large 1082 number of switches, hydraulic pumps, valves, and motors within the 1083 wind turbine. 1085 There is usually a controller both at the bottom of the tower and in 1086 the nacelle. The communication between these two controllers usually 1087 takes place using fiber optics instead of copper links. Sometimes, a 1088 third controller is installed in the hub of the rotor and manages the 1089 pitch of the blades. That unit usually communicates with the nacelle 1090 unit using serial communications. 1092 3.1.2.2.2. Inter-Domain network considerations 1094 A remote control center belonging to a grid operator regulates the 1095 power output, enables remote actuation, and monitors the health of 1096 one or more wind parks in tandem. It connects to the local control 1097 center in a wind park over the Internet (Figure 2) via firewalls at 1098 both ends. The AS path between the remote control center and the wind 1099 park typically involves several ISPs at different tiers. For 1100 example, a remote control center in Denmark can regulate a wind park 1101 in Greece over the normal public AS path between the two locations. 1103 The remote control center is part of the SCADA system, setting the 1104 desired power output to the wind park and reading back the result 1105 once the new power output level has been set. Traffic between the 1106 remote control center and the wind park typically consists of 1107 protocols like IEC 60870-5-104 [IEC-60870-5-104], OPC XML-DA 1108 [OPCXML], Modbus [MODBUS], and SNMP [RFC3411]. Currently, traffic 1109 flows between the wind farm and the remote control center are best 1110 effort. QoS requirements are not strict, so no SLAs or service 1111 provisioning mechanisms (e.g., VPN) are employed. In case of events 1112 like equipment failure, tolerance for alarm delay is on the order of 1113 minutes, due to redundant systems already in place.
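As an illustration of the simple request/response polling traffic mentioned above, a Modbus TCP "Read Holding Registers" request can be assembled in a few lines (a sketch of the standardized on-wire frame layout only; the transaction ID, unit ID, and register addresses are arbitrary examples, not values from the text):

```python
import struct

# Sketch of a Modbus TCP "Read Holding Registers" (function 0x03)
# request: a 7-byte MBAP header followed by a 5-byte PDU. The
# transaction id, unit id, and addresses below are arbitrary examples.

def read_holding_registers(transaction_id, unit_id, start_addr, count):
    # PDU: function code (1 byte) + start address + register count
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP: transaction id, protocol id (0 = Modbus), remaining
    # length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers(transaction_id=1, unit_id=17,
                               start_addr=0x006B, count=3)
print(frame.hex())  # 0001000000061103006b0003
```

The response carries the register values in a similarly small PDU; this per-poll payload of a dozen or so bytes is consistent with the best-effort, low-rate traffic profile described above.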
1115 +--------------+ 1116 | | 1117 | | 1118 | Wind Park #1 +----+ 1119 | | | XXXXXX 1120 | | | X XXXXXXXX +----------------+ 1121 +--------------+ | XXXX X XXXXX | | 1122 +---+ XXX | Remote Control | 1123 XXX Internet +----+ Center | 1124 +----+X XXX | | 1125 +--------------+ | XXXXXXX XX | | 1126 | | | XX XXXXXXX +----------------+ 1127 | | | XXXXX 1128 | Wind Park #2 +----+ 1129 | | 1130 | | 1131 +--------------+ 1133 Figure 2: Wind Turbine Control via Internet 1135 We expect future use cases which require bounded latency, bounded 1136 jitter and extraordinarily low packet loss for inter-domain traffic 1137 flows due to the softwarization and virtualization of core wind farm 1138 equipment (e.g. switches, firewalls and SCADA server components). 1139 These factors will create opportunities for service providers to 1140 install new services and dynamically manage them from remote 1141 locations. For example, to enable fail-over of a local SCADA server, 1142 a SCADA server in another wind farm site (under the administrative 1143 control of the same operator) could be utilized temporarily 1144 (Figure 3). In that case local traffic would be forwarded to the 1145 remote SCADA server and existing intra-domain QoS and timing 1146 parameters would have to be met for inter-domain traffic flows. 1148 +--------------+ 1149 | | 1150 | | 1151 | Wind Park #1 +----+ 1152 | | | XXXXXX 1153 | | | X XXXXXXXX +----------------+ 1154 +--------------+ | XXXX XXXXX | | 1155 +---+ Operator XXX | Remote Control | 1156 XXX Administered +----+ Center | 1157 +----+X WAN XXX | | 1158 +--------------+ | XXXXXXX XX | | 1159 | | | XX XXXXXXX +----------------+ 1160 | | | XXXXX 1161 | Wind Park #2 +----+ 1162 | | 1163 | | 1164 +--------------+ 1166 Figure 3: Wind Turbine Control via Operator Administered WAN 1168 3.1.3. Distribution Use Case 1170 3.1.3.1.
Fault Location, Isolation, and Service Restoration (FLISR) 1172 Fault Location, Isolation, and Service Restoration (FLISR) refers to 1173 the ability to automatically locate the fault, isolate the fault, and 1174 restore service in the distribution network. This will likely be the 1175 first widespread application of distributed intelligence in the grid. 1177 Static power switch status (open/closed) in the network dictates the 1178 power flow to secondary substations. Reconfiguring the network in 1179 the event of a fault is typically done manually on site to energize/ 1180 de-energize alternate paths. Automating the operation of substation 1181 switchgear allows the flow of power to be altered automatically under 1182 fault conditions. 1184 FLISR can be managed centrally from a Distribution Management System 1185 (DMS) or executed locally through distributed control via intelligent 1186 switches and fault sensors. 1188 +----------------------+--------------------------------------------+ 1189 | FLISR Requirement | Attribute | 1190 +----------------------+--------------------------------------------+ 1191 | One way maximum | 80 ms | 1192 | delay | | 1193 | Asymmetric delay | No | 1194 | Required | | 1195 | Maximum jitter | 40 ms | 1196 | Topology | Point to point, point to Multi-point, | 1197 | | Multi-point to Multi-point | 1198 | Bandwidth | 64 Kbps | 1199 | Availability | 99.9999 | 1200 | precise timing | Yes | 1201 | required | | 1202 | Recovery time on | Depends on customer impact | 1203 | Node failure | | 1204 | performance | Yes, Mandatory | 1205 | management | | 1206 | Redundancy | Yes | 1207 | Packet loss | 0.1% | 1208 +----------------------+--------------------------------------------+ 1210 Table 12: FLISR Communication Requirements 1212 3.2. Electrical Utilities Today 1214 Many utilities still rely on complex environments formed of multiple 1215 application-specific proprietary networks, including TDM networks.
1217 In this kind of environment there is no mixing of OT and IT 1218 applications on the same network, and information is siloed between 1219 operational areas. 1221 Specific calibration of the full chain is required, which is costly. 1223 This kind of environment prevents utility operations from realizing 1224 the operational efficiency benefits, visibility, and functional 1225 integration of operational information across grid applications and 1226 data networks. 1228 In addition, there are many security-related issues as discussed in 1229 the following section. 1231 3.2.1. Security Current Practices and Limitations 1233 Grid monitoring and control devices are already targets for cyber 1234 attacks, and legacy telecommunications protocols have many intrinsic 1235 network-related vulnerabilities. For example, DNP3, Modbus, 1236 PROFIBUS/PROFINET, and other protocols are designed around a common 1237 paradigm of request and respond. Each protocol is designed for a 1238 master device such as an HMI (Human Machine Interface) system to send 1239 commands to subordinate slave devices to retrieve data (reading 1240 inputs) or control (writing to outputs). Because many of these 1241 protocols lack authentication, encryption, or other basic security 1242 measures, they are prone to network-based attacks, allowing a 1243 malicious actor or attacker to utilize the request-and-respond system 1244 as a mechanism for command-and-control-like functionality. Specific 1245 security concerns common to most industrial control protocols, including 1246 utility telecommunication protocols, include the following: 1248 o Network or transport errors (e.g. malformed packets or excessive 1249 latency) can cause protocol failure. 1251 o Protocol commands may be available that are capable of forcing 1252 slave devices into inoperable states, including powering-off 1253 devices, forcing them into a listen-only state, or disabling 1254 alarming.
1256 o Protocol commands may be available that are capable of restarting 1257 communications and otherwise interrupting processes. 1259 o Protocol commands may be available that are capable of clearing, 1260 erasing, or resetting diagnostic information such as counters and 1261 diagnostic registers. 1263 o Protocol commands may be available that are capable of requesting 1264 sensitive information about the controllers, their configurations, 1265 or other need-to-know information. 1267 o Most protocols are application layer protocols transported over 1268 TCP; therefore it is easy to transport commands over non-standard 1269 ports or inject commands into authorized traffic flows. 1271 o Protocol commands may be available that are capable of 1272 broadcasting messages to many devices at once (i.e. a potential 1273 DoS). 1275 o Protocol commands may be available to query the device network to 1276 obtain defined points and their values (i.e. a configuration 1277 scan). 1279 o Protocol commands may be available that will list all available 1280 function codes (i.e. a function scan). 1282 These inherent vulnerabilities, along with increasing connectivity 1283 between IT and OT networks, make network-based attacks very feasible. 1285 Simple injection of malicious protocol commands provides control over 1286 the target process. Altering legitimate protocol traffic can also 1287 alter information about a process and disrupt the legitimate controls 1288 that are in place over that process. A man-in-the-middle attack 1289 could provide both control over a process and misrepresentation of 1290 data back to operator consoles. 1292 3.3. Electrical Utilities Future 1294 The business and technology trends that are sweeping the utility 1295 industry will drastically transform the utility business from the way 1296 it has been for many decades.
At the core of many of these changes 1297 is a drive to modernize the electrical grid with an integrated 1298 telecommunications infrastructure. However, interoperability 1299 concerns, legacy networks, disparate tools, and stringent security 1300 requirements all add complexity to the grid transformation. Given 1301 the range and diversity of the requirements that should be addressed 1302 by the next generation telecommunications infrastructure, utilities 1303 need to adopt a holistic architectural approach to integrate the 1304 electrical grid with digital telecommunications across the entire 1305 power delivery chain. 1307 The key to modernizing grid telecommunications is to provide a 1308 common, adaptable, multi-service network infrastructure for the 1309 entire utility organization. Such a network serves as the platform 1310 for current capabilities while enabling future expansion of the 1311 network to accommodate new applications and services. 1313 To meet this diverse set of requirements, both today and in the 1314 future, the next generation utility telecommunications network will 1315 be based on an open-standards-based IP architecture. An end-to-end IP 1316 architecture takes advantage of nearly three decades of IP technology 1317 development, facilitating interoperability and device management 1318 across disparate networks and devices, as has already been 1319 demonstrated in many mission-critical and highly secure networks. 1321 IPv6 is seen as a future telecommunications technology for the Smart 1322 Grid; the IEC (International Electrotechnical Commission) and 1323 different National Committees have mandated a specific ad hoc group 1324 (AHG8) to define the migration strategy to IPv6 for all the IEC TC57 1325 power automation standards. 1327 We expect cloud-based SCADA systems to control and monitor the 1328 critical and non-critical subsystems of generation systems, for 1329 example wind farms. 1331 3.3.1.
Migration to Packet-Switched Network 1333 Throughout the world, utilities are increasingly planning for a 1334 future based on smart grid applications requiring advanced 1335 telecommunications systems. Many of these applications utilize 1336 packet connectivity for communicating information and control signals 1337 across the utility's Wide Area Network (WAN), made possible by 1338 technologies such as multiprotocol label switching (MPLS). The data 1339 that traverses the utility WAN includes: 1341 o Grid monitoring, control, and protection data 1343 o Non-control grid data (e.g. asset data for condition-based 1344 monitoring) 1346 o Physical safety and security data (e.g. voice and video) 1348 o Remote worker access to corporate applications (voice, maps, 1349 schematics, etc.) 1351 o Field area network backhaul for smart metering, and distribution 1352 grid management 1354 o Enterprise traffic (email, collaboration tools, business 1355 applications) 1357 WANs support this wide variety of traffic to and from substations, 1358 the transmission and distribution grid, generation sites, between 1359 control centers, and between work locations and data centers. To 1360 maintain this rapidly expanding set of applications, many utilities 1361 are taking steps to evolve present time-division multiplexing (TDM) 1362 based and frame relay infrastructures to packet systems. Packet- 1363 based networks are designed to provide greater functionality and 1364 higher levels of service for applications, while continuing to 1365 deliver reliability and deterministic (real-time) traffic support. 1367 3.3.2. Telecommunications Trends 1369 The following general telecommunications topics are in addition to 1370 the use cases that have been addressed so far. They include both 1371 current and future telecommunications-related topics that should be 1372 factored into the network architecture and design. 1374 3.3.2.1.
General Telecommunications Requirements 1376 o IP Connectivity everywhere 1378 o Monitoring services everywhere and from different remote centers 1379 o Move services to a virtual data center 1381 o Unify access to applications / information from the corporate 1382 network 1384 o Unify services 1386 o Unified Communications Solutions 1388 o Mix of fiber and microwave technologies - obsolescence of SONET/ 1389 SDH or TDM 1391 o Standardize grid telecommunications protocols to open standards to 1392 ensure interoperability 1394 o Reliable Telecommunications for Transmission and Distribution 1395 Substations 1397 o IEEE 1588 time synchronization Client / Server Capabilities 1399 o Integration of Multicast Design 1401 o QoS Requirements Mapping 1403 o Enable Future Network Expansion 1405 o Substation Network Resilience 1407 o Fast Convergence Design 1409 o Scalable Headend Design 1411 o Define Service Level Agreements (SLA) and Enable SLA Monitoring 1413 o Integration of 3G/4G Technologies and future technologies 1415 o Ethernet Connectivity for Station Bus Architecture 1417 o Ethernet Connectivity for Process Bus Architecture 1419 o Protection, teleprotection and PMU (Phasor Measurement Unit) on IP 1421 3.3.2.2. Specific Network topologies of Smart Grid Applications 1423 Utilities often have very large private telecommunications networks 1424 covering an entire territory / country. The main purpose of the 1425 network, until now, has been to support transmission network 1426 monitoring, control, and automation, remote control of generation 1427 sites, and providing FCAPS (Fault, Configuration, Accounting, 1428 Performance, Security) services from centralized network operation 1429 centers.
1431 Going forward, one network will support operation and maintenance of 1432 electrical networks (generation, transmission, and distribution), 1433 voice and data services for tens of thousands of employees and for 1434 exchange with neighboring interconnections, and administrative 1435 services. To meet those requirements, a utility may deploy several 1436 physical networks leveraging different technologies across the 1437 country: an optical network and a microwave network for instance. 1438 Each protection and automatism system between two points has two 1439 telecommunications circuits, one on each network. Path diversity 1440 between two substations is key. Regardless of the event type 1441 (hurricane, ice storm, etc.), one path shall stay available so the 1442 system can still operate. 1444 In the optical network, signals are transmitted over more than tens 1445 of thousands of circuits using fiber optic links, microwave and 1446 telephone cables. This network is the nervous system of the 1447 utility's power transmission operations. The optical network 1448 represents tens of thousands of km of cable deployed along the power 1449 lines, with individual runs as long as 280 km. 1451 3.3.2.3. Precision Time Protocol 1453 Some utilities do not use GPS clocks in generation substations. One 1454 of the main reasons is that some of the generation plants are 30 to 1455 50 meters deep under ground and the GPS signal can be weak and 1456 unreliable. Instead, atomic clocks are used. Clocks are 1457 synchronized amongst each other. Rubidium clocks provide clock and 1458 1ms timestamps for IRIG-B. 1460 Some companies plan to transition to the Precision Time Protocol 1461 (PTP, [IEEE1588]), distributing the synchronization signal over the 1462 IP/MPLS network. PTP provides a mechanism for synchronizing the 1463 clocks of participating nodes to a high degree of accuracy and 1464 precision.
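PTP's basic two-way timestamp exchange can be illustrated with a short sketch (a simplified illustration, not the full IEEE 1588 state machinery; the function name and timestamp values are purely illustrative):

```python
def ptp_offset_and_delay(t1, t2, t3, t4, asymmetry=0.0):
    """Estimate slave clock offset and mean path delay from one
    PTP Sync / Delay_Req exchange.

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)
    asymmetry: a-priori-known difference between the two one-way
        delays; PTP cannot detect asymmetry on its own, but it can
        correct for it when the value is known (sign convention here
        is illustrative).
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    offset = ((t2 - t1) - (t4 - t3)) / 2.0 - asymmetry / 2.0
    return offset, mean_path_delay

# Symmetric paths: slave clock 5 units ahead of master, one-way delay 10.
offset, delay = ptp_offset_and_delay(t1=0, t2=15, t3=20, t4=25)
```

With symmetric paths the estimate is exact; any unmodeled asymmetry shows up directly as an offset error, which is why the draft notes that asymmetry must be known a priori to be corrected.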
1466 PTP operates based on the following assumptions: 1468 It is assumed that the network eliminates cyclic forwarding of PTP 1469 messages within each communication path (e.g. by using a spanning 1470 tree protocol). 1472 PTP is tolerant of an occasional missed message, duplicated 1473 message, or message that arrived out of order. However, PTP 1474 assumes that such impairments are relatively rare. 1476 PTP was designed assuming a multicast communication model; however, 1477 PTP also supports a unicast communication model as long as the 1478 behavior of the protocol is preserved. 1480 Like all message-based time transfer protocols, PTP time accuracy 1481 is degraded by delay asymmetry in the paths taken by event 1482 messages. Asymmetry is not detectable by PTP; however, if such 1483 delays are known a priori, PTP can correct for it. 1485 IEC 61850 will recommend the use of the IEEE PTP 1588 Utility Profile 1486 (as defined in [IEC62439-3:2012] Annex B), which offers the support of 1487 redundant attachment of clocks to Parallel Redundancy Protocol (PRP) 1488 and High-availability Seamless Redundancy (HSR) networks. 1490 3.3.3. Security Trends in Utility Networks 1492 Although advanced telecommunications networks can assist in 1493 transforming the energy industry by playing a critical role in 1494 maintaining high levels of reliability, performance, and 1495 manageability, they also introduce the need for an integrated 1496 security infrastructure. Many of the technologies being deployed to 1497 support smart grid projects such as smart meters and sensors can 1498 increase the vulnerability of the grid to attack.
Top security 1499 concerns for utilities migrating to an intelligent smart grid 1500 telecommunications platform center on the following trends: 1502 o Integration of distributed energy resources 1504 o Proliferation of digital devices to enable management, automation, 1505 protection, and control 1507 o Regulatory mandates to comply with standards for critical 1508 infrastructure protection 1510 o Migration to new systems for outage management, distribution 1511 automation, condition-based maintenance, load forecasting, and 1512 smart metering 1514 o Demand for new levels of customer service and energy management 1516 This development of a diverse set of networks to support the 1517 integration of microgrids, open-access energy competition, and the 1518 use of network-controlled devices is driving the need for a converged 1519 security infrastructure for all participants in the smart grid, 1520 including utilities, energy service providers, and large commercial, 1521 industrial, and residential customers. Securing the assets of 1522 electric power delivery systems (from the control center to the 1523 substation, to the feeders and down to customer meters) requires an 1524 end-to-end security infrastructure that protects the myriad of 1525 telecommunications assets used to operate, monitor, and control power 1526 flow and measurement. 1528 "Cyber security" refers to all the security issues in automation and 1529 telecommunications that affect any functions related to the operation 1530 of the electric power systems.
Specifically, it involves the 1531 concepts of: 1533 o Integrity: data cannot be altered undetectably 1535 o Authenticity: the telecommunications parties involved must be 1536 validated as genuine 1538 o Authorization: only requests and commands from the authorized 1539 users can be accepted by the system 1541 o Confidentiality: data must not be accessible to any 1542 unauthenticated users 1544 When designing and deploying new smart grid devices and 1545 telecommunications systems, it is imperative to understand the 1546 various impacts of these new components under a variety of attack 1547 situations on the power grid. Consequences of a cyber attack on the 1548 grid telecommunications network can be catastrophic. This is why 1549 security for the smart grid is not just an ad hoc feature or product; 1550 it is a complete framework integrating both physical and cyber 1551 security requirements and covering the entire smart grid network 1552 from generation to distribution. Security has therefore become one 1553 of the main foundations of the utility telecom network architecture 1554 and must be considered at every layer with a defense-in-depth 1555 approach. Migrating to IP-based protocols is key to addressing these 1556 challenges for two reasons: 1558 o IP enables a rich set of features and capabilities to enhance the 1559 security posture 1561 o IP is based on open standards, which allows interoperability 1562 between different vendors and products, driving down the costs 1563 associated with implementing security solutions in OT networks.
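As a minimal illustration of the integrity and authenticity concepts listed above, a control command can carry a keyed message authentication code so that undetected alteration is impossible without the key (a sketch using Python's standard `hmac` module; the key, command string, and function names are invented for illustration, and key distribution is out of scope here):

```python
import hashlib
import hmac

SHARED_KEY = b"pre-provisioned-device-key"  # illustrative placeholder

def protect(command: bytes) -> bytes:
    """Append an HMAC-SHA-256 tag so that tampering is detectable
    and the sender is shown to hold the shared key."""
    tag = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return command + tag

def verify(message: bytes) -> bytes:
    """Return the command if the tag is valid; raise otherwise."""
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity/authenticity check failed")
    return command

msg = protect(b"OPEN breaker-12")
assert verify(msg) == b"OPEN breaker-12"
```

This addresses only two of the four properties; authorization and confidentiality would require additional mechanisms (access control and encryption respectively), which is why the draft argues for a complete framework rather than point features.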
1565 Securing OT (Operational Technology) telecommunications over packet- 1566 switched IP networks follows the same principles that are foundational 1567 for securing the IT infrastructure, i.e., consideration must be given 1568 to enforcing electronic access control for both person-to-machine and 1569 machine-to-machine communications, and providing the appropriate 1570 levels of data privacy, device and platform integrity, and threat 1571 detection and mitigation. 1573 3.4. Electrical Utilities Asks 1575 o Mixed L2 and L3 topologies 1577 o Deterministic behavior 1579 o Bounded latency and jitter 1581 o Tight feedback intervals 1583 o High availability, low recovery time 1585 o Redundancy, low packet loss 1587 o Precise timing 1589 o Centralized computing of deterministic paths 1591 o Distributed configuration may also be useful 1593 4. Building Automation Systems 1595 4.1. Use Case Description 1597 A Building Automation System (BAS) manages equipment and sensors in a 1598 building for improving residents' comfort, reducing energy 1599 consumption, and responding to failures and emergencies. For 1600 example, the BAS measures the temperature of a room using sensors and 1601 then controls the HVAC (heating, ventilating, and air conditioning) 1602 to maintain a set temperature and minimize energy consumption. 1604 A BAS primarily performs the following functions: 1606 o Periodically measures states of devices, for example humidity and 1607 illuminance of rooms, open/close state of doors, fan speed, etc. 1609 o Stores the measured data. 1611 o Provides the measured data to BAS systems and operators. 1613 o Generates alarms for abnormal state of devices. 1615 o Controls devices (e.g. turn off room lights at 10:00 PM). 1617 4.2. Building Automation Systems Today 1619 4.2.1. BAS Architecture 1621 A typical BAS architecture of today is shown in Figure 4.
1623 +----------------------------+ 1624 | | 1625 | BMS HMI | 1626 | | | | 1627 | +----------------------+ | 1628 | | Management Network | | 1629 | +----------------------+ | 1630 | | | | 1631 | LC LC | 1632 | | | | 1633 | +----------------------+ | 1634 | | Field Network | | 1635 | +----------------------+ | 1636 | | | | | | 1637 | Dev Dev Dev Dev | 1638 | | 1639 +----------------------------+ 1641 BMS := Building Management Server 1642 HMI := Human Machine Interface 1643 LC := Local Controller 1645 Figure 4: BAS architecture 1647 There are typically two layers of network in a BAS. The upper one is 1648 called the Management Network and the lower one is called the Field 1649 Network. In management networks an IP-based communication protocol 1650 is used, while in field networks non-IP based communication protocols 1651 ("field protocols") are mainly used. Field networks have specific 1652 timing requirements, whereas management networks can be best-effort. 1654 A Human Machine Interface (HMI) is typically a desktop PC used by 1655 operators to monitor and display device states, send device control 1656 commands to Local Controllers (LCs), and configure building schedules 1657 (for example "turn off all room lights in the building at 10:00 PM"). 1659 A Building Management Server (BMS) performs the following operations. 1661 o Collect and store device states from LCs at regular intervals. 1663 o Send control values to LCs according to a building schedule. 1665 o Send an alarm signal to operators if it detects abnormal device 1666 states. 1668 The BMS and HMI communicate with LCs via IP-based "management 1669 protocols" (see standards [bacnetip], [knx]). 1671 An LC is typically a Programmable Logic Controller (PLC) which is 1672 connected to several tens or hundreds of devices using "field 1673 protocols". An LC performs the following kinds of operations: 1675 o Measure device states and provide the information to BMS or HMI.
1677 o Send control values to devices, unilaterally or as part of a 1678 feedback control loop. 1680 There are many field protocols used today; some are standards-based 1681 and others are proprietary (see standards [lontalk], [modbus], 1682 [profibus] and [flnet]). The result is that BASs have multiple MAC/ 1683 PHY modules and interfaces. This makes BASs more expensive, slower 1684 to develop, and can result in "vendor lock-in" with multiple types of 1685 management applications. 1687 4.2.2. BAS Deployment Model 1689 An example BAS for medium or large buildings is shown in Figure 5. 1690 The physical layout spans multiple floors, and there is a monitoring 1691 room where the BAS management entities are located. Each floor will 1692 have one or more LCs depending upon the number of devices connected 1693 to the field network. 1695 +--------------------------------------------------+ 1696 | Floor 3 | 1697 | +----LC~~~~+~~~~~+~~~~~+ | 1698 | | | | | | 1699 | | Dev Dev Dev | 1700 | | | 1701 |--- | ------------------------------------------| 1702 | | Floor 2 | 1703 | +----LC~~~~+~~~~~+~~~~~+ Field Network | 1704 | | | | | | 1705 | | Dev Dev Dev | 1706 | | | 1707 |--- | ------------------------------------------| 1708 | | Floor 1 | 1709 | +----LC~~~~+~~~~~+~~~~~+ +-----------------| 1710 | | | | | | Monitoring Room | 1711 | | Dev Dev Dev | | 1712 | | | BMS HMI | 1713 | | Management Network | | | | 1714 | +--------------------------------+-----+ | 1715 | | | 1716 +--------------------------------------------------+ 1718 Figure 5: BAS Deployment model for Medium/Large Buildings 1720 Each LC is connected to the monitoring room via the Management 1721 network, and the management functions are performed within the 1722 building. In most cases, fast Ethernet (e.g. 100BASE-T) is used for 1723 the management network. Since the management network is non- 1724 realtime, use of Ethernet without quality of service is sufficient 1725 for today's deployment. 
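The BMS operations described in the architecture above (periodic collection of device states from LCs, storage, and alarming on abnormal states) can be sketched as a simple polling loop; `LocalController`, `read_states`, and the alarm predicate are hypothetical stand-ins for whatever vendor-specific management protocol is in use:

```python
import time

class LocalController:
    """Stand-in for an LC reachable over a management protocol."""
    def __init__(self, name, devices):
        self.name, self.devices = name, devices

    def read_states(self):
        # e.g. {"temp-1": 22.5, "door-3": "closed", ...}
        return dict(self.devices)

    def send_control(self, device, value):
        self.devices[device] = value

def bms_poll_cycle(lcs, history, alarm_if, on_alarm):
    """One BMS cycle: collect and store states, raise alarms."""
    for lc in lcs:
        states = lc.read_states()
        # Store the measurement with a timestamp (the "store" function).
        history.setdefault(lc.name, []).append((time.time(), states))
        # Alarm on abnormal device states (the "alarm" function).
        for dev, value in states.items():
            if alarm_if(dev, value):
                on_alarm(lc.name, dev, value)

lc = LocalController("floor1-lc", {"temp-1": 22.5, "temp-2": 48.0})
alarms = []
bms_poll_cycle([lc], history={},
               alarm_if=lambda dev, v: v > 40.0,
               on_alarm=lambda *a: alarms.append(a))
```

In a real deployment this cycle would be driven at the measurement interval discussed in Section 4.2.3 and the LC calls would go over the management protocol; the sketch only shows the control flow.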
1727 In the field network a variety of physical interfaces such as RS232C 1728 and RS485 are used, which have specific timing requirements. Thus if 1729 a field network is to be replaced with an Ethernet or wireless 1730 network, such networks must support time-critical deterministic 1731 flows. 1733 In Figure 6, another deployment model is presented in which the 1734 management system is hosted remotely. This is becoming popular for 1735 small office and residential buildings in which a standalone 1736 monitoring system is not cost-effective. 1738 +---------------+ 1739 | Remote Center | 1740 | | 1741 | BMS HMI | 1742 +------------------------------------+ | | | | 1743 | Floor 2 | | +---+---+ | 1744 | +----LC~~~~+~~~~~+ Field Network| | | | 1745 | | | | | | Router | 1746 | | Dev Dev | +-------|-------+ 1747 | | | | 1748 |--- | ------------------------------| | 1749 | | Floor 1 | | 1750 | +----LC~~~~+~~~~~+ | | 1751 | | | | | | 1752 | | Dev Dev | | 1753 | | | | 1754 | | Management Network | WAN | 1755 | +------------------------Router-------------+ 1756 | | 1757 +------------------------------------+ 1759 Figure 6: Deployment model for Small Buildings 1761 Some interoperability is possible today in the Management Network, 1762 but not in today's field networks due to their non-IP-based design. 1764 4.2.3. Use Cases for Field Networks 1766 Below are use cases for Environmental Monitoring, Fire Detection, and 1767 Feedback Control, and their implications for field network 1768 performance. 1770 4.2.3.1. Environmental Monitoring 1772 The BMS polls each LC at a maximum measurement interval of 100ms (for 1773 example to draw a historical chart of 1 second granularity with a 10x 1774 sampling interval) and then performs the operations as specified by 1775 the operator. Each LC needs to measure each of its several hundred 1776 sensors once per measurement interval. 
Latency is not critical in 1777 this scenario as long as all sensor values are completed in the 1778 measurement interval. Availability is expected to be 99.999 %. 1780 4.2.3.2. Fire Detection 1782 On detection of a fire, the BMS must stop the HVAC, close the fire 1783 shutters, turn on the fire sprinklers, send an alarm, etc. There are 1784 typically ~10s of sensors per LC that the BMS needs to manage. In this 1785 scenario the measurement interval is 10-50ms, the communication delay 1786 is 10ms, and the availability must be 99.9999 %. 1788 4.2.3.3. Feedback Control 1790 BAS systems utilize feedback control in various ways; the most time- 1791 critical is control of DC motors, which require a short feedback 1792 interval (1-5ms) with low communication delay (10ms) and jitter 1793 (1ms). The feedback interval depends on the characteristics of the 1794 device and a target quality of control value. There are typically 1795 ~10s of such devices per LC. 1797 Communication delay is expected to be less than 10 ms, jitter less 1798 than 1 ms, while the availability must be 99.9999%. 1800 4.2.4. Security Considerations 1802 When BAS field networks were developed it was assumed that the field 1803 networks would always be physically isolated from external networks 1804 and therefore security was not a concern. In today's world many BASs 1805 are managed remotely and are thus connected to shared IP networks and 1806 so security is definitely a concern, yet security features are not 1807 available in the majority of BAS field network deployments. 1809 The management network, being an IP-based network, has the protocols 1810 available to enable network security, but in practice many BAS 1811 systems do not implement even the available security features such as 1812 device authentication or encryption for data in transit. 1814 4.3.
BAS Future 1816 In the future we expect more fine-grained environmental monitoring 1817 and lower energy consumption, which will require more sensors and 1818 devices, thus requiring larger and more complex building networks. 1820 We expect building networks to be connected to or converged with 1821 other networks (Enterprise network, Home network, and Internet). 1823 Therefore better facilities for network management, control, 1824 reliability and security are critical in order to improve resident 1825 and operator convenience and comfort. For example, the ability to 1826 monitor and control building devices via the Internet would enable 1827 control of room lights or HVAC from a resident's 1828 desktop PC or phone application. 1830 4.4. BAS Asks 1832 The community would like to see an interoperable protocol 1833 specification that can satisfy the timing, security, availability and 1834 QoS constraints described above, such that the resulting converged 1835 network can replace the disparate field networks. Ideally this 1836 connectivity could extend to the open Internet. 1838 This would imply an architecture that can guarantee 1840 o Low communication delays (from <10ms to 100ms in a network of 1841 several hundred devices) 1843 o Low jitter (< 1 ms) 1845 o Tight feedback intervals (1ms - 10ms) 1847 o High network availability (up to 99.9999%) 1849 o Availability of network data in disaster scenarios 1851 o Authentication between management and field devices (both local 1852 and remote) 1854 o Integrity and data origin authentication of communication data 1855 between field and management devices 1857 o Confidentiality of data when communicated to a remote device 1859 5. Wireless for Industrial 1861 5.1. Use Case Description 1863 Wireless networks are useful for industrial applications, for example 1864 when portable, fast-moving or rotating objects are involved, and for 1865 the resource-constrained devices found in the Internet of Things 1866 (IoT).
1868 Such network-connected sensors, actuators, control loops (etc.) 1869 typically require that the underlying network support real-time 1870 quality of service (QoS), as well as specific classes of other 1871 network properties such as reliability, redundancy, and security. 1873 These networks may also contain very large numbers of devices, for 1874 example for factories, "big data" acquisition, and the IoT. Given 1875 the large numbers of devices installed, and the potential 1876 pervasiveness of the IoT, this is a huge and very cost-sensitive 1877 market. For example, a 1% cost reduction in some areas could save 1878 $100B. 1880 5.1.1. Network Convergence using 6TiSCH 1882 Some wireless network technologies support real-time QoS, and are 1883 thus useful for these kinds of networks, but others do not. For 1884 example WiFi is pervasive but does not provide guaranteed timing or 1885 delivery of packets, and thus is not useful in this context. 1887 In this use case we focus on one specific wireless network technology 1888 which does provide the required deterministic QoS, which is "IPv6 1889 over the TSCH mode of IEEE 802.15.4e" (6TiSCH, where TSCH stands for 1890 "Time-Slotted Channel Hopping", see [I-D.ietf-6tisch-architecture], 1891 [IEEE802154], [IEEE802154e], and [RFC7554]). 1893 There are other deterministic wireless busses and networks available 1894 today, however they are incompatible with each other, and 1895 incompatible with IP traffic (for example [ISA100], [WirelessHART]). 1897 Thus the primary goal of this use case is to apply 6TiSCH as a 1898 converged IP- and standards-based wireless network for industrial 1899 applications, i.e. to replace multiple proprietary and/or 1900 incompatible wireless networking and wireless network management 1901 standards. 1903 5.1.2.
Common Protocol Development for 6TiSCH 1905 Today there are a number of protocols required by 6TiSCH which are 1906 still in development, and a second intent of this use case is to 1907 highlight the ways in which these "missing" protocols share goals in 1908 common with DetNet. Thus it is possible that some of the protocol 1909 technology developed for DetNet will also be applicable to 6TiSCH. 1911 These protocol goals are identified here, along with their 1912 relationship to DetNet. It is likely that ultimately the resulting 1913 protocols will not be identical, but will share design principles 1914 which contribute to the efficiency of enabling both DetNet and 6TiSCH. 1916 One such commonality is that, although at a different time scale, in 1917 both TSN [IEEE802.1TSNTG] and TSCH a packet crossing the network from 1918 node to node follows a precise schedule, like a train that leaves 1919 intermediate stations at precise times along its path. This kind of 1920 operation reduces collisions, saves energy, and enables engineering 1921 the network for deterministic properties. 1923 Another commonality is remote monitoring and scheduling management of 1924 a TSCH network by a Path Computation Element (PCE) and Network 1925 Management Entity (NME). The PCE/NME manage timeslots and device 1926 resources in a manner that minimizes the interaction with and the 1927 load placed on resource-constrained devices. For example, a tiny IoT 1928 device may have just enough buffers to store one or a few IPv6 1929 packets, and will have limited bandwidth between peers such that it 1930 can maintain only a small amount of peer information, and will not be 1931 able to store many packets waiting to be forwarded. It is 1932 advantageous then for it to only be required to carry out the 1933 specific behavior assigned to it by the PCE/NME (as opposed to 1934 maintaining its own IP stack, for example).
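The train-schedule behavior described above can be pictured as a slotframe matrix in which a PCE/NME assigns each hop of a path its own (timeslot, channel offset) cell, in increasing timeslot order. The following toy model uses invented names and values, not the actual 6TiSCH data model:

```python
# A slotframe is a repeating matrix of timeslots x channel offsets.
SLOTFRAME_LEN = 5   # timeslots per slotframe (toy value)
NUM_CHANNELS = 4    # available channel offsets (toy value)

def build_schedule(hops):
    """Assign each hop of a path a dedicated (slot, channel) cell,
    in increasing timeslot order, so a packet moves hop by hop like
    a train leaving stations at fixed times."""
    assert len(hops) <= SLOTFRAME_LEN, "path too long for one slotframe"
    schedule = {}
    for slot, (tx, rx) in enumerate(hops):
        channel = slot % NUM_CHANNELS  # simplistic channel-hopping rule
        schedule[(slot, channel)] = (tx, rx)
    return schedule

path = [("sensor", "relay-1"),
        ("relay-1", "relay-2"),
        ("relay-2", "border-router")]
sched = build_schedule(path)
```

Because each node only needs to know its own cells (when to transmit, when to listen), a constrained device can follow the PCE/NME-assigned schedule without maintaining global path state, which is the property the paragraph above emphasizes.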
1936 Note: Current WG discussion indicates that some peer-to-peer 1937 communication must be assumed, i.e. the PCE may communicate only 1938 indirectly with any given device, enabling hierarchical configuration 1939 of the system. 1941 6TiSCH depends on [PCE] and [I-D.finn-detnet-architecture]. 1943 6TiSCH also depends on the fact that DetNet will maintain consistency 1944 with [IEEE802.1TSNTG]. 1946 5.2. Wireless Industrial Today 1948 Today industrial wireless is accomplished using multiple 1949 deterministic wireless networks which are incompatible with each 1950 other and with IP traffic. 1952 6TiSCH is not yet fully specified, so it cannot be used in today's 1953 applications. 1955 5.3. Wireless Industrial Future 1957 5.3.1. Unified Wireless Network and Management 1959 We expect DetNet and 6TiSCH together to enable converged transport of 1960 deterministic and best-effort traffic flows between real-time 1961 industrial devices and wide area networks via IP routing. A high 1962 level view of a basic such network is shown in Figure 7. 1964 ---+-------- ............ ------------ 1965 | External Network | 1966 | +-----+ 1967 +-----+ | NME | 1968 | | LLN Border | | 1969 | | router +-----+ 1970 +-----+ 1971 o o o 1972 o o o o 1973 o o LLN o o o 1974 o o o o 1975 o 1977 Figure 7: Basic 6TiSCH Network 1979 Figure 8 shows a backbone router federating multiple synchronized 1980 6TiSCH subnets into a single subnet connected to the external 1981 network. 1983 ---+-------- ............ 
------------ 1984 | External Network | 1985 | +-----+ 1986 | +-----+ | NME | 1987 +-----+ | +-----+ | | 1988 | | Router | | PCE | +-----+ 1989 | | +--| | 1990 +-----+ +-----+ 1991 | | 1992 | Subnet Backbone | 1993 +--------------------+------------------+ 1994 | | | 1995 +-----+ +-----+ +-----+ 1996 | | Backbone | | Backbone | | Backbone 1997 o | | router | | router | | router 1998 +-----+ +-----+ +-----+ 1999 o o o o o 2000 o o o o o o o o o o o 2001 o o o LLN o o o o 2002 o o o o o o o o o o o o 2004 Figure 8: Extended 6TiSCH Network 2006 The backbone router must ensure end-to-end deterministic behavior 2007 between the LLN and the backbone. We would like to see this 2008 accomplished in conformance with the work done in 2009 [I-D.finn-detnet-architecture] with respect to Layer-3 aspects of 2010 deterministic networks that span multiple Layer-2 domains. 2012 The PCE must compute a deterministic path end-to-end across the TSCH 2013 network and IEEE802.1 TSN Ethernet backbone, and DetNet protocols are 2014 expected to enable end-to-end deterministic forwarding. 2016 +-----+ 2017 | IoT | 2018 | G/W | 2019 +-----+ 2020 ^ <---- Elimination 2021 | | 2022 Track branch | | 2023 +-------+ +--------+ Subnet Backbone 2024 | | 2025 +--|--+ +--|--+ 2026 | | | Backbone | | | Backbone 2027 o | | | router | | | router 2028 +--/--+ +--|--+ 2029 o / o o---o----/ o 2030 o o---o--/ o o o o o 2031 o \ / o o LLN o 2032 o v <---- Replication 2033 o 2035 Figure 9: 6TiSCH Network with PRE 2037 5.3.1.1. PCE and 6TiSCH ARQ Retries 2039 Note: The possible use of ARQ techniques in DetNet is currently 2040 considered a possible design alternative. 2042 6TiSCH uses the IEEE802.15.4 Automatic Repeat-reQuest (ARQ) mechanism 2043 to provide higher reliability of packet delivery. 
ARQ is related to 2044 packet replication and elimination because there are two independent 2045 paths for packets to arrive at the destination, and if an expected 2046 packet does not arrive on one path then it checks for the packet on 2047 the second path. 2049 Although to date this mechanism is only used by wireless networks, 2050 this may be a technique that would be appropriate for DetNet and so 2051 aspects of the enabling protocol could be co-developed. 2053 For example, in Figure 9, a Track is laid out from a field device in 2054 a 6TiSCH network to an IoT gateway that is located on an IEEE802.1 TSN 2055 backbone. 2057 In ARQ the Replication function in the field device sends a copy of 2058 each packet over two different branches, and the PCE schedules each 2059 hop of both branches so that the two copies arrive in due time at the 2060 gateway. In case of a loss on one branch, hopefully the other copy 2061 of the packet still arrives within the allocated time. If two copies 2062 make it to the IoT gateway, the Elimination function in the gateway 2063 ignores the extra packet and presents only one copy to upper layers. 2065 At each 6TiSCH hop along the Track, the PCE may schedule more than 2066 one timeSlot for a packet, so as to support Layer-2 retries (ARQ). 2068 In current deployments, a TSCH Track does not necessarily support PRE 2069 but is systematically multi-path. This means that a Track is 2070 scheduled so as to ensure that each hop has at least two forwarding 2071 solutions, and the forwarding decision is to try the preferred one 2072 and use the other in case of Layer-2 transmission failure as detected 2073 by ARQ. 2075 5.3.2. Schedule Management by a PCE 2077 A common feature of 6TiSCH and DetNet is the action of a PCE to 2078 configure paths through the network.
Specifically, what is needed is 2079 a protocol and data model that the PCE will use to get/set the 2080 relevant configuration from/to the devices, as well as perform 2081 operations on the devices. We expect that this protocol will be 2082 developed by DetNet with consideration for its reuse by 6TiSCH. The 2083 remainder of this section provides a bit more context from the 6TiSCH 2084 side. 2086 5.3.2.1. PCE Commands and 6TiSCH CoAP Requests 2088 The 6TiSCH device does not expect to place the request for bandwidth 2089 between itself and another device in the network. Rather, an 2090 operation control system invoked through a human interface specifies 2091 the required traffic specification (in terms of latency and 2092 reliability) and the end nodes. Based on this information, the PCE must 2093 compute a path between the end nodes and provision the network with 2094 per-flow state that describes the per-hop operation for a given 2095 packet, the corresponding timeslots, and the flow identification that 2096 enables recognizing that a certain packet belongs to a certain path, 2097 etc. 2099 For a static configuration that serves a certain purpose for a long 2100 period of time, it is expected that a node will be provisioned in one 2101 shot with a full schedule, which incorporates the aggregation of its 2102 behavior for multiple paths. 6TiSCH expects that the programming of 2103 the schedule will be done over CoAP as discussed in 2104 [I-D.ietf-6tisch-coap]. 2106 6TiSCH expects that the PCE commands will be mapped back and forth 2107 into CoAP by a gateway function at the edge of the 6TiSCH network. 2108 For instance, it is possible that a mapping entity on the backbone 2109 transforms a non-CoAP protocol such as PCEP into the RESTful 2110 interfaces that the 6TiSCH devices support. This architecture will 2111 be refined to comply with DetNet [I-D.finn-detnet-architecture] when 2112 the work is formalized.
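The "one-shot" provisioning described above amounts to the PCE writing, per node, a single resource that aggregates the cells of every flow traversing that node. The structure below is a hypothetical sketch of such a per-node schedule document (field names, flow identifiers, and the slotframe handle are all invented; this is not the 6top/CoAP data model):

```python
import json

def one_shot_schedule(node_id, flows):
    """Aggregate the per-hop cells of several flows into one schedule
    document that could be written to a node in a single request."""
    cells = []
    for flow in flows:
        for cell in flow["cells"]:
            # Keep the flow identification with each cell so the node
            # can recognize which packet belongs to which path.
            cells.append({"flow_id": flow["flow_id"], **cell})
    return {"node": node_id,
            "slotframe": 101,  # invented slotframe handle
            "cells": sorted(cells, key=lambda c: c["slot"])}

flows = [
    {"flow_id": "press-sensor-A",
     "cells": [{"slot": 3, "channel": 1, "op": "TX"}]},
    {"flow_id": "valve-cmd-B",
     "cells": [{"slot": 0, "channel": 2, "op": "RX"}]},
]
doc = one_shot_schedule("node-17", flows)
payload = json.dumps(doc)  # body of a single (e.g. CoAP PUT) request
```

The point of the aggregation is that the constrained node receives one consistent document covering all of its paths, rather than negotiating bandwidth per flow itself.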
   Related information about 6TiSCH can be found at
   [I-D.ietf-6tisch-6top-interface] and RPL [RFC6550].

   A protocol may be used to update the state in the devices during
   runtime, for example if it appears that a path through the network
   has ceased to perform as expected, but in 6TiSCH that flow was not
   designed and no protocol was selected.  We would like to see DetNet
   define the appropriate end-to-end protocols to be used in that case.
   The implication is that these state updates take place once the
   system is configured and running, i.e. they are not limited to the
   initial communication of the configuration of the system.

   A "slotFrame" is the base object that a PCE would manipulate to
   program a schedule into an LLN node
   ([I-D.ietf-6tisch-architecture]).

   We would like to see the PCE read energy data from devices, and
   compute paths that will implement policies on how energy in devices
   is consumed, for instance to ensure that the spent energy does not
   exceed the available energy over a period of time.  Note: this
   statement implies that an extensible protocol for communicating
   device info to the PCE and enabling the PCE to act on it will be
   part of the DetNet architecture; however, for subnets with specific
   protocols (e.g. CoAP) a gateway may be required.

   6TiSCH devices can discover their neighbors over the radio using a
   mechanism such as beacons, but even though the neighbor information
   is available in the 6TiSCH interface data model, 6TiSCH does not
   describe a protocol to proactively push the neighborhood information
   to a PCE.  We would like to see DetNet define such a protocol; one
   possible design alternative is that it could operate over CoAP, or
   alternatively it could be converted to/from CoAP by a gateway.
   We would like to see such a protocol carry multiple metrics, for
   example similar to those used for RPL operations [RFC6551].

5.3.2.2.  6TiSCH IP Interface

   "6top" ([I-D.wang-6tisch-6top-sublayer]) is a logical link control
   sitting between the IP layer and the TSCH MAC layer which provides
   the link abstraction that is required for IP operations.  The 6top
   data model and management interfaces are further discussed in
   [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].

   An IP packet that is sent along a 6TiSCH path uses the
   Differentiated Services Per-Hop-Behavior Group called Deterministic
   Forwarding, as described in
   [I-D.svshah-tsvwg-deterministic-forwarding].

5.3.3.  6TiSCH Security Considerations

   On top of the classical requirements for protection of control
   signaling, it must be noted that 6TiSCH networks operate on limited
   resources that can be depleted rapidly in a DoS attack on the
   system, for instance by placing a rogue device in the network, or by
   obtaining management control and setting up unexpected additional
   paths.

5.4.  Wireless Industrial Asks

   6TiSCH depends on DetNet to define:

   o  Configuration (state) and operations for deterministic paths

   o  End-to-end protocols for deterministic forwarding (tagging, IP)

   o  Protocol for packet replication and elimination

6.  Cellular Radio

6.1.  Use Case Description

   This use case describes the application of deterministic networking
   in the context of cellular telecom transport networks.  Important
   elements include time synchronization, clock distribution, and ways
   of establishing time-sensitive streams for both Layer-2 and Layer-3
   user plane traffic.

6.1.1.  Network Architecture

   Figure 10 illustrates a typical 3GPP-defined cellular network
   architecture, which includes "Fronthaul" and "Midhaul" network
   segments.
   The "Fronthaul" is the network connecting base stations (baseband
   processing units) to the remote radio heads (antennas).  The
   "Midhaul" is the network inter-connecting base stations (or small
   cell sites).

   In Figure 10 "eNB" ("E-UTRAN Node B") is the hardware that is
   connected to the mobile phone network and that communicates directly
   with mobile handsets ([TS36300]).

      Y (remote radio heads (antennas))
       \
       Y__  \.--.                 .--.        +------+
          \_(    `.   +---+    _(Back`.       | 3GPP |
      Y------( Front )----|eNB|----( Haul )----| core |
           (   ` .Haul )  +---+  (   ` .  ) ) | netw |
           /`--(___.-'  \        `--(___.-'   +------+
       Y_/    /          \.--.      \
       Y_/             _( Mid`.      \
                      (  Haul )       \
                     (   ` .  ) )      \
                      `--(___.-'\_____+---+   (small cell sites)
                             \        |SCe|__Y
                          +---+       +---+
                      Y__|eNB|__Y
                         +---+
                       Y_/    \_Y ("local" radios)

        Figure 10: Generic 3GPP-based Cellular Network Architecture

6.1.2.  Delay Constraints

   The available processing time for Fronthaul networking overhead is
   limited to the available time after the baseband processing of the
   radio frame has completed.  For example, in Long Term Evolution
   (LTE) radio, processing of a radio frame is allocated 3ms, but
   typically the processing uses most of it, allowing only a small
   fraction to be used by the Fronthaul network (e.g. up to 250us
   one-way delay, though the existing spec ([NGMN-fronth]) supports
   delay only up to 100us).  This ultimately determines the distance
   the remote radio heads can be located from the base stations (e.g.,
   100us equals roughly 20 km of optical fiber-based transport).
   Allocation options of the available time budget between processing
   and transport are under heavy discussion in the mobile industry.

   For packet-based transport the allocated transport time (e.g.
   CPRI would allow for 100us delay [CPRI]) is consumed by all nodes
   and buffering between the remote radio head and the baseband
   processing unit, plus the distance-incurred delay.

   The baseband processing time and the available "delay budget" for
   the fronthaul is likely to change in the forthcoming "5G" due to
   reduced radio round trip times and other architectural and service
   requirements [NGMN].

   [METIS] documents the fundamental challenges as well as overall
   technical goals of the future 5G mobile and wireless system as the
   starting point.  These future systems should support much higher
   data volumes and rates and significantly lower end-to-end latency
   for 100x more connected devices (at similar cost and energy
   consumption levels as today's system).

   For Midhaul connections, delay constraints are driven by Inter-Site
   radio functions like Coordinated Multipoint Processing (CoMP, see
   [CoMP]).  CoMP reception and transmission is a framework in which
   multiple geographically distributed antenna nodes cooperate to
   improve the performance of the users served in the common
   cooperation area.  The design principle of CoMP is to extend the
   current single-cell to multi-UE (User Equipment) transmission to a
   multi-cell-to-multi-UE transmission by base station cooperation.

   CoMP has delay-sensitive performance parameters, which are "midhaul
   latency" and "CSI (Channel State Information) reporting and
   accuracy".  The essential feature of CoMP is signaling between eNBs,
   so Midhaul latency is the dominating limitation of CoMP performance.
   Generally, CoMP can benefit from coordinated scheduling (either
   distributed or centralized) of different cells if the signaling
   delay between eNBs is within 1-10ms.  This delay requirement is both
   rigid and absolute because any uncertainty in delay will degrade the
   performance significantly.
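   As a sanity check on the distance figures above (100us of one-way
   delay corresponding to roughly 20 km of fiber), the propagation
   delay of light in optical fiber can be estimated from the group
   index of standard single-mode fiber; the value of about 1.47 used
   below is a typical assumption, giving roughly 5us of delay per km:

```python
# Quick check of the delay-vs-distance figures quoted above.  Light in
# fiber travels at roughly c divided by the fiber's group index
# (~1.47 assumed here for standard single-mode fiber), i.e. about
# 5 microseconds of one-way delay per kilometer.

C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
GROUP_INDEX = 1.47         # assumed typical value for single-mode fiber

def max_fiber_km(one_way_budget_us: float) -> float:
    """Fiber length whose one-way propagation delay fills the budget."""
    v = C_KM_PER_S / GROUP_INDEX            # propagation speed in fiber
    return v * (one_way_budget_us * 1e-6)   # km reachable in the budget

print(round(max_fiber_km(100), 1))   # ~20 km, matching the text
```

   The same arithmetic shows why the 1-10ms CoMP signaling budget is
   far less constrained by distance than by queuing and processing
   delay in intermediate nodes.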
   Inter-site CoMP is one of the key requirements for 5G and is also a
   near-term goal for the current 4.5G network architecture.

6.1.3.  Time Synchronization Constraints

   Fronthaul time synchronization requirements are given by [TS25104],
   [TS36104], [TS36211], and [TS36133].  These can be summarized for
   the current 3GPP LTE-based networks as:

   Delay Accuracy:
      +-8ns (i.e. +-1/32 Tc, where Tc is the UMTS Chip time of 1/3.84
      MHz) resulting in a round trip accuracy of +-16ns.  The value is
      this low to meet the 3GPP Timing Alignment Error (TAE)
      measurement requirements.  Note: performance guarantees of low
      nanosecond values such as these are considered to be below the
      DetNet layer - it is assumed that the underlying implementation,
      e.g. the hardware, will provide sufficient support (e.g.
      buffering) to enable this level of accuracy.  These values are
      maintained in the use case to give an indication of the overall
      application.

   Timing Alignment Error:
      Timing Alignment Error (TAE) is problematic to Fronthaul networks
      and must be minimized.  If the transport network cannot guarantee
      low enough TAE then additional buffering has to be introduced at
      the edges of the network to buffer out the jitter.  Buffering is
      not desirable as it reduces the total available delay budget.
      Packet Delay Variation (PDV) requirements can be derived from TAE
      for packet based Fronthaul networks.

      *  For multiple input multiple output (MIMO) or TX diversity
         transmissions, at each carrier frequency, TAE shall not exceed
         65 ns (i.e. 1/4 Tc).

      *  For intra-band contiguous carrier aggregation, with or without
         MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2
         Tc).

      *  For intra-band non-contiguous carrier aggregation, with or
         without MIMO or TX diversity, TAE shall not exceed 260 ns
         (i.e. one Tc).
      *  For inter-band carrier aggregation, with or without MIMO or TX
         diversity, TAE shall not exceed 260 ns.

   Transport link contribution to radio frequency error:
      +-2 PPB.  This value is considered to be "available" for the
      Fronthaul link out of the total 50 PPB budget reserved for the
      radio interface.  Note: the reason that the transport link
      contributes to radio frequency error is as follows.  The current
      way of doing Fronthaul is from the radio unit to the remote radio
      head directly.  The remote radio head is essentially a passive
      device (without buffering etc.).  The transport drives the
      antenna directly by feeding it with samples, and everything the
      transport adds will be introduced to the radio as-is.  So if the
      transport causes additional frequency error, that error shows up
      immediately on the radio as well.  Note: performance guarantees
      of low nanosecond values such as these are considered to be below
      the DetNet layer - it is assumed that the underlying
      implementation, e.g. the hardware, will provide sufficient
      support to enable this level of performance.  These values are
      maintained in the use case to give an indication of the overall
      application.

   The above listed time synchronization requirements are difficult to
   meet with point-to-point connected networks, and more difficult when
   the network includes multiple hops.  It is expected that networks
   must include buffering at the ends of the connections as imposed by
   the jitter requirements, since trying to meet the jitter
   requirements in every intermediate node is likely to be too costly.
   However, every measure to reduce jitter and delay on the path makes
   it easier to meet the end-to-end requirements.
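   The TAE limits listed above are simple fractions of the UMTS chip
   time Tc = 1/(3.84 MHz), roughly 260 ns; the quoted values can be
   reproduced directly:

```python
# The TAE limits quoted above are fractions of the UMTS chip time
# Tc = 1/(3.84 MHz) ~= 260 ns.  This reproduces the per-case limits.

TC_NS = 1e9 / 3.84e6   # chip time in nanoseconds, ~260.4 ns

tae_limits_ns = {
    "MIMO / TX diversity":          TC_NS / 4,   # ~65 ns
    "intra-band contiguous CA":     TC_NS / 2,   # ~130 ns
    "intra-band non-contiguous CA": TC_NS,       # ~260 ns
    "inter-band CA":                TC_NS,       # ~260 ns
}

for case, limit in tae_limits_ns.items():
    print(f"{case}: {limit:.0f} ns")
```

   Any PDV budget derived from these TAE figures must therefore also be
   expressed in tens of nanoseconds, which is what pushes the problem
   below the DetNet layer into hardware support.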
   In order to meet the timing requirements both senders and receivers
   must remain time synchronized, demanding very accurate clock
   distribution, for example support for IEEE 1588 transparent clocks
   in every intermediate node.

   In cellular networks from the LTE radio era onward, phase
   synchronization is needed in addition to frequency synchronization
   ([TS36300], [TS23401]).

6.1.4.  Transport Loss Constraints

   Fronthaul and Midhaul networks assume almost error-free transport.
   Errors can result in a reset of the radio interfaces, which can
   cause reduced throughput or broken radio connectivity for mobile
   customers.

   For packetized Fronthaul and Midhaul connections packet loss may be
   caused by BER, congestion, or network failure scenarios.  Current
   tools for eliminating packet loss for Fronthaul and Midhaul networks
   face serious challenges; for example, retransmitting lost packets
   and/or using forward error correction (FEC) to circumvent bit errors
   is practically impossible due to the additional delay incurred.
   Using redundant streams for better delivery guarantees is also
   practically impossible in many cases due to the high bandwidth
   requirements of Fronthaul and Midhaul networks.  Protection
   switching is also a candidate, but current technologies for the path
   switch are too slow to avoid reset of mobile interfaces.

   Fronthaul links are assumed to be symmetric, and all Fronthaul
   streams (i.e. those carrying radio data) have equal priority and
   cannot delay or pre-empt each other.  This implies that the network
   must guarantee that each time-sensitive flow meets its schedule.

6.1.5.  Security Considerations

   Establishing time-sensitive streams in the network entails reserving
   networking resources for long periods of time.
   It is important that these reservation requests be authenticated to
   prevent malicious reservation attempts from hostile nodes (or
   accidental misconfiguration).  This is particularly important in the
   case where the reservation requests span administrative domains.
   Furthermore, the reservation information itself should be digitally
   signed to reduce the risk of a legitimate node pushing a stale or
   hostile configuration into another networking node.

   Note: This is considered important for the security policy of the
   network, but does not affect the core DetNet architecture and
   design.

6.2.  Cellular Radio Networks Today

6.2.1.  Fronthaul

   Today's Fronthaul networks typically consist of:

   o  Dedicated point-to-point fiber connections (the common case)

   o  Proprietary protocols and framings

   o  Custom equipment and no real networking

   Current solutions for Fronthaul are direct optical cables or
   Wavelength-Division Multiplexing (WDM) connections.

6.2.2.  Midhaul and Backhaul

   Today's Midhaul and Backhaul networks typically consist of:

   o  Mostly normal IP networks, MPLS-TP, etc.

   o  Clock distribution and sync using IEEE 1588 and SyncE

   Telecommunication networks in the Mid- and Backhaul are already
   heading towards transport networks where precise time
   synchronization support is one of the basic building blocks.  While
   the transport networks themselves have practically transitioned to
   all-IP packet-based networks to meet the bandwidth and cost
   requirements, highly accurate clock distribution has become a
   challenge.

   In the past, Mid- and Backhaul connections were typically based on
   Time Division Multiplexing (TDM-based) and provided frequency
   synchronization capabilities as a part of the transport media.
   Alternatively, other technologies such as Global Positioning System
   (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].
   Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985]
   for legacy transport support) have become popular tools to build and
   manage new all-IP Radio Access Networks (RANs)
   [I-D.kh-spring-ip-ran-use-case].  Although various timing and
   synchronization optimizations have already been proposed and
   implemented, including 1588 PTP enhancements
   [I-D.ietf-tictoc-1588overmpls] and [I-D.ietf-mpls-residence-time],
   these solutions are not necessarily sufficient for the forthcoming
   RAN architectures, nor do they guarantee the more stringent time-
   synchronization requirements such as [CPRI].

   There are also existing solutions for TDM over IP such as [RFC5087]
   and [RFC4553], as well as TDM over Ethernet transports such as
   [RFC5086].

6.3.  Cellular Radio Networks Future

   Future Cellular Radio Networks will be based on a mix of different
   xHaul networks (xHaul = front-, mid- and backhaul), and future
   transport networks should be able to support all of them
   simultaneously.  It is already envisioned today that:

   o  Not all "cellular radio network" traffic will be IP; for example,
      some will remain at Layer 2 (e.g. Ethernet based).  DetNet
      solutions must address all traffic types (Layer 2, Layer 3) with
      the same tools and allow their transport simultaneously.

   o  All forms of xHaul networks will need some form of DetNet
      solutions.  For example, with the advent of 5G some Backhaul
      traffic will also have DetNet requirements (e.g. traffic
      belonging to time-critical 5G applications).
   We would like to see the following in future Cellular Radio
   networks:

   o  Unified standards-based transport protocols and standard
      networking equipment that can make use of underlying
      deterministic link-layer services

   o  Unified and standards-based network management systems and
      protocols in all parts of the network (including Fronthaul)

   New radio access network deployment models and architectures may
   require time-sensitive networking services with strict requirements
   on other parts of the network that previously were not considered to
   be packetized at all.  Time and synchronization support are already
   topical for Backhaul and Midhaul packet networks [MEF] and are
   becoming a real issue for Fronthaul networks as well.  Specifically,
   in Fronthaul networks the timing and synchronization requirements
   can be extreme for packet-based technologies, for example on the
   order of sub +-20 ns packet delay variation (PDV) and frequency
   accuracy of +0.002 PPM [Fronthaul].

   The actual transport protocols and/or solutions to establish the
   required transport "circuits" (pinned-down paths) for Fronthaul
   traffic are still undefined.  Those are likely to include (but are
   not limited to) solutions directly over Ethernet, over IP, and using
   MPLS/PseudoWire transport.

   Even the current time-sensitive networking features may not be
   sufficient for Fronthaul traffic.  Therefore, having specific
   profiles that take the requirements of Fronthaul into account is
   desirable [IEEE8021CM].

   Interesting and important work for time-sensitive networking has
   been done for Ethernet [TSNTG], which specifies the use of the IEEE
   1588 time precision protocol (PTP) [IEEE1588] in the context of IEEE
   802.1D and IEEE 802.1Q.
   [IEEE8021AS] specifies a Layer 2 time synchronizing service, and
   other specifications such as IEEE 1722 [IEEE1722] specify Ethernet-
   based Layer-2 transport for time-sensitive streams.

   New promising work seeks to enable the transport of time-sensitive
   fronthaul streams in Ethernet bridged networks [IEEE8021CM].
   Analogous to IEEE 1722 there is an ongoing standardization effort to
   define the Layer-2 transport encapsulation format for transporting
   radio over Ethernet (RoE) in the IEEE 1904.3 Task Force [IEEE19043].

   All-IP RANs and xHaul networks would benefit from time
   synchronization and time-sensitive transport services.  Although
   Ethernet appears to be the unifying technology for the transport,
   there is still a disconnect in providing Layer 3 services.  The
   protocol stack typically has a number of layers below the Ethernet
   Layer 2 that show up to the Layer 3 IP transport.  It is not
   uncommon that on top of the lowest layer (optical) transport there
   is the first layer of Ethernet, followed by one or more layers of
   MPLS, PseudoWires and/or other tunneling protocols finally carrying
   the Ethernet layer visible to the user plane IP traffic.

   While there are existing technologies to establish circuits through
   the routed and switched networks (especially in the MPLS/PWE space),
   there is still no way to signal the time synchronization and time-
   sensitive stream requirements/reservations for Layer-3 flows in a
   way that addresses the entire transport stack, including the
   Ethernet layers that need to be configured.

   Furthermore, not all "user plane" traffic will be IP.  Therefore,
   the same solution must also address the use cases where the user
   plane traffic is a different layer, for example Ethernet frames.
   There is existing work describing the problem statement
   [I-D.finn-detnet-problem-statement] and the architecture
   [I-D.finn-detnet-architecture] for deterministic networking (DetNet)
   that targets solutions for time-sensitive (IP/transport) streams
   with deterministic properties over Ethernet-based switched networks.

6.4.  Cellular Radio Networks Asks

   A standard for data plane transport specification which is:

   o  Unified among all xHauls (meaning that different flows with
      diverse DetNet requirements can coexist in the same network and
      traverse the same nodes without interfering with each other)

   o  Deployed in a highly deterministic network environment

   A standard for data flow information models that are:

   o  Aware of the time sensitivity and constraints of the target
      networking environment

   o  Aware of underlying deterministic networking services (e.g., on
      the Ethernet layer)

7.  Industrial M2M

7.1.  Use Case Description

   Industrial Automation in general refers to automation of
   manufacturing, quality control and material processing.  In this
   "machine to machine" (M2M) use case we consider machine units in a
   plant floor which periodically exchange data with upstream or
   downstream machine modules and/or a supervisory controller within a
   local area network.

   The actors of M2M communication are Programmable Logic Controllers
   (PLCs).  Communication between PLCs and between PLCs and the
   supervisory PLC (S-PLC) is achieved via critical control/data
   streams, as shown in Figure 11.

                S (Sensor)
                 \                         +-----+
         PLC__    \.--.          .--.   ---| MES |
              \_(     `.      _(     `./   +-----+
       A------(  Local  )----(   L2    )
              (   Net   )    (   Net   )   +-------+
              /`--(___.-'     `--(___.-' --| S-PLC |
           S_/   / PLC    .--.    /        +-------+
          A_/     \_(         `.
       (Actuator)  (  Local     )
                   (   Net      )
                  /`--(___.-'\
                 /            \ A
                S              A

       Figure 11: Current Generic Industrial M2M Network Architecture

   This use case focuses on PLC-related communications; communication
   to Manufacturing-Execution-Systems (MESs) is not addressed.

   This use case covers only critical control/data streams; non-
   critical traffic between industrial automation applications (such as
   communication of state, configuration, set-up, and database
   communication) is adequately served by currently available
   prioritizing techniques.  Such traffic can use up to 80% of the
   total bandwidth required.  There is also a subset of non-time-
   critical traffic that must be reliable even though it is not time
   sensitive.

   In this use case the primary need for deterministic networking is to
   provide end-to-end delivery of M2M messages within specific timing
   constraints, for example in closed loop automation control.  Today
   this level of determinism is provided by proprietary networking
   technologies.  In addition, standard networking technologies are
   used to connect the local network to remote industrial automation
   sites, e.g. over an enterprise or metro network which also carries
   other types of traffic.  Therefore, flows that should be forwarded
   with deterministic guarantees need to be sustained regardless of the
   amount of other flows in those networks.

7.2.  Industrial M2M Communication Today

   Today, proprietary networks fulfill the needed timing and
   availability for M2M networks.

   The network topologies used today by industrial automation are
   similar to those used by telecom networks: Daisy Chain, Ring, Hub
   and Spoke, and Comb (a subset of Daisy Chain).

   PLC-related control/data streams are transmitted periodically and
   carry either a pre-configured payload or a payload configured during
   runtime.
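   The bandwidth such a periodic stream consumes follows directly from
   its payload size and cycle time.  A minimal illustration, using
   example figures of the kind quoted in this section (the specific
   numbers below are illustrative only):

```python
# Illustrative only: the bandwidth of a periodic control/data stream
# follows directly from bytes-per-cycle and the cycle time.  The
# figures below (32 streams, 700-byte packets, 1 ms cycle) are example
# values, not requirements.

def stream_bandwidth_bps(payload_bytes: int, cycle_time_s: float) -> float:
    """One periodic stream sends payload_bytes once every cycle_time_s."""
    return payload_bytes * 8 / cycle_time_s

# e.g. 32 streams of 700-byte packets on a 1 ms cycle:
total = 32 * stream_bandwidth_bps(700, 1e-3)
print(f"{total / 1e6:.1f} Mbps")   # 179.2 Mbps, under 20% of a 1 Gbps link
```

   This kind of back-of-the-envelope calculation is what lies behind
   the observation that critical streams typically stay within a small
   share of a 1 Gbps link.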
   Some industrial applications require time synchronization at the end
   nodes.  For such time-coordinated PLCs, accuracy of 1 microsecond is
   required.  Even in the case of "non-time-coordinated" PLCs, time
   sync may be needed, e.g. for timestamping of sensor data.

   Industrial network scenarios require advanced security solutions.
   Many of the current industrial production networks are physically
   separated.  Preventing critical flows from being leaked outside a
   domain is handled today by filtering policies that are typically
   enforced in firewalls.

7.2.1.  Transport Parameters

   The Cycle Time defines the frequency of message(s) between
   industrial actors.  The Cycle Time is application dependent, in the
   range of 1ms - 100ms for critical control/data streams.

   Because industrial applications assume deterministic transport for
   critical Control-Data-Stream parameters (instead of defining latency
   and delay variation parameters) it is sufficient to fulfill the
   upper bound of latency (maximum latency).  The underlying networking
   infrastructure must ensure a maximum end-to-end delivery time of
   messages in the range of 100 microseconds to 50 milliseconds
   depending on the control loop application.

   The bandwidth requirements of control/data streams are usually
   calculated directly from the bytes-per-cycle parameter of the
   control loop.  For PLC-to-PLC communication one can expect 2 - 32
   streams with packet size in the range of 100 - 700 bytes.  For S-PLC
   to PLCs the number of streams is higher - up to 256 streams.
   Usually no more than 20% of available bandwidth is used for critical
   control/data streams.  In today's networks 1Gbps links are commonly
   used.

   Most PLC control loops are rather tolerant of packet loss; however,
   critical control/data streams accept no more than 1 packet loss per
   consecutive communication cycle (i.e.
   if a packet gets lost in cycle "n", then the next cycle ("n+1") must
   be lossless).  After two or more consecutive packet losses the
   network may be considered to be "down" by the Application.

   As network downtime may impact the whole production system, the
   required network availability is rather high (99.999%).

   Based on the above parameters we expect that some form of redundancy
   will be required for M2M communications; however, any individual
   solution depends on several parameters including cycle time,
   delivery time, etc.

7.2.2.  Stream Creation and Destruction

   In an industrial environment, critical control/data streams are
   created rather infrequently, on the order of ~10 times per day /
   week / month.  Most of these critical control/data streams get
   created at machine startup; however, flexibility is also needed
   during runtime, for example when adding or removing a machine.
   Going forward, as production systems become more flexible, we expect
   a significant increase in the rate at which streams are created,
   changed and destroyed.

7.3.  Industrial M2M Future

   We would like to see a converged IP-standards-based network with
   deterministic properties that can satisfy the timing, security and
   reliability constraints described above.  Today's proprietary
   networks could then be interfaced to such a network via gateways or,
   in the case of new installations, devices could be connected
   directly to the converged network.

   For this use case we expect time synchronization accuracy on the
   order of 1us.

7.4.  Industrial M2M Asks

   o  Converged IP-based network

   o  Deterministic behavior (bounded latency and jitter)

   o  High availability (presumably through redundancy) (99.999%)

   o  Low message delivery time (100us - 50ms)

   o  Low packet loss (burstless, 0.1-1%)

   o  Security (e.g.
      prevent critical flows from being leaked between physically
      separated networks)

8.  Use Case Common Elements

   Looking at the use cases collectively, the following common desires
   for the DetNet-based networks of the future emerge:

   o  Open standards-based network (replace various proprietary
      networks, reduce cost, create multi-vendor market)

   o  Centrally administered (though such administration may be
      distributed for scale and resiliency)

   o  Integrates L2 (bridged) and L3 (routed) environments (independent
      of the Link layer, e.g. can be used with Ethernet, 6TiSCH, etc.)

   o  Carries both deterministic and best-effort traffic (guaranteed
      end-to-end delivery of deterministic flows, deterministic flows
      isolated from each other and from best-effort traffic congestion,
      unused deterministic BW available to best-effort traffic)

   o  Ability to add or remove systems from the network with minimal,
      bounded service interruption (applications include replacement of
      failed devices as well as plug and play)

   o  Uses standardized data flow information models capable of
      expressing deterministic properties (models express device
      capabilities and flow properties; protocols push models from
      controller to devices, and from devices to controller)

   o  Scalable size (long distances (many km) and short distances
      (within a single machine), many hops (radio repeaters, microwave
      links, fiber links...) and short hops (single machine))

   o  Scalable timing parameters and accuracy (bounded latency,
      guaranteed worst case maximum, minimum.  Low latency, e.g.
      control loops may be less than 1ms, but larger for wide area
      networks)

   o  High availability (99.9999 percent up time requested, but may be
      up to twelve 9s)

   o  Reliability, redundancy (lives at stake)

   o  Security (from failures, attackers, misbehaving devices -
      sensitive to both packet content and arrival time)

9.  Use Cases Explicitly Out of Scope for DetNet

   This section contains use case text that has been determined to be
   outside of the scope of the present DetNet work.

9.1.  DetNet Scope Limitations

   The scope of DetNet is deliberately limited to specific use cases
   that are consistent with the WG charter, subject to the
   interpretation of the WG.  At the time the DetNet Use Cases were
   solicited and provided by the authors, the scope of DetNet was not
   clearly defined, and as that clarity has emerged, certain of the use
   cases have been determined to be outside the scope of the present
   DetNet work.  Such text has been moved into this section to clarify
   that these use cases will not be supported by the DetNet work.

   The text in this section was moved here based on the following
   "exclusion" principles.  Alternatively, rather than moving all such
   text to this section, some draft text has been modified in situ to
   reflect these same principles.

   The following principles have been established to clarify the scope
   of the present DetNet work.

   o  The scope of the networks addressed by DetNet is limited to
      networks that can be centrally controlled, i.e. an "enterprise"
      aka "corporate" network.  This explicitly excludes "the open
      Internet".

   o  Maintaining synchronized time across a DetNet network is crucial
      to its operation; however, DetNet assumes that time is to be
      maintained using other means, for example (but not limited to)
      Precision Time Protocol ([IEEE1588]).
      A use case may state the accuracy and reliability that it expects
      from the DetNet network as part of a whole system; however, it is
      understood that such timing properties are not guaranteed by
      DetNet itself.  It is currently an open question as to whether
      DetNet protocols will include a way for an application to
      communicate such timing expectations to the network, and if so,
      whether they would be expected to materially affect the
      performance they would receive from the network as a result.

9.2.  Internet-based Applications

9.2.1.  Use Case Description

   There are many applications that communicate across the open
   Internet that could benefit from guaranteed delivery and bounded
   latency.  The following are some representative examples.

9.2.1.1.  Media Content Delivery

   Media content delivery continues to be an important use of the
   Internet, yet users often experience poor quality audio and video
   due to the delay and jitter inherent in today's Internet.

9.2.1.2.  Online Gaming

   Online gaming is a significant part of the gaming market; however,
   latency can degrade the end user experience.  For example, "First
   Person Shooter" (FPS) games are highly delay-sensitive.

9.2.1.3.  Virtual Reality

   Virtual reality (VR) has many commercial applications including real
   estate presentations, remote medical procedures, and so on.  Low
   latency is critical to interacting with the virtual world because
   perceptual delays can cause motion sickness.

9.2.2.  Internet-Based Applications Today

   Internet service today is by definition "best effort", with no
   guarantees on delivery or bandwidth.

9.2.3.  Internet-Based Applications Future

   We imagine an Internet from which we will be able to play a video
   without glitches and play games without lag.
2824 For online gaming, the maximum tolerable round-trip delay is about 100ms, 2825 and stricter for FPS games, which require 10-50ms. Transport delay is the 2826 dominant part, with a 5-20ms budget. 2828 For VR, a maximum delay of 1-10ms is needed, and the total network budget is 2829 1-5ms for remote VR. 2831 Flow identification can be used for gaming and VR, i.e., it can 2832 recognize a critical flow and provide appropriate latency bounds. 2834 9.2.4. Internet-Based Applications Asks 2836 o Unified control and management protocols to handle time-critical 2837 data flows 2839 o Application-aware flow filtering mechanisms to recognize 2840 time-critical flows without doing 5-tuple matching 2842 o Unified control plane to provide low-latency service at Layer 3 2843 without changing the data plane 2845 o OAM systems and protocols that can support provisioning of 2846 end-to-end delay-sensitive services 2848 9.3. Pro Audio and Video - Digital Rights Management (DRM) 2850 This section was moved here because this is considered a Link layer 2851 topic, not a direct responsibility of DetNet. 2853 Digital Rights Management (DRM) is very important to the audio and 2854 video industries. Any time protected content is introduced into a 2855 network, there are DRM concerns that must be addressed (see 2856 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of 2857 network technology; however, there are cases when a secure link 2858 supporting authentication and encryption is required by content 2859 owners to carry their audio or video content when it is outside their 2860 own secure environment (see, for example, [DCI]). 2862 As an example, two such techniques are Digital Transmission Content 2863 Protection (DTCP) and High-Bandwidth Digital Content Protection 2864 (HDCP). HDCP content is not approved for retransmission within any 2865 other type of DRM, while DTCP may be retransmitted under HDCP.
2866 Therefore, if the source of a stream is outside the network and 2867 uses HDCP protection, it may only be placed on the network 2868 with that same HDCP protection. 2870 9.4. Pro Audio and Video - Link Aggregation 2872 Note: The term "Link Aggregation" is used here as defined by the text 2873 in the following paragraph, i.e., not following the more common network 2874 industry definition. Current WG consensus is that this item will not be 2875 directly supported by the DetNet architecture, for example because it 2876 implies a guarantee of in-order delivery of packets, which conflicts 2877 with the core goal of achieving the lowest possible latency. 2879 For transmitting streams that require more bandwidth than a single 2880 link in the target network can support, link aggregation is a 2881 technique for combining (aggregating) the bandwidth available on 2882 multiple physical links to create a single logical link of the 2883 required bandwidth. However, if aggregation is to be used, the 2884 network controller (or equivalent) must be able to determine the 2885 maximum latency of any path through the aggregate link. 2887 10. Acknowledgments 2889 10.1. Pro Audio 2891 This section was derived from draft-gunther-detnet-proaudio-req-01. 2893 The editors would like to acknowledge the help of the following 2894 individuals and the companies they represent: 2896 Jeff Koftinoff, Meyer Sound 2898 Jouni Korhonen, Associate Technical Director, Broadcom 2900 Pascal Thubert, CTAO, Cisco 2902 Kieran Tyrrell, Sienda New Media Technologies GmbH 2904 10.2. Utility Telecom 2906 This section was derived from draft-wetterwald-detnet-utilities-reqs- 2907 02. 2909 Faramarz Maghsoodlou, Ph.D., IoT Connected Industries and Energy 2910 Practice, Cisco 2912 Pascal Thubert, CTAO, Cisco 2914 10.3. Building Automation Systems 2916 This section was derived from draft-bas-usecase-detnet-00. 2918 10.4.
Wireless for Industrial 2920 This section was derived from draft-thubert-6tisch-4detnet-01. 2922 This specification derives from the 6TiSCH architecture, which is the 2923 result of multiple interactions, in particular during the 6TiSCH 2924 (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at 2925 the IETF. 2927 The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier 2928 Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael 2929 Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon, 2930 Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey, 2931 Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria 2932 Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation 2933 and various contributions. 2935 10.5. Cellular Radio 2937 This section was derived from draft-korhonen-detnet-telreq-00. 2939 10.6. Industrial M2M 2941 The authors would like to thank Feng Chen and Marcel Kiessling for 2942 their comments and suggestions. 2944 10.7. Internet Applications and CoMP 2946 This section was derived from draft-zha-detnet-use-case-00. 2948 This document has benefited from reviews, suggestions, comments, and 2949 proposed text provided by the following individuals, listed in 2950 alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oliver 2951 Huang. 2953 10.8. Electrical Utilities 2955 The wind power generation use case has been extracted from the study 2956 of Wind Farms conducted within the 5GPPP VirtuWind Project. The 2957 project is funded by the European Union's Horizon 2020 research and 2958 innovation programme under grant agreement No 671648 (VirtuWind). 2960 11. Informative References 2962 [ACE] IETF, "Authentication and Authorization for Constrained 2963 Environments", . 2966 [Ahm14] Ahmed, M. and R. Kim, "Communication network architectures 2967 for smart-wind power farms.", Energies, pp. 3900-3921, 2968 June 2014.
2970 [bacnetip] 2971 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP", 2972 January 1999. 2974 [CCAMP] IETF, "Common Control and Measurement Plane", 2975 . 2977 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND 2978 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_ 2979 and_Enhancement_v2.0, March 2015, 2980 . 2983 [CONTENT_PROTECTION] 2984 Olsen, D., "1722a Content Protection", 2012, 2985 . 2988 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI); 2989 Interface Specification", CPRI Specification V6.1, July 2990 2014, . 2993 [CPRI-transp] 2994 CPRI TWG, "CPRI requirements for Ethernet Fronthaul", 2995 November 2015, 2996 . 2999 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification, 3000 Version 1.2", 2012, . 3002 [DICE] IETF, "DTLS In Constrained Environments", 3003 . 3005 [EA12] Evans, P. and M. Annunziata, "Industrial Internet: Pushing 3006 the Boundaries of Minds and Machines", November 2012. 3008 [ESPN_DC2] 3009 Daley, D., "ESPN's DC2 Scales AVB Large", 2014, 3010 . 3013 [flnet] Japan Electrical Manufacturers Association, "JEMA 1479 - 3014 English Edition", September 2012. 3016 [Fronthaul] 3017 Chen, D. and T. Mustala, "Ethernet Fronthaul 3018 Considerations", IEEE 1904.3, February 2015, 3019 . 3022 [HART] www.hartcomm.org, "Highway Addressable remote Transducer, 3023 a group of specifications for industrial process and 3024 control devices administered by the HART Foundation". 3026 [I-D.finn-detnet-architecture] 3027 Finn, N. and P. Thubert, "Deterministic Networking 3028 Architecture", draft-finn-detnet-architecture-08 (work in 3029 progress), August 2016. 3031 [I-D.finn-detnet-problem-statement] 3032 Finn, N. and P. Thubert, "Deterministic Networking Problem 3033 Statement", draft-finn-detnet-problem-statement-05 (work 3034 in progress), March 2016. 3036 [I-D.ietf-6tisch-6top-interface] 3037 Wang, Q. and X. 
Vilajosana, "6TiSCH Operation Sublayer 3038 (6top) Interface", draft-ietf-6tisch-6top-interface-04 3039 (work in progress), July 2015. 3041 [I-D.ietf-6tisch-architecture] 3042 Thubert, P., "An Architecture for IPv6 over the TSCH mode 3043 of IEEE 802.15.4", draft-ietf-6tisch-architecture-10 (work 3044 in progress), June 2016. 3046 [I-D.ietf-6tisch-coap] 3047 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and 3048 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work 3049 in progress), March 2015. 3051 [I-D.ietf-6tisch-terminology] 3052 Palattella, M., Thubert, P., Watteyne, T., and Q. Wang, 3053 "Terminology in IPv6 over the TSCH mode of IEEE 3054 802.15.4e", draft-ietf-6tisch-terminology-07 (work in 3055 progress), March 2016. 3057 [I-D.ietf-ipv6-multilink-subnets] 3058 Thaler, D. and C. Huitema, "Multi-link Subnet Support in 3059 IPv6", draft-ietf-ipv6-multilink-subnets-00 (work in 3060 progress), July 2002. 3062 [I-D.ietf-mpls-residence-time] 3063 Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S., 3064 and S. Vainshtein, "Residence Time Measurement in MPLS 3065 network", draft-ietf-mpls-residence-time-11 (work in 3066 progress), July 2016. 3068 [I-D.ietf-roll-rpl-industrial-applicability] 3069 Phinney, T., Thubert, P., and R. Assimiti, "RPL 3070 applicability in industrial networks", draft-ietf-roll- 3071 rpl-industrial-applicability-02 (work in progress), 3072 October 2013. 3074 [I-D.ietf-tictoc-1588overmpls] 3075 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L. 3076 Montini, "Transporting Timing messages over MPLS 3077 Networks", draft-ietf-tictoc-1588overmpls-07 (work in 3078 progress), October 2015. 3080 [I-D.kh-spring-ip-ran-use-case] 3081 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing 3082 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02 3083 (work in progress), November 2014. 3085 [I-D.svshah-tsvwg-deterministic-forwarding] 3086 Shah, S. and P. 
Thubert, "Deterministic Forwarding PHB", 3087 draft-svshah-tsvwg-deterministic-forwarding-04 (work in 3088 progress), August 2015. 3090 [I-D.thubert-6lowpan-backbone-router] 3091 Thubert, P., "6LoWPAN Backbone Router", draft-thubert- 3092 6lowpan-backbone-router-03 (work in progress), February 3093 2013. 3095 [I-D.wang-6tisch-6top-sublayer] 3096 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer 3097 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in 3098 progress), November 2015. 3100 [IEC-60870-5-104] 3101 International Electrotechnical Commission, "International 3102 Standard IEC 60870-5-104: Network access for IEC 3103 60870-5-101 using standard transport profiles", June 2006. 3105 [IEC61400] 3106 "International standard 61400-25: Communications for 3107 monitoring and control of wind power plants", June 2013. 3109 [IEC61850-90-12] 3110 TC57 WG10, IEC., "IEC 61850-90-12 TR: Communication 3111 networks and systems for power utility automation - Part 3112 90-12: Wide area network engineering guidelines", 2015. 3114 [IEC62439-3:2012] 3115 TC65, IEC., "IEC 62439-3: Industrial communication 3116 networks - High availability automation networks - Part 3: 3117 Parallel Redundancy Protocol (PRP) and High-availability 3118 Seamless Redundancy (HSR)", 2012. 3120 [IEEE1588] 3121 IEEE, "IEEE Standard for a Precision Clock Synchronization 3122 Protocol for Networked Measurement and Control Systems", 3123 IEEE Std 1588-2008, 2008, 3124 . 3127 [IEEE1646] 3128 "Communication Delivery Time Performance Requirements for 3129 Electric Power Substation Automation", IEEE Standard 3130 1646-2004 , Apr 2004. 3132 [IEEE1722] 3133 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport 3134 Protocol for Time Sensitive Applications in a Bridged 3135 Local Area Network", IEEE Std 1722-2011, 2011, 3136 . 3139 [IEEE19043] 3140 IEEE Standards Association, "IEEE 1904.3 TF", IEEE 1904.3, 3141 2015, . 
3143 [IEEE802.1TSNTG] 3144 IEEE Standards Association, "IEEE 802.1 Time-Sensitive 3145 Networks Task Group", March 2013, 3146 . 3148 [IEEE802154] 3149 IEEE standard for Information Technology, "IEEE std. 3150 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC) 3151 and Physical Layer (PHY) Specifications for Low-Rate 3152 Wireless Personal Area Networks". 3154 [IEEE802154e] 3155 IEEE standard for Information Technology, "IEEE standard 3156 for Information Technology, IEEE std. 802.15.4, Part. 3157 15.4: Wireless Medium Access Control (MAC) and Physical 3158 Layer (PHY) Specifications for Low-Rate Wireless Personal 3159 Area Networks, June 2011 as amended by IEEE std. 3160 802.15.4e, Part. 15.4: Low-Rate Wireless Personal Area 3161 Networks (LR-WPANs) Amendment 1: MAC sublayer", April 3162 2012. 3164 [IEEE8021AS] 3165 IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)", 3166 IEEE 802.1AS-2011, 2011, 3167 . 3170 [IEEE8021CM] 3171 Farkas, J., "Time-Sensitive Networking for Fronthaul", 3172 Unapproved PAR, PAR for a New IEEE Standard; 3173 IEEE P802.1CM, April 2015, 3174 . 3177 [IEEE8021TSN] 3178 IEEE 802.1, "The charter of the TG is to provide the 3179 specifications that will allow time-synchronized low 3180 latency streaming services through 802 networks.", 2016, 3181 . 3183 [IETFDetNet] 3184 IETF, "Charter for IETF DetNet Working Group", 2015, 3185 . 3187 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation", 3188 . 3190 [ISA100.11a] 3191 ISA/ANSI, "Wireless Systems for Industrial Automation: 3192 Process Control and Related Applications - ISA100.11a-2011 3193 - IEC 62734", 2011, . 3196 [ISO7240-16] 3197 ISO, "ISO 7240-16:2007 Fire detection and alarm systems -- 3198 Part 16: Sound system control and indicating equipment", 3199 2007, . 3202 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006. 3204 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0", 3205 1994.
3207 [LTE-Latency] 3208 Johnston, S., "LTE Latency: How does it compare to other 3209 technologies", March 2014, 3210 . 3213 [MEF] MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells", 3214 MEF 22.1.1, July 2014, 3215 . 3218 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and 3219 wireless system", ICT-317669-METIS/D1.1 ICT- 3220 317669-METIS/D1.1, April 2013, . 3223 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL 3224 SPECIFICATION V1.1b", December 2006. 3226 [MODBUS] Modbus Organization, Inc., "MODBUS Application Protocol 3227 Specification", Apr 2012. 3229 [net5G] Ericsson, "5G Radio Access, Challenges for 2020 and 3230 Beyond", Ericsson white paper wp-5g, June 2013, 3231 . 3233 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0, 3234 February 2015, . 3237 [NGMN-fronth] 3238 NGMN Alliance, "Fronthaul Requirements for C-RAN", March 3239 2015, . 3242 [OPCXML] OPC Foundation, "OPC XML-Data Access Specification", Dec 3243 2004. 3245 [PCE] IETF, "Path Computation Element", 3246 . 3248 [profibus] 3249 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001. 3251 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 3252 Requirement Levels", BCP 14, RFC 2119, 3253 DOI 10.17487/RFC2119, March 1997, 3254 . 3256 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6 3257 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460, 3258 December 1998, . 3260 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, 3261 "Definition of the Differentiated Services Field (DS 3262 Field) in the IPv4 and IPv6 Headers", RFC 2474, 3263 DOI 10.17487/RFC2474, December 1998, 3264 . 3266 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 3267 Label Switching Architecture", RFC 3031, 3268 DOI 10.17487/RFC3031, January 2001, 3269 . 3271 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 3272 and G. 
Swallow, "RSVP-TE: Extensions to RSVP for LSP 3273 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001, 3274 . 3276 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation 3277 Metric for IP Performance Metrics (IPPM)", RFC 3393, 3278 DOI 10.17487/RFC3393, November 2002, 3279 . 3281 [RFC3411] Harrington, D., Presuhn, R., and B. Wijnen, "An 3282 Architecture for Describing Simple Network Management 3283 Protocol (SNMP) Management Frameworks", STD 62, RFC 3411, 3284 DOI 10.17487/RFC3411, December 2002, 3285 . 3287 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between 3288 Information Models and Data Models", RFC 3444, 3289 DOI 10.17487/RFC3444, January 2003, 3290 . 3292 [RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)", 3293 RFC 3972, DOI 10.17487/RFC3972, March 2005, 3294 . 3296 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation 3297 Edge-to-Edge (PWE3) Architecture", RFC 3985, 3298 DOI 10.17487/RFC3985, March 2005, 3299 . 3301 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing 3302 Architecture", RFC 4291, DOI 10.17487/RFC4291, February 3303 2006, . 3305 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure- 3306 Agnostic Time Division Multiplexing (TDM) over Packet 3307 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006, 3308 . 3310 [RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903, 3311 DOI 10.17487/RFC4903, June 2007, 3312 . 3314 [RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6 3315 over Low-Power Wireless Personal Area Networks (6LoWPANs): 3316 Overview, Assumptions, Problem Statement, and Goals", 3317 RFC 4919, DOI 10.17487/RFC4919, August 2007, 3318 . 3320 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and 3321 P. Pate, "Structure-Aware Time Division Multiplexed (TDM) 3322 Circuit Emulation Service over Packet Switched Network 3323 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007, 3324 . 
3326 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi, 3327 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087, 3328 DOI 10.17487/RFC5087, December 2007, 3329 . 3331 [RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6 3332 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282, 3333 DOI 10.17487/RFC6282, September 2011, 3334 . 3336 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J., 3337 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur, 3338 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for 3339 Low-Power and Lossy Networks", RFC 6550, 3340 DOI 10.17487/RFC6550, March 2012, 3341 . 3343 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N., 3344 and D. Barthel, "Routing Metrics Used for Path Calculation 3345 in Low-Power and Lossy Networks", RFC 6551, 3346 DOI 10.17487/RFC6551, March 2012, 3347 . 3349 [RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C. 3350 Bormann, "Neighbor Discovery Optimization for IPv6 over 3351 Low-Power Wireless Personal Area Networks (6LoWPANs)", 3352 RFC 6775, DOI 10.17487/RFC6775, November 2012, 3353 . 3355 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using 3356 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the 3357 Internet of Things (IoT): Problem Statement", RFC 7554, 3358 DOI 10.17487/RFC7554, May 2015, 3359 . 3361 [Spe09] Sperotto, A., Sadre, R., Vliet, F., and A. Pras, "A First 3362 Look into SCADA Network Traffic", IP Operations and 3363 Management, p. 518-521. , June 2009. 3365 [SRP_LATENCY] 3366 Gunther, C., "Specifying SRP Latency", 2014, 3367 . 3370 [STUDIO_IP] 3371 Mace, G., "IP Networked Studio Infrastructure for 3372 Synchronized & Real-Time Multimedia Transmissions", 2007, 3373 . 3376 [SyncE] ITU-T, "G.8261 : Timing and synchronization aspects in 3377 packet networks", Recommendation G.8261, August 2013, 3378 . 3380 [TEAS] IETF, "Traffic Engineering Architecture and Signaling", 3381 . 
3383 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements 3384 for Evolved Universal Terrestrial Radio Access Network 3385 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013. 3387 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception 3388 (FDD)", 3GPP TS 25.104 3.14.0, March 2007. 3390 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access 3391 (E-UTRA); Base Station (BS) radio transmission and 3392 reception", 3GPP TS 36.104 10.11.0, July 2013. 3394 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access 3395 (E-UTRA); Requirements for support of radio resource 3396 management", 3GPP TS 36.133 12.7.0, April 2015. 3398 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access 3399 (E-UTRA); Physical channels and modulation", 3GPP 3400 TS 36.211 10.7.0, March 2013. 3402 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA) 3403 and Evolved Universal Terrestrial Radio Access Network 3404 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300 3405 10.11.0, September 2013. 3407 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive 3408 Networks Task Group", 2013, 3409 . 3411 [UHD-video] 3412 Holub, P., "Ultra-High Definition Videos and Their 3413 Applications over the Network", The 7th International 3414 Symposium on VICTORIES Project PetrHolub_presentation, 3415 October 2014, . 3418 [WirelessHART] 3419 www.hartcomm.org, "Industrial Communication Networks - 3420 Wireless Communication Network and Communication Profiles 3421 - WirelessHART - IEC 62591", 2010. 3423 Authors' Addresses 3425 Ethan Grossman (editor) 3426 Dolby Laboratories, Inc. 
3427 1275 Market Street 3428 San Francisco, CA 94103 3429 USA 3431 Phone: +1 415 645 4726 3432 Email: ethan.grossman@dolby.com 3433 URI: http://www.dolby.com 3435 Craig Gunther 3436 Harman International 3437 10653 South River Front Parkway 3438 South Jordan, UT 84095 3439 USA 3441 Phone: +1 801 568-7675 3442 Email: craig.gunther@harman.com 3443 URI: http://www.harman.com 3444 Pascal Thubert 3445 Cisco Systems, Inc. 3446 Building D 3447 45 Allee des Ormes - BP1200 3448 MOUGINS - Sophia Antipolis 06254 3449 FRANCE 3451 Phone: +33 497 23 26 34 3452 Email: pthubert@cisco.com 3454 Patrick Wetterwald 3455 Cisco Systems 3456 45 Allees des Ormes 3457 Mougins 06250 3458 FRANCE 3460 Phone: +33 4 97 23 26 36 3461 Email: pwetterw@cisco.com 3463 Jean Raymond 3464 Hydro-Quebec 3465 1500 University 3466 Montreal H3A3S7 3467 Canada 3469 Phone: +1 514 840 3000 3470 Email: raymond.jean@hydro.qc.ca 3472 Jouni Korhonen 3473 Broadcom Corporation 3474 3151 Zanker Road 3475 San Jose, CA 95134 3476 USA 3478 Email: jouni.nospam@gmail.com 3480 Yu Kaneko 3481 Toshiba 3482 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi 3483 Kanagawa, Japan 3485 Email: yu1.kaneko@toshiba.co.jp 3486 Subir Das 3487 Applied Communication Sciences 3488 150 Mount Airy Road, Basking Ridge 3489 New Jersey, 07920, USA 3491 Email: sdas@appcomsci.com 3493 Yiyong Zha 3494 Huawei Technologies 3496 Email: zhayiyong@huawei.com 3498 Balazs Varga 3499 Ericsson 3500 Konyves Kalman krt. 11/B 3501 Budapest 1097 3502 Hungary 3504 Email: balazs.a.varga@ericsson.com 3506 Janos Farkas 3507 Ericsson 3508 Konyves Kalman krt. 11/B 3509 Budapest 1097 3510 Hungary 3512 Email: janos.farkas@ericsson.com 3514 Franz-Josef Goetz 3515 Siemens 3516 Gleiwitzerstr. 555 3517 Nurnberg 90475 3518 Germany 3520 Email: franz-josef.goetz@siemens.com 3522 Juergen Schmitt 3523 Siemens 3524 Gleiwitzerstr.
555 3525 Nurnberg 90475 3526 Germany 3528 Email: juergen.jues.schmitt@siemens.com 3529 Xavier Vilajosana 3530 Worldsensing 3531 483 Arago 3532 Barcelona, Catalonia 08013 3533 Spain 3535 Email: xvilajosana@worldsensing.com 3537 Toktam Mahmoodi 3538 King's College London 3539 Strand 3540 London WC2R 2LS 3541 United Kingdom 3543 Email: toktam.mahmoodi@kcl.ac.uk 3545 Spiros Spirou 3546 Intracom Telecom 3547 19.7 km Markopoulou Ave. 3548 Peania, Attiki 19002 3549 Greece 3551 Email: spis@intracom-telecom.com 3553 Petra Vizarreta 3554 Technical University of Munich, TUM 3555 Maxvorstadt, Arcisstrasse 21 3556 Munich 80333 3557 Germany 3559 Email: petra.vizarreta@lkn.ei.tum.de