idnits 2.17.1
draft-ietf-detnet-use-cases-13.txt:
Checking boilerplate required by RFC 5378 and the IETF Trust (see
https://trustee.ietf.org/license-info):
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/checklist :
----------------------------------------------------------------------------
** The document seems to lack an IANA Considerations section. (See Section
2.2 of https://www.ietf.org/id-info/checklist for how to handle the case
when there are no actions for IANA.)
Miscellaneous warnings:
----------------------------------------------------------------------------
== The copyright year in the IETF Trust and authors Copyright Line does not
match the current year
-- Couldn't find a document date in the document -- date freshness check
skipped.
Checking references for intended status: Informational
----------------------------------------------------------------------------
== Unused Reference: 'ACE' is defined on line 3514, but no explicit
reference was found in the text
== Unused Reference: 'CCAMP' is defined on line 3526, but no explicit
reference was found in the text
== Unused Reference: 'CPRI-transp' is defined on line 3545, but no explicit
reference was found in the text
== Unused Reference: 'DICE' is defined on line 3554, but no explicit
reference was found in the text
== Unused Reference: 'EA12' is defined on line 3557, but no explicit
reference was found in the text
== Unused Reference: 'HART' is defined on line 3574, but no explicit
reference was found in the text
== Unused Reference: 'I-D.ietf-6tisch-terminology' is defined on line 3603,
but no explicit reference was found in the text
== Unused Reference: 'I-D.ietf-ipv6-multilink-subnets' is defined on line
3609, but no explicit reference was found in the text
== Unused Reference: 'I-D.ietf-roll-rpl-industrial-applicability' is
defined on line 3620, but no explicit reference was found in the text
== Unused Reference: 'I-D.thubert-6lowpan-backbone-router' is defined on
line 3642, but no explicit reference was found in the text
== Unused Reference: 'IEC61850-90-12' is defined on line 3661, but no
explicit reference was found in the text
== Unused Reference: 'IEEE8021TSN' is defined on line 3729, but no explicit
reference was found in the text
== Unused Reference: 'IETFDetNet' is defined on line 3735, but no explicit
reference was found in the text
== Unused Reference: 'ISO7240-16' is defined on line 3748, but no explicit
reference was found in the text
== Unused Reference: 'LTE-Latency' is defined on line 3759, but no explicit
reference was found in the text
== Unused Reference: 'RFC2119' is defined on line 3803, but no explicit
reference was found in the text
== Unused Reference: 'RFC2460' is defined on line 3808, but no explicit
reference was found in the text
== Unused Reference: 'RFC2474' is defined on line 3812, but no explicit
reference was found in the text
== Unused Reference: 'RFC3209' is defined on line 3823, but no explicit
reference was found in the text
== Unused Reference: 'RFC3393' is defined on line 3828, but no explicit
reference was found in the text
== Unused Reference: 'RFC3444' is defined on line 3839, but no explicit
reference was found in the text
== Unused Reference: 'RFC3972' is defined on line 3844, but no explicit
reference was found in the text
== Unused Reference: 'RFC4291' is defined on line 3853, but no explicit
reference was found in the text
== Unused Reference: 'RFC4903' is defined on line 3862, but no explicit
reference was found in the text
== Unused Reference: 'RFC4919' is defined on line 3866, but no explicit
reference was found in the text
== Unused Reference: 'RFC6282' is defined on line 3883, but no explicit
reference was found in the text
== Unused Reference: 'RFC6775' is defined on line 3901, but no explicit
reference was found in the text
== Unused Reference: 'TEAS' is defined on line 3932, but no explicit
reference was found in the text
== Unused Reference: 'UHD-video' is defined on line 3963, but no explicit
reference was found in the text
== Outdated reference: A later version (-30) exists of
draft-ietf-6tisch-architecture-12
== Outdated reference: A later version (-10) exists of
draft-ietf-6tisch-terminology-09
-- Obsolete informational reference (is this intentional?): RFC 2460
(Obsoleted by RFC 8200)
Summary: 1 error (**), 0 flaws (~~), 32 warnings (==), 2 comments (--).
Run idnits with the --verbose option for more detailed information about
the items above.
--------------------------------------------------------------------------------
2 Internet Engineering Task Force E. Grossman, Ed.
3 Internet-Draft DOLBY
4 Intended status: Informational C. Gunther
5 Expires: March 22, 2018 HARMAN
6 P. Thubert
7 P. Wetterwald
8 CISCO
9 J. Raymond
10 HYDRO-QUEBEC
11 J. Korhonen
12 BROADCOM
13 Y. Kaneko
14 Toshiba
15 S. Das
16 Applied Communication Sciences
17 Y. Zha
18 HUAWEI
19 B. Varga
20 J. Farkas
21 Ericsson
22 F. Goetz
23 J. Schmitt
24 Siemens
25 X. Vilajosana
26 Worldsensing
27 T. Mahmoodi
28 King's College London
29 S. Spirou
30 Intracom Telecom
31 P. Vizarreta
32 Technical University of Munich, TUM
33 D. Huang
34 ZTE Corporation, Inc.
35 X. Geng
36 HUAWEI
37 D. Dujovne
38 UDP
39 M. Seewald
40 CISCO
41 September 18, 2017
43 Deterministic Networking Use Cases
44 draft-ietf-detnet-use-cases-13
46 Abstract
48    This draft documents requirements in several diverse industries to
49    establish multi-hop paths for characterized flows with deterministic
50    properties.  In this context, deterministic implies that streams can
51    be established from either a Layer 2 or Layer 3 (IP) interface, that
52    they provide guaranteed bandwidth and latency, and that they can
53    co-exist on an IP network with best-effort traffic.
55 Additional requirements include optional redundant paths, very high
56 reliability paths, time synchronization, and clock distribution.
57 Industries considered include professional audio, electrical
58 utilities, building automation systems, wireless for industrial,
59 cellular radio, industrial machine-to-machine, mining, private
60 blockchain, and network slicing.
62    For each case, this document identifies the application, identifies
63    representative solutions used today, and describes the improvements
64    that IETF DetNet solutions may enable.
66 Status of This Memo
68 This Internet-Draft is submitted in full conformance with the
69 provisions of BCP 78 and BCP 79.
71 Internet-Drafts are working documents of the Internet Engineering
72 Task Force (IETF). Note that other groups may also distribute
73 working documents as Internet-Drafts. The list of current Internet-
74 Drafts is at https://datatracker.ietf.org/drafts/current/.
76 Internet-Drafts are draft documents valid for a maximum of six months
77 and may be updated, replaced, or obsoleted by other documents at any
78 time. It is inappropriate to use Internet-Drafts as reference
79 material or to cite them other than as "work in progress."
81 This Internet-Draft will expire on March 22, 2018.
83 Copyright Notice
85 Copyright (c) 2017 IETF Trust and the persons identified as the
86 document authors. All rights reserved.
88 This document is subject to BCP 78 and the IETF Trust's Legal
89 Provisions Relating to IETF Documents
90 (https://trustee.ietf.org/license-info) in effect on the date of
91 publication of this document. Please review these documents
92 carefully, as they describe your rights and restrictions with respect
93 to this document. Code Components extracted from this document must
94 include Simplified BSD License text as described in Section 4.e of
95 the Trust Legal Provisions and are provided without warranty as
96 described in the Simplified BSD License.
98 Table of Contents
100 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 6
101 2. Pro Audio and Video . . . . . . . . . . . . . . . . . . . . . 7
102 2.1. Use Case Description . . . . . . . . . . . . . . . . . . 7
103 2.1.1. Uninterrupted Stream Playback . . . . . . . . . . . . 8
104 2.1.2. Synchronized Stream Playback . . . . . . . . . . . . 8
105 2.1.3. Sound Reinforcement . . . . . . . . . . . . . . . . . 9
106 2.1.4. Deterministic Time to Establish Streaming . . . . . . 9
107 2.1.5. Secure Transmission . . . . . . . . . . . . . . . . . 9
108 2.1.5.1. Safety . . . . . . . . . . . . . . . . . . . . . 9
109 2.2. Pro Audio Today . . . . . . . . . . . . . . . . . . . . . 9
110 2.3. Pro Audio Future . . . . . . . . . . . . . . . . . . . . 10
111 2.3.1. Layer 3 Interconnecting Layer 2 Islands . . . . . . . 10
112 2.3.2. High Reliability Stream Paths . . . . . . . . . . . . 10
113 2.3.3. Integration of Reserved Streams into IT Networks . . 10
114 2.3.4. Use of Unused Reservations by Best-Effort Traffic . . 11
115 2.3.5. Traffic Segregation . . . . . . . . . . . . . . . . . 11
116 2.3.5.1. Packet Forwarding Rules, VLANs and Subnets . . . 11
117 2.3.5.2. Multicast Addressing (IPv4 and IPv6) . . . . . . 12
118 2.3.6. Latency Optimization by a Central Controller . . . . 12
119 2.3.7. Reduced Device Cost Due To Reduced Buffer Memory . . 12
120 2.4. Pro Audio Asks . . . . . . . . . . . . . . . . . . . . . 13
121 3. Electrical Utilities . . . . . . . . . . . . . . . . . . . . 13
122 3.1. Use Case Description . . . . . . . . . . . . . . . . . . 13
123 3.1.1. Transmission Use Cases . . . . . . . . . . . . . . . 13
124 3.1.1.1. Protection . . . . . . . . . . . . . . . . . . . 13
125 3.1.1.2. Intra-Substation Process Bus Communications . . . 19
126 3.1.1.3. Wide Area Monitoring and Control Systems . . . . 20
127 3.1.1.4. IEC 61850 WAN engineering guidelines requirement
128 classification . . . . . . . . . . . . . . . . . 21
129 3.1.2. Generation Use Case . . . . . . . . . . . . . . . . . 22
130 3.1.2.1. Control of the Generated Power . . . . . . . . . 22
131 3.1.2.2. Control of the Generation Infrastructure . . . . 23
132 3.1.3. Distribution use case . . . . . . . . . . . . . . . . 28
133 3.1.3.1. Fault Location Isolation and Service Restoration
134 (FLISR) . . . . . . . . . . . . . . . . . . . . . 28
135 3.2. Electrical Utilities Today . . . . . . . . . . . . . . . 29
136 3.2.1. Security Current Practices and Limitations . . . . . 29
137 3.3. Electrical Utilities Future . . . . . . . . . . . . . . . 31
138 3.3.1. Migration to Packet-Switched Network . . . . . . . . 32
139 3.3.2. Telecommunications Trends . . . . . . . . . . . . . . 32
140 3.3.2.1. General Telecommunications Requirements . . . . . 32
141 3.3.2.2. Specific Network topologies of Smart Grid
142 Applications . . . . . . . . . . . . . . . . . . 33
143 3.3.2.3. Precision Time Protocol . . . . . . . . . . . . . 34
144 3.3.3. Security Trends in Utility Networks . . . . . . . . . 35
145 3.4. Electrical Utilities Asks . . . . . . . . . . . . . . . . 37
146 4. Building Automation Systems . . . . . . . . . . . . . . . . . 37
147 4.1. Use Case Description . . . . . . . . . . . . . . . . . . 37
148 4.2. Building Automation Systems Today . . . . . . . . . . . . 38
149 4.2.1. BAS Architecture . . . . . . . . . . . . . . . . . . 38
150 4.2.2. BAS Deployment Model . . . . . . . . . . . . . . . . 39
151 4.2.3. Use Cases for Field Networks . . . . . . . . . . . . 41
152 4.2.3.1. Environmental Monitoring . . . . . . . . . . . . 41
153 4.2.3.2. Fire Detection . . . . . . . . . . . . . . . . . 41
154 4.2.3.3. Feedback Control . . . . . . . . . . . . . . . . 42
155 4.2.4. Security Considerations . . . . . . . . . . . . . . . 42
156 4.3. BAS Future . . . . . . . . . . . . . . . . . . . . . . . 42
157 4.4. BAS Asks . . . . . . . . . . . . . . . . . . . . . . . . 43
158 5. Wireless for Industrial . . . . . . . . . . . . . . . . . . . 43
159 5.1. Use Case Description . . . . . . . . . . . . . . . . . . 43
160 5.1.1. Network Convergence using 6TiSCH . . . . . . . . . . 44
161 5.1.2. Common Protocol Development for 6TiSCH . . . . . . . 44
162 5.2. Wireless Industrial Today . . . . . . . . . . . . . . . . 45
163 5.3. Wireless Industrial Future . . . . . . . . . . . . . . . 45
164 5.3.1. Unified Wireless Network and Management . . . . . . . 45
165 5.3.1.1. PCE and 6TiSCH ARQ Retries . . . . . . . . . . . 47
166 5.3.2. Schedule Management by a PCE . . . . . . . . . . . . 48
167 5.3.2.1. PCE Commands and 6TiSCH CoAP Requests . . . . . . 48
168 5.3.2.2. 6TiSCH IP Interface . . . . . . . . . . . . . . . 49
169 5.3.3. 6TiSCH Security Considerations . . . . . . . . . . . 50
170 5.4. Wireless Industrial Asks . . . . . . . . . . . . . . . . 50
171 6. Cellular Radio . . . . . . . . . . . . . . . . . . . . . . . 50
172 6.1. Use Case Description . . . . . . . . . . . . . . . . . . 50
173 6.1.1. Network Architecture . . . . . . . . . . . . . . . . 50
174 6.1.2. Delay Constraints . . . . . . . . . . . . . . . . . . 51
175 6.1.3. Time Synchronization Constraints . . . . . . . . . . 53
176 6.1.4. Transport Loss Constraints . . . . . . . . . . . . . 55
177 6.1.5. Security Considerations . . . . . . . . . . . . . . . 55
178 6.2. Cellular Radio Networks Today . . . . . . . . . . . . . . 56
179 6.2.1. Fronthaul . . . . . . . . . . . . . . . . . . . . . . 56
180 6.2.2. Midhaul and Backhaul . . . . . . . . . . . . . . . . 56
181 6.3. Cellular Radio Networks Future . . . . . . . . . . . . . 57
182 6.4. Cellular Radio Networks Asks . . . . . . . . . . . . . . 59
183 7. Industrial M2M . . . . . . . . . . . . . . . . . . . . . . . 59
184 7.1. Use Case Description . . . . . . . . . . . . . . . . . . 59
185 7.2. Industrial M2M Communication Today . . . . . . . . . . . 60
186 7.2.1. Transport Parameters . . . . . . . . . . . . . . . . 61
187 7.2.2. Stream Creation and Destruction . . . . . . . . . . . 62
188 7.3. Industrial M2M Future . . . . . . . . . . . . . . . . . . 62
189 7.4. Industrial M2M Asks . . . . . . . . . . . . . . . . . . . 62
191 8. Mining Industry . . . . . . . . . . . . . . . . . . . . . . . 63
192 8.1. Use Case Description . . . . . . . . . . . . . . . . . . 63
193 8.2. Mining Industry Today . . . . . . . . . . . . . . . . . . 63
194 8.3. Mining Industry Future . . . . . . . . . . . . . . . . . 64
195 8.4. Mining Industry Asks . . . . . . . . . . . . . . . . . . 65
196 9. Private Blockchain . . . . . . . . . . . . . . . . . . . . . 65
197 9.1. Use Case Description . . . . . . . . . . . . . . . . . . 65
198 9.1.1. Blockchain Operation . . . . . . . . . . . . . . . . 65
199 9.1.2. Blockchain Network Architecture . . . . . . . . . . . 66
200 9.1.3. Security Considerations . . . . . . . . . . . . . . . 66
201 9.2. Private Blockchain Today . . . . . . . . . . . . . . . . 66
202 9.3. Private Blockchain Future . . . . . . . . . . . . . . . . 67
203 9.4. Private Blockchain Asks . . . . . . . . . . . . . . . . . 67
204 10. Network Slicing . . . . . . . . . . . . . . . . . . . . . . . 67
205 10.1. Use Case Description . . . . . . . . . . . . . . . . . . 67
206 10.2. Network Slicing Use Cases . . . . . . . . . . . . . . . 68
207 10.2.1. Enhanced Mobile Broadband (eMBB) . . . . . . . . . . 68
208 10.2.2. Ultra-Reliable and Low Latency Communications
209 (URLLC) . . . . . . . . . . . . . . . . . . . . . . 68
210 10.2.3. massive Machine Type Communications (mMTC) . . . . . 68
211 10.3. Using DetNet in Network Slicing . . . . . . . . . . . . 68
212 10.4. Network Slicing Today and Future . . . . . . . . . . . . 69
213 10.5. Network Slicing Asks . . . . . . . . . . . . . . . . . . 69
214 11. Use Case Common Themes . . . . . . . . . . . . . . . . . . . 69
215 11.1. Unified, standards-based network . . . . . . . . . . . . 69
216 11.1.1. Extensions to Ethernet . . . . . . . . . . . . . . . 69
217 11.1.2. Centrally Administered . . . . . . . . . . . . . . . 69
218 11.1.3. Standardized Data Flow Information Models . . . . . 70
219 11.1.4. L2 and L3 Integration . . . . . . . . . . . . . . . 70
220 11.1.5. Guaranteed End-to-End Delivery . . . . . . . . . . . 70
221 11.1.6. Replacement for Multiple Proprietary Deterministic
222 Networks . . . . . . . . . . . . . . . . . . . . . . 70
223 11.1.7. Mix of Deterministic and Best-Effort Traffic . . . . 70
224 11.1.8. Unused Reserved BW to be Available to Best Effort
225 Traffic . . . . . . . . . . . . . . . . . . . . . . 70
226 11.1.9. Lower Cost, Multi-Vendor Solutions . . . . . . . . . 71
227 11.2. Scalable Size . . . . . . . . . . . . . . . . . . . . . 71
228 11.3. Scalable Timing Parameters and Accuracy . . . . . . . . 71
229 11.3.1. Bounded Latency . . . . . . . . . . . . . . . . . . 71
230 11.3.2. Low Latency . . . . . . . . . . . . . . . . . . . . 71
231 11.3.3. Symmetrical Path Delays . . . . . . . . . . . . . . 72
232 11.4. High Reliability and Availability . . . . . . . . . . . 72
233 11.5. Security . . . . . . . . . . . . . . . . . . . . . . . . 72
234 11.6. Deterministic Flows . . . . . . . . . . . . . . . . . . 72
235 12. Use Cases Explicitly Out of Scope for DetNet . . . . . . . . 72
236 12.1. DetNet Scope Limitations . . . . . . . . . . . . . . . . 73
237 12.2. Internet-based Applications . . . . . . . . . . . . . . 73
238 12.2.1. Use Case Description . . . . . . . . . . . . . . . . 73
239 12.2.1.1. Media Content Delivery . . . . . . . . . . . . . 74
240 12.2.1.2. Online Gaming . . . . . . . . . . . . . . . . . 74
241 12.2.1.3. Virtual Reality . . . . . . . . . . . . . . . . 74
242 12.2.2. Internet-Based Applications Today . . . . . . . . . 74
243 12.2.3. Internet-Based Applications Future . . . . . . . . . 74
244 12.2.4. Internet-Based Applications Asks . . . . . . . . . . 74
245 12.3. Pro Audio and Video - Digital Rights Management (DRM) . 75
246 12.4. Pro Audio and Video - Link Aggregation . . . . . . . . . 75
247 13. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 76
248 13.1. Pro Audio . . . . . . . . . . . . . . . . . . . . . . . 76
249 13.2. Utility Telecom . . . . . . . . . . . . . . . . . . . . 76
250 13.3. Building Automation Systems . . . . . . . . . . . . . . 76
251 13.4. Wireless for Industrial . . . . . . . . . . . . . . . . 76
252 13.5. Cellular Radio . . . . . . . . . . . . . . . . . . . . . 77
253 13.6. Industrial M2M . . . . . . . . . . . . . . . . . . . . . 77
254 13.7. Internet Applications and CoMP . . . . . . . . . . . . . 77
255 13.8. Electrical Utilities . . . . . . . . . . . . . . . . . . 77
256 13.9. Network Slicing . . . . . . . . . . . . . . . . . . . . 77
257 13.10. Mining . . . . . . . . . . . . . . . . . . . . . . . . . 77
258 13.11. Private Blockchain . . . . . . . . . . . . . . . . . . . 77
259 14. Informative References . . . . . . . . . . . . . . . . . . . 77
260 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 87
262 1. Introduction
264 This draft presents use cases from diverse industries which have in
265 common a need for deterministic streams, but which also differ
266 notably in their network topologies and specific desired behavior.
267 Together, they provide broad industry context for DetNet and a
268   yardstick against which proposed DetNet designs can be measured (to
269   what extent does a proposed design satisfy these various use cases?).
271   For DetNet, use cases explicitly do not define requirements; the
272   DetNet WG will consider the use cases, decide which elements are in
273   scope for DetNet, and incorporate the results into future
274   drafts.  Similarly, the DetNet use case draft explicitly does not
275 suggest any specific design, architecture or protocols, which will be
276 topics of future drafts.
278 We present for each use case the answers to the following questions:
280 o What is the use case?
282 o How is it addressed today?
284 o How would you like it to be addressed in the future?
286 o What do you want the IETF to deliver?
287 The level of detail in each use case should be sufficient to express
288 the relevant elements of the use case, but not more.
290 At the end we consider the use cases collectively, and examine the
291 most significant goals they have in common.
293 2. Pro Audio and Video
295 2.1. Use Case Description
297 The professional audio and video industry ("ProAV") includes:
299 o Music and film content creation
301 o Broadcast
303 o Cinema
305 o Live sound
307 o Public address, media and emergency systems at large venues
308 (airports, stadiums, churches, theme parks).
310 These industries have already transitioned audio and video signals
311 from analog to digital. However, the digital interconnect systems
312 remain primarily point-to-point with a single (or small number of)
313 signals per link, interconnected with purpose-built hardware.
315 These industries are now transitioning to packet-based infrastructure
316 to reduce cost, increase routing flexibility, and integrate with
317 existing IT infrastructure.
319   Today ProAV applications have no way to establish deterministic
320   streams from a standards-based Layer 3 (IP) interface, which is a
321   fundamental limitation to the use cases described here.
322   Deterministic streams can be created within standards-based Layer 2
323   LANs (e.g. using IEEE 802.1 AVB); however, these are not routable via
324   IP and thus are not effective for distribution over wider areas (for
325   example broadcast events that span wide geographical areas).
327   It would be highly desirable if such streams could be routed over the
328   open Internet; however, solutions with more limited scope (e.g.
329   enterprise networks) would still provide a substantial improvement.
331 The following sections describe specific ProAV use cases.
333 2.1.1. Uninterrupted Stream Playback
335 Transmitting audio and video streams for live playback is unlike
336 common file transfer because uninterrupted stream playback in the
337 presence of network errors cannot be achieved by re-trying the
338 transmission; by the time the missing or corrupt packet has been
339 identified it is too late to execute a re-try operation. Buffering
340 can be used to provide enough delay to allow time for one or more
341 retries, however this is not an effective solution in applications
342 where large delays (latencies) are not acceptable (as discussed
343 below).
345 Streams with guaranteed bandwidth can eliminate congestion on the
346 network as a cause of transmission errors that would lead to playback
347 interruption. Use of redundant paths can further mitigate
348 transmission errors to provide greater stream reliability.
350 2.1.2. Synchronized Stream Playback
352 Latency in this context is the time between when a signal is
353 initially sent over a stream and when it is received. A common
354 example in ProAV is time-synchronizing audio and video when they take
355 separate paths through the playback system. In this case the latency
356 of both the audio and video streams must be bounded and consistent if
357 the sound is to remain matched to the movement in the video. A
358   common tolerance for audio/video sync is one NTSC video frame (about
359   33ms), and to maintain the audience's perception of correct lip sync
360   the latency needs to be consistent within some reasonable tolerance,
361   for example 10%.
363 A common architecture for synchronizing multiple streams that have
364 different paths through the network (and thus potentially different
365 latencies) is to enable measurement of the latency of each path, and
366 have the data sinks (for example speakers) delay (buffer) all packets
367 on all but the slowest path. Each packet of each stream is assigned
368 a presentation time which is based on the longest required delay.
369 This implies that all sinks must maintain a common time reference of
370 sufficient accuracy, which can be achieved by any of various
371 techniques.
373 This type of architecture is commonly implemented using a central
374 controller that determines path delays and arbitrates buffering
375 delays.
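   The buffering arbitration described above can be sketched as a small
   calculation; the sink names and latency values below are hypothetical
   examples, not figures from this document:

```python
def presentation_delays(path_latency_ms: dict) -> dict:
    """Return the extra buffering delay (ms) each sink must apply so
    that all sinks play a given packet at the same presentation time,
    which is set by the slowest (longest-latency) path."""
    slowest = max(path_latency_ms.values())
    return {sink: slowest - lat for sink, lat in path_latency_ms.items()}

# Hypothetical measured path latencies, as a central controller
# might collect them from the sinks:
delays = presentation_delays(
    {"speaker_L": 2.0, "speaker_R": 2.5, "video_wall": 18.0})
print(delays)  # the video wall, on the slowest path, buffers nothing
```

   A controller implementing the architecture above would distribute
   these per-sink delays along with a common presentation clock.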
377 2.1.3. Sound Reinforcement
379 Consider the latency (delay) from when a person speaks into a
380 microphone to when their voice emerges from the speaker. If this
381 delay is longer than about 10-15 milliseconds it is noticeable and
382 can make a sound reinforcement system unusable (see slide 6 of
383   [SRP_LATENCY]).  (If you have ever tried to speak in the presence of
384   a delayed echo of your voice, you may be familiar with this effect.)
386 Note that the 15ms latency bound includes all parts of the signal
387 path, not just the network, so the network latency must be
388 significantly less than 15ms.
390 In some cases local performers must perform in synchrony with a
391 remote broadcast. In such cases the latencies of the broadcast
392 stream and the local performer must be adjusted to match each other,
393 with a worst case of one video frame (33ms for NTSC video).
395 In cases where audio phase is a consideration, for example beam-
396 forming using multiple speakers, latency requirements can be in the
397 10 microsecond range (1 audio sample at 96kHz).
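   The latency budgets cited above follow directly from the NTSC frame
   rate and the audio sample rate; a quick check:

```python
# Derive the latency figures quoted in this section.
ntsc_frame_ms = 1000 / 29.97        # one NTSC video frame: ~33 ms
sample_96k_us = 1_000_000 / 96_000  # one audio sample at 96 kHz: ~10.4 us
print(f"{ntsc_frame_ms:.1f} ms, {sample_96k_us:.1f} us")  # 33.4 ms, 10.4 us
```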
399 2.1.4. Deterministic Time to Establish Streaming
401   Note: The WG has decided that guidelines for deterministic time to
402   establish stream startup are not within the scope of DetNet.  If
403   bounded timing of establishing or re-establishing streams is required
404   in a given use case, it is up to the application/system to achieve
405   this.  (The supporting text for this section was removed as of draft 12.)
407 2.1.5. Secure Transmission
409 2.1.5.1. Safety
411 Professional audio systems can include amplifiers that are capable of
412   generating hundreds or thousands of watts of audio power which, if
413   used incorrectly, can cause hearing damage to those in the vicinity.
414 Apart from the usual care required by the systems operators to
415 prevent such incidents, the network traffic that controls these
416 devices must be secured (as with any sensitive application traffic).
418 2.2. Pro Audio Today
420   Some proprietary systems have been created which enable deterministic
421   streams at Layer 3; however, they are "engineered networks" which
422   require careful configuration to operate, often require that the
423   system be over-provisioned, and implicitly assume that all devices on
424   the network voluntarily play by the rules of that network.  Enabling
425   these industries to successfully transition to an interoperable
426   multi-vendor packet-based infrastructure requires effective open
427   standards, and we believe that establishing relevant IETF standards
428   is a crucial factor.
430 2.3. Pro Audio Future
432 2.3.1. Layer 3 Interconnecting Layer 2 Islands
434 It would be valuable to enable IP to connect multiple Layer 2 LANs.
436 As an example, ESPN recently constructed a state-of-the-art 194,000
437 sq ft, $125 million broadcast studio called DC2. The DC2 network is
438 capable of handling 46 Tbps of throughput with 60,000 simultaneous
439 signals. Inside the facility are 1,100 miles of fiber feeding four
440 audio control rooms (see [ESPN_DC2] ).
442   In designing DC2 they replaced as much point-to-point technology as
443   they could with packet-based technology.  They constructed seven
444   individual studios using Layer 2 LANs (using IEEE 802.1 AVB) that
445   were entirely effective at routing audio within the LANs.  However,
446   to interconnect these Layer 2 LAN islands they ended up using
447   dedicated paths in a custom SDN (Software Defined Networking) router
448   because no standards-based routing solution was available.
450 2.3.2. High Reliability Stream Paths
452 On-air and other live media streams are often backed up with
453 redundant links that seamlessly act to deliver the content when the
454 primary link fails for any reason. In point-to-point systems this is
455 provided by an additional point-to-point link; the analogous
456 requirement in a packet-based system is to provide an alternate path
457 through the network such that no individual link can bring down the
458 system.
460 2.3.3. Integration of Reserved Streams into IT Networks
462 A commonly cited goal of moving to a packet based media
463 infrastructure is that costs can be reduced by using off the shelf,
464 commodity network hardware. In addition, economy of scale can be
465 realized by combining media infrastructure with IT infrastructure.
466 In keeping with these goals, stream reservation technology should be
467 compatible with existing protocols, and not compromise use of the
468 network for best effort (non-time-sensitive) traffic.
470 2.3.4. Use of Unused Reservations by Best-Effort Traffic
472 In cases where stream bandwidth is reserved but not currently used
473 (or is under-utilized) that bandwidth must be available to best-
474 effort (i.e. non-time-sensitive) traffic. For example a single
475 stream may be nailed up (reserved) for specific media content that
476 needs to be presented at different times of the day, ensuring timely
477 delivery of that content, yet in between those times the full
478 bandwidth of the network can be utilized for best-effort tasks such
479 as file transfers.
481   This also addresses a concern of IT network administrators who are
482   considering adding reserved-bandwidth traffic to their networks,
483   namely that "users will reserve large quantities of bandwidth and
484   then never un-reserve it even though they are not using it, and soon
485   the network will have no bandwidth left".
487 2.3.5. Traffic Segregation
489 Note: It is still under WG discussion whether this topic will be
490 addressed by DetNet.
492 Sink devices may be low cost devices with limited processing power.
493 In order to not overwhelm the CPUs in these devices it is important
494 to limit the amount of traffic that these devices must process.
496 As an example, consider the use of individual seat speakers in a
497   cinema.  These speakers are typically required to be cost-reduced,
498   since the quantities in a single theater can reach hundreds of seats.
499 Discovery protocols alone in a one thousand seat theater can generate
500 enough broadcast traffic to overwhelm a low powered CPU. Thus an
501 installation like this will benefit greatly from some type of traffic
502 segregation that can define groups of seats to reduce traffic within
503 each group. All seats in the theater must still be able to
504 communicate with a central controller.
506 There are many techniques that can be used to support this
507 requirement including (but not limited to) the following examples.
509 2.3.5.1. Packet Forwarding Rules, VLANs and Subnets
511 Packet forwarding rules can be used to eliminate some extraneous
512 streaming traffic from reaching potentially low powered sink devices,
513 however there may be other types of broadcast traffic that should be
514 eliminated using other means for example VLANs or IP subnets.
516 2.3.5.2. Multicast Addressing (IPv4 and IPv6)
518 Multicast addressing is commonly used to keep bandwidth utilization
519 of shared links to a minimum.
521   Because of the MAC address forwarding nature of Layer 2 bridges, it
522   is important that a multicast MAC address is only associated with one
523   stream.  This prevents packets of one stream from being forwarded
524   down a path that has no interested sinks simply because another
525   stream on that same path shares the same
526   multicast MAC address.
528   Since each multicast MAC address can represent 32 different IPv4
529   multicast addresses, there must be a process in place to make sure
530   this does not occur.  Requiring the use of IPv6 addresses can achieve
531   this; however, due to the continued prevalence of IPv4, solutions
532   that are effective for IPv4 installations are also required.
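   The 32-to-1 overlap arises because the standard IPv4
   multicast-to-MAC mapping (RFC 1112) places only the low-order 23
   bits of the 28-bit group ID into the 01-00-5E MAC block.  A sketch
   of that mapping (the example group addresses are arbitrary):

```python
def ipv4_mcast_to_mac(addr: str) -> str:
    """Map an IPv4 multicast group address to its Ethernet MAC address
    per RFC 1112: the low-order 23 bits of the group address are placed
    into the 01-00-5E OUI block, so 5 bits of the group ID are
    discarded and 32 distinct group addresses share each MAC address."""
    o = [int(x) for x in addr.split(".")]
    mac = [0x01, 0x00, 0x5E, o[1] & 0x7F, o[2], o[3]]
    return ":".join(f"{b:02x}" for b in mac)

# Two distinct group addresses that collide at Layer 2:
print(ipv4_mcast_to_mac("224.1.1.1"))    # 01:00:5e:01:01:01
print(ipv4_mcast_to_mac("239.129.1.1"))  # 01:00:5e:01:01:01
```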
534 2.3.6. Latency Optimization by a Central Controller
536 A central network controller might also perform optimizations based
537 on the individual path delays. For example, sinks that are closer to
538 the source can inform the controller that they can accept greater
539 latency, since they will be buffering packets to match the
540 presentation times of farther-away sinks. The controller might then
541 move a stream reservation from a short path to a longer path in
542 order to free up bandwidth for other critical streams on the short
543 path. See slides 3-5 of [SRP_LATENCY].
545 Additional optimization can be achieved in cases where sinks have
546 differing latency requirements, for example in a live outdoor concert
547 the speaker sinks have stricter latency requirements than the
548 recording hardware sinks. See slide 7 of [SRP_LATENCY].
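The optimization described above can be sketched as a toy assignment routine; the function, data shapes, and numbers below are invented for illustration and are not taken from [SRP_LATENCY]:

```python
def assign_paths(streams, paths):
    """Give each stream the longest path that still meets its latency
    budget, so the shortest paths stay free for critical streams.

    streams: {name: max_tolerable_latency_ms}
    paths:   {name: path_latency_ms}  (one stream per path, for brevity)
    """
    unused = sorted(paths.items(), key=lambda p: p[1], reverse=True)
    assignment = {}
    # Serve the most latency-tolerant streams first, so they soak up
    # the long paths before the critical streams are placed.
    for stream, budget in sorted(streams.items(), key=lambda s: s[1],
                                 reverse=True):
        for i, (path, latency) in enumerate(unused):
            if latency <= budget:
                assignment[stream] = path
                unused.pop(i)
                break
    return assignment

# Speakers need under 5 ms; the recorder buffers and tolerates 50 ms,
# so it is moved onto the longer path, freeing the short one.
assign_paths({"speakers": 5, "recorder": 50}, {"short": 3, "long": 20})
```

A real controller would also track per-path bandwidth; this sketch only captures the "move tolerant streams to longer paths" idea.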
550 2.3.7. Reduced Device Cost Due To Reduced Buffer Memory
552 Device cost can be reduced in a system with guaranteed reservations
553 with a small bounded latency due to the reduced requirements for
554 buffering (i.e. memory) on sink devices. For example, a theme park
555 might broadcast a live event across the globe via a layer 3 protocol;
556 in such cases the size of the buffers required is proportional to the
557 latency bounds and jitter caused by delivery, which depends on the
558 worst-case segment of the end-to-end network path. On today's open
559 Internet, for example, the latency is typically unacceptable for
560 audio and video streaming without many seconds of buffering. In such
561 scenarios a single gateway device at the local network that receives
562 the feed from the remote site would provide the expensive buffering
563 required to mask the latency and jitter issues associated with long
564 distance delivery. Sink devices in the local location would have no
565 additional buffering requirements, and thus no additional costs,
566 beyond those required for delivery of local content. The sink device
567 would be receiving the identical packets as those sent by the source
568 and would be unaware that there were any latency or jitter issues
569 along the path.
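The proportionality noted above can be made concrete with a back-of-the-envelope calculation; the stream rate and delay figures below are invented for illustration:

```python
def sink_buffer_bytes(stream_rate_bps, latency_bound_s, jitter_s):
    """Worst-case buffering a sink needs: the data that can arrive
    while it rides out the path's latency bound plus its jitter."""
    return int(stream_rate_bps / 8 * (latency_bound_s + jitter_s))

# A 5 Mbit/s A/V stream: a local sink behind a 2 ms bound with 125 us
# of jitter needs ~1.3 KB, while masking 5 s of Internet-scale delay
# variation takes ~3 MB -- memory better paid for once, in the gateway.
local_sink = sink_buffer_bytes(5_000_000, 0.002, 0.000125)  # 1328
gateway    = sink_buffer_bytes(5_000_000, 5.0, 0.0)         # 3125000
```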
571 2.4. Pro Audio Asks
573 o Layer 3 routing on top of AVB (and/or other high QoS networks)
575 o Content delivery with bounded, lowest possible latency
577 o IntServ and DiffServ integration with AVB (where practical)
579 o Single network for A/V and IT traffic
581 o Standards-based, interoperable, multi-vendor
583 o IT department friendly
585 o Enterprise-wide networks (e.g. size of San Francisco but not the
586 whole Internet (yet...))
588 3. Electrical Utilities
590 3.1. Use Case Description
592 Many systems that an electrical utility deploys today rely on high
593 availability and deterministic behavior of the underlying networks.
594 Here we present use cases in Transmission, Generation and
595 Distribution, including key timing and reliability metrics. We also
596 discuss security issues and industry trends which affect the
597 architecture of next-generation utility networks.
599 3.1.1. Transmission Use Cases
601 3.1.1.1. Protection
603 Protection means not only the protection of human operators but also
604 the protection of the electrical equipment and the preservation of
605 the stability and frequency of the grid. If a fault occurs in the
606 transmission or distribution of electricity, then harm can come to
607 human operators and severe damage to electrical equipment and the
608 grid itself, potentially leading to blackouts.
610 Communication links in conjunction with protection relays are used to
611 selectively isolate faults on high voltage lines, transformers,
612 reactors and other important electrical equipment. The role of the
613 teleprotection system is to selectively disconnect a faulty part by
614 transferring command signals within the shortest possible time.
616 3.1.1.1.1. Key Criteria
618 The key criteria for measuring teleprotection performance are command
619 transmission time, dependability and security. These criteria are
620 defined by the IEC standard 60834 as follows:
622 o Transmission time (Speed): The time between the moment where state
623 changes at the transmitter input and the moment of the
624 corresponding change at the receiver output, including propagation
625 delay. Overall operating time for a teleprotection system
626 includes the time for initiating the command at the transmitting
627 end, the propagation delay over the network (including equipment)
628 and the selection and decision time at the receiving end,
629 including any additional delay due to a noisy environment.
631 o Dependability: The ability to issue and receive valid commands in
632 the presence of interference and/or noise, by minimizing the
633 probability of missing command (PMC). Dependability targets are
634 typically set for a specific bit error rate (BER) level.
636 o Security: The ability to prevent false tripping due to a noisy
637 environment, by minimizing the probability of unwanted commands
638 (PUC). Security targets are also set for a specific bit error
639 rate (BER) level.
641 Additional elements of the teleprotection system that impact its
642 performance include:
644 o Network bandwidth
646 o Failure recovery capacity (aka resiliency)
648 3.1.1.1.2. Fault Detection and Clearance Timing
650 Most power line equipment can tolerate short circuits or faults for
651 up to approximately five power cycles before sustaining irreversible
652 damage or affecting other segments in the network. This translates
653 to a total fault clearance time of 100ms. As a safety precaution,
654 however, the actual operation time of protection systems is limited
655 to 70-80 percent of this period, including fault recognition time,
656 command transmission time and line breaker switching time.
658 Some system components, such as large electromechanical switches,
659 require a particularly long time to operate and take up the majority
660 of the total clearance time, leaving only a 10ms window for the
661 telecommunications part of the protection scheme, independent of the
662 distance to travel. Given the sensitivity of the issue, new networks
663 impose requirements that are even more stringent: IEC standard 61850
664 limits the transfer time for protection messages to 1/4 - 1/2 cycle
665 or 4 - 8ms (for 60Hz lines) for the most critical messages.
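These figures can be cross-checked arithmetically (assuming a 50 Hz line for the 100 ms tolerance figure, which the text does not state explicitly):

```python
def cycle_ms(hz):
    """Duration of one power cycle, in milliseconds."""
    return 1000.0 / hz

# Roughly five cycles of tolerance on a 50 Hz line:
clearance_ms = 5 * cycle_ms(50)                      # 100.0 ms
# Protection must complete within 70-80% of that window:
operate_ms = (clearance_ms * 70 / 100, clearance_ms * 80 / 100)
# IEC 61850's 1/4 - 1/2 cycle limit on a 60 Hz line:
iec_limit_ms = (cycle_ms(60) / 4, cycle_ms(60) / 2)  # ~4.2 to ~8.3 ms
```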
667 3.1.1.1.3. Symmetric Channel Delay
669 Note: It is currently under WG discussion whether symmetric path
670 delays are to be guaranteed by DetNet.
672 Teleprotection channels which are differential must be synchronous,
673 which means that any delays on the transmit and receive paths must
674 match each other. Teleprotection systems ideally support zero
675 asymmetric delay; typical legacy relays can tolerate delay
676 discrepancies of up to 750us.
678 Some tools available for lowering delay variation below this
679 threshold are:
681 o For legacy systems using Time Division Multiplexing (TDM), jitter
682 buffers at the multiplexers on each end of the line can be used to
683 offset delay variation by queuing sent and received packets. The
684 length of the queues must balance the need to regulate the rate of
685 transmission with the need to limit overall delay, as larger
686 buffers result in increased latency.
688 o For jitter-prone IP packet networks, traffic management tools can
689 ensure that the teleprotection signals receive the highest
690 transmission priority to minimize jitter.
692 o Standard packet-based synchronization technologies, such as IEEE
693 1588-2008 Precision Time Protocol (PTP) and Synchronous Ethernet
694 (Sync-E), can help keep networks stable by maintaining a highly
695 accurate clock source on the various network devices.
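As a minimal sketch of the first tool above, a fixed-delay playout buffer releases every packet a constant time after it was sent, assuming sender and receiver clocks are synchronized (as PTP or Sync-E provide). The function and the millisecond figures below are illustrative only:

```python
def playout_time(send_s, arrival_s, target_delay_s):
    """Fixed-delay jitter buffer: hold each packet until exactly
    target_delay_s after it was sent, masking any network delay
    variation that stays below the target."""
    release = send_s + target_delay_s
    if arrival_s > release:
        return None              # late packet: cannot be played on time
    return release

# Packets sent at t=0 s arriving with 1-3 ms of delay all play out
# at t=0.005 s; a 7 ms straggler misses its slot.
assert playout_time(0.0, 0.001, 0.005) == 0.005
assert playout_time(0.0, 0.003, 0.005) == 0.005
assert playout_time(0.0, 0.007, 0.005) is None
```

The target delay is the latency/robustness trade-off the text describes: a larger target absorbs more jitter but increases overall delay.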
697 3.1.1.1.4. Teleprotection Network Requirements (IEC 61850)
699 The following table captures the main network metrics as based on the
700 IEC 61850 standard.
702 +-----------------------------+-------------------------------------+
703 | Teleprotection Requirement | Attribute |
704 +-----------------------------+-------------------------------------+
705 | One way maximum delay | 4-10 ms |
706 | Asymmetric delay required | Yes |
707 | Maximum jitter | less than 250 us (750 us for legacy |
708 | | IED) |
709 | Topology | Point to point, point to Multi- |
710 | | point |
711 | Availability | 99.9999% |
712 | precise timing required | Yes |
713 | Recovery time on node | less than 50ms - hitless |
714 | failure | |
715 | performance management | Yes, Mandatory |
716 | Redundancy | Yes |
717 | Packet loss | 0.1% to 1% |
718 +-----------------------------+-------------------------------------+
720 Table 1: Teleprotection network requirements
722 3.1.1.1.5. Inter-Trip Protection scheme
724 "Inter-tripping" is the signal-controlled tripping of a circuit
725 breaker to complete the isolation of a circuit or piece of apparatus
726 in concert with the tripping of other circuit breakers.
728 +--------------------------------+----------------------------------+
729 | Inter-Trip protection | Attribute |
730 | Requirement | |
731 +--------------------------------+----------------------------------+
732 | One way maximum delay | 5 ms |
733 | Asymmetric delay required | No |
734 | Maximum jitter | Not critical |
735 | Topology | Point to point, point to Multi- |
736 | | point |
737 | Bandwidth | 64 Kbps |
738 | Availability | 99.9999% |
739 | precise timing required | Yes |
740 | Recovery time on node failure | less than 50ms - hitless |
741 | performance management | Yes, Mandatory |
742 | Redundancy | Yes |
743 | Packet loss | 0.1% |
744 +--------------------------------+----------------------------------+
746 Table 2: Inter-Trip protection network requirements
748 3.1.1.1.6. Current Differential Protection Scheme
750 Current differential protection is commonly used for line protection,
751 and is typical for protecting parallel circuits. At both ends of the
752 line the current is measured by the differential relays, and both
753 relays will trip the circuit breaker if the current going into the
754 line does not equal the current going out of the line. This type of
755 protection scheme assumes some form of communications being present
756 between the relays at both ends of the line, to allow both relays to
757 compare measured current values. Line differential protection
758 schemes assume a very low telecommunications delay between both
759 relays, often as low as 5ms. Moreover, as those systems are often
760 not time-synchronized, they also assume symmetric telecommunications
761 paths with constant delay, which allows comparing current measurement
762 values taken at the exact same time.
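The comparison performed by the two relays can be sketched as a simple percentage-restrained differential check; the 20% restraint factor and the current values below are invented for illustration, and a real relay compares time-aligned phasors rather than single magnitudes:

```python
def differential_trip(i_local_a, i_remote_a, restraint=0.20):
    """Trip when the current entering the line differs from the current
    leaving it by more than a fraction of the through current."""
    operate = abs(i_local_a - i_remote_a)
    through = (abs(i_local_a) + abs(i_remote_a)) / 2
    return operate > restraint * through

assert not differential_trip(400.0, 398.0)  # normal load: currents balance
assert differential_trip(400.0, 150.0)      # in-zone fault: trip both ends
```

The check only works if both samples were taken at the same instant, which is why asymmetric path delay on unsynchronized systems would appear as a spurious differential current.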
764 +----------------------------------+--------------------------------+
765 | Current Differential protection | Attribute |
766 | Requirement | |
767 +----------------------------------+--------------------------------+
768 | One way maximum delay | 5 ms |
769 | Asymmetric delay Required | Yes |
770 | Maximum jitter | less than 250 us (750us for |
771 | | legacy IED) |
772 | Topology | Point to point, point to |
773 | | Multi-point |
774 | Bandwidth | 64 Kbps |
775 | Availability | 99.9999% |
776 | precise timing required | Yes |
777 | Recovery time on node failure | less than 50ms - hitless |
778 | performance management | Yes, Mandatory |
779 | Redundancy | Yes |
780 | Packet loss | 0.1% |
781 +----------------------------------+--------------------------------+
783 Table 3: Current Differential Protection metrics
785 3.1.1.1.7. Distance Protection Scheme
787 The Distance (Impedance Relay) protection scheme is based on voltage
788 and current measurements. The network metrics are similar (but not
789 identical) to those of Current Differential protection.
791 +-------------------------------+-----------------------------------+
792 | Distance protection | Attribute |
793 | Requirement | |
794 +-------------------------------+-----------------------------------+
795 | One way maximum delay | 5 ms |
796 | Asymmetric delay Required | No |
797 | Maximum jitter | Not critical |
798 | Topology | Point to point, point to Multi- |
799 | | point |
800 | Bandwidth | 64 Kbps |
801 | Availability | 99.9999% |
802 | precise timing required | Yes |
803 | Recovery time on node failure | less than 50ms - hitless |
804 | performance management | Yes, Mandatory |
805 | Redundancy | Yes |
806 | Packet loss | 0.1% |
807 +-------------------------------+-----------------------------------+
809 Table 4: Distance Protection requirements
811 3.1.1.1.8. Inter-Substation Protection Signaling
813 This use case describes the exchange of Sampled Value and/or GOOSE
814 (Generic Object Oriented Substation Events) messages between
815 Intelligent Electronic Devices (IED) in two substations for
816 protection and tripping coordination. The two IEDs operate in a
817 master-slave mode.
819 The Current Transformer or Voltage Transformer (CT/VT) in one
820 substation sends the sampled analog voltage or current value to the
821 Merging Unit (MU) over hard wire. The MU sends the time-synchronized
822 61850-9-2 sampled values to the slave IED. The slave IED forwards
823 the information to the Master IED in the other substation. The
824 master IED makes the determination (for example based on sampled
825 value differentials) to send a trip command to the originating IED.
826 Once the slave IED/Relay receives the GOOSE trip for breaker
827 tripping, it opens the breaker. It then sends a confirmation message
828 back to the master. All data exchanges between IEDs are either
829 through Sampled Value and/or GOOSE messages.
831 +----------------------------------+--------------------------------+
832 | Inter-Substation protection | Attribute |
833 | Requirement | |
834 +----------------------------------+--------------------------------+
835 | One way maximum delay | 5 ms |
836 | Asymmetric delay Required | No |
837 | Maximum jitter | Not critical |
838 | Topology | Point to point, point to |
839 | | Multi-point |
840 | Bandwidth | 64 Kbps |
841 | Availability | 99.9999% |
842 | precise timing required | Yes |
843 | Recovery time on node failure | less than 50ms - hitless |
844 | performance management | Yes, Mandatory |
845 | Redundancy | Yes |
846 | Packet loss | 1% |
847 +----------------------------------+--------------------------------+
849 Table 5: Inter-Substation Protection requirements
851 3.1.1.2. Intra-Substation Process Bus Communications
853 This use case describes the data flow from the CT/VT to the IEDs in
854 the substation via the MU. The CT/VT in the substation send the
855 analog voltage or current values to the MU over hard wire. The MU
856 converts the analog values into digital format (typically time-
857 synchronized Sampled Values as specified by IEC 61850-9-2) and sends
858 them to the IEDs in the substation. The GPS Master Clock can send
859 1PPS or IRIG-B format to the MU through a serial port or IEEE 1588
860 protocol via a network. Process bus communication using IEC 61850
861 simplifies connectivity within the substation, removing the need for
862 multiple serial connections and for the slow serial bus
863 architectures that are typically used. Multicast messaging between
864 multiple devices also increases flexibility and speed.
867 +----------------------------------+--------------------------------+
868 | Intra-Substation protection | Attribute |
869 | Requirement | |
870 +----------------------------------+--------------------------------+
871 | One way maximum delay | 5 ms |
872 | Asymmetric delay Required | No |
873 | Maximum jitter | Not critical |
874 | Topology | Point to point, point to |
875 | | Multi-point |
876 | Bandwidth | 64 Kbps |
877 | Availability | 99.9999% |
878 | precise timing required | Yes |
879 | Recovery time on Node failure | less than 50ms - hitless |
880 | performance management | Yes, Mandatory |
881 | Redundancy | Yes - No |
882 | Packet loss | 0.1% |
883 +----------------------------------+--------------------------------+
885 Table 6: Intra-Substation Protection requirements
887 3.1.1.3. Wide Area Monitoring and Control Systems
889 The application of synchrophasor measurement data from Phasor
890 Measurement Units (PMU) to Wide Area Monitoring and Control Systems
891 promises to provide important new capabilities for improving system
892 stability. Access to PMU data enables more timely situational
893 awareness over larger portions of the grid than has been
894 possible historically with normal SCADA (Supervisory Control and Data
895 Acquisition) data. Handling the volume and real-time nature of
896 synchrophasor data presents unique challenges for existing
897 application architectures. A Wide Area Management System (WAMS) makes
898 it possible for the condition of the bulk power system to be observed
899 and understood in real-time so that protective, preventative, or
900 corrective action can be taken. Because of the very high sampling
901 rate of measurements and the strict requirement for time
902 synchronization of the samples, WAMS has stringent telecommunications
903 requirements in an IP network that are captured in the following
904 table:
906 +----------------------+--------------------------------------------+
907 | WAMS Requirement | Attribute |
908 +----------------------+--------------------------------------------+
909 | One way maximum | 50 ms |
910 | delay | |
911 | Asymmetric delay | No |
912 | Required | |
913 | Maximum jitter | Not critical |
914 | Topology | Point to point, point to Multi-point, |
915 | | Multi-point to Multi-point |
916 | Bandwidth | 100 Kbps |
917 | Availability | 99.9999% |
918 | precise timing | Yes |
919 | required | |
920 | Recovery time on | less than 50ms - hitless |
921 | Node failure | |
922 | performance | Yes, Mandatory |
923 | management | |
924 | Redundancy | Yes |
925 | Packet loss | 1% |
926 | Consecutive Packet | At least 1 packet per application cycle |
927 | Loss | must be received. |
928 +----------------------+--------------------------------------------+
930 Table 7: WAMS Special Communication Requirements
932 3.1.1.4. IEC 61850 WAN engineering guidelines requirement
933 classification
935 The IEC (International Electrotechnical Commission) has recently
936 published a Technical Report which offers guidelines on how to define
937 and deploy Wide Area Networks for the interconnections of electric
938 substations, generation plants and SCADA operation centers. IEC
939 61850-90-12 classifies WAN communication requirements into four
940 classes. Table 8 summarizes these requirements:
942 +----------------+------------+------------+------------+-----------+
943 | WAN | Class WA | Class WB | Class WC | Class WD |
944 | Requirement | | | | |
945 +----------------+------------+------------+------------+-----------+
946 | Application | EHV (Extra | HV (High | MV (Medium | General |
947 | field | High | Voltage) | Voltage) | purpose |
948 | | Voltage) | | | |
949 | Latency | 5 ms | 10 ms | 100 ms | > 100 ms |
950 | Jitter | 10 us | 100 us | 1 ms | 10 ms |
951 | Latency | 100 us | 1 ms | 10 ms | 100 ms |
952 | Asymmetry | | | | |
953 | Time Accuracy | 1 us | 10 us | 100 us | 10 to 100 |
954 | | | | | ms |
955 | Bit Error rate | 10^-7 to | 10^-5 to | 10^-3 | |
956 | | 10^-6 | 10^-4 | | |
957 | Unavailability | 10^-7 to | 10^-5 to | 10^-3 | |
958 | | 10^-6 | 10^-4 | | |
959 | Recovery delay | Zero | 50 ms | 5 s | 50 s |
960 | Cyber security | extremely | High | Medium | Medium |
961 | | high | | | |
962 +----------------+------------+------------+------------+-----------+
964 Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC
966 3.1.2. Generation Use Case
968 Energy generation systems are complex infrastructures that require
969 control of both the generated power and the generation
970 infrastructure.
972 3.1.2.1. Control of the Generated Power
974 The electrical power generation frequency must be maintained within a
975 very narrow band. Deviations from the acceptable frequency range are
976 detected and the required signals are sent to the power plants for
977 frequency regulation.
979 Automatic Generation Control (AGC) is a system for adjusting the
980 power output of generators at different power plants, in response to
981 changes in the load.
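The AGC action can be sketched as a frequency-bias response: frequency below nominal means load exceeds generation, so output is raised, and vice versa. The function name and the bias constant below are invented for illustration; real AGC also accounts for tie-line flows and economic dispatch:

```python
def agc_adjustment_mw(measured_hz, nominal_hz=50.0,
                      frequency_bias_mw_per_hz=3000.0):
    """Generation change requested for a frequency deviation.
    Positive result: raise output; negative: lower it."""
    return frequency_bias_mw_per_hz * (nominal_hz - measured_hz)

agc_adjustment_mw(49.95)  # under-frequency: raise output (~ +150 MW)
agc_adjustment_mw(50.05)  # over-frequency:  lower output (~ -150 MW)
```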
983 +---------------------------------------------------+---------------+
984 | FCAG (Frequency Control Automatic Generation) | Attribute |
985 | Requirement | |
986 +---------------------------------------------------+---------------+
987 | One way maximum delay | 500 ms |
988 | Asymmetric delay Required | No |
989 | Maximum jitter | Not critical |
990 | Topology | Point to |
991 | | point |
992 | Bandwidth | 20 Kbps |
993 | Availability | 99.999% |
994 | precise timing required | Yes |
995 | Recovery time on Node failure | N/A |
996 | performance management | Yes, |
997 | | Mandatory |
998 | Redundancy | Yes |
999 | Packet loss | 1% |
1000 +---------------------------------------------------+---------------+
1002 Table 9: FCAG Communication Requirements
1004 3.1.2.2. Control of the Generation Infrastructure
1006 The control of the generation infrastructure combines requirements
1007 from industrial automation systems and energy generation systems. In
1008 this section we present the use case of the control of the generation
1009 infrastructure of a wind turbine.
1011 |
1012 |
1013 | +-----------------+
1014 | | +----+ |
1015 | | |WTRM| WGEN |
1016 WROT x==|===| | |
1017 | | +----+ WCNV|
1018 | |WNAC |
1019 | +---+---WYAW---+--+
1020 | | |
1021 | | | +----+
1022 |WTRF | |WMET|
1023 | | | |
1024 Wind Turbine | +--+-+
1025 Controller | |
1026 WTUR | | |
1027 WREP | | |
1028 WSLG | | |
1029 WALG | WTOW | |
1031 Figure 1: Wind Turbine Control Network
1033 Figure 1 presents the subsystems that operate a wind turbine. These
1034 subsystems include:
1036 o WROT (Rotor Control)
1038 o WNAC (Nacelle Control) (nacelle: housing containing the generator)
1040 o WTRM (Transmission Control)
1042 o WGEN (Generator)
1044 o WYAW (Yaw Controller) (of the tower head)
1046 o WCNV (In-Turbine Power Converter)
1048 o WMET (External Meteorological Station providing real time
1049 information to the controllers of the tower)
1051 Traffic characteristics relevant for the network planning and
1052 dimensioning process in a wind turbine scenario are listed below.
1053 The values in this section are based mainly on the relevant
1054 references [Ahm14] and [Spe09]. Each logical node (Figure 1) is a
1055 part of the metering network and produces analog measurements and
1056 status information which must comply with their respective data rate
1057 constraints.
1059 +-----------+--------+--------+-------------+---------+-------------+
1060 | Subsystem | Sensor | Analog | Data Rate | Status | Data rate |
1061 | | Count | Sample | (bytes/sec) | Sample | (bytes/sec) |
1062 | | | Count | | Count | |
1063 +-----------+--------+--------+-------------+---------+-------------+
1064 | WROT | 14 | 9 | 642 | 5 | 10 |
1065 | WTRM | 18 | 10 | 2828 | 8 | 16 |
1066 | WGEN | 14 | 12 | 73764 | 2 | 4 |
1067 | WCNV | 14 | 12 | 74060 | 2 | 4 |
1068 | WTRF | 12 | 5 | 73740 | 2 | 4 |
1069 | WNAC | 12 | 9 | 112 | 3 | 6 |
1070 | WYAW | 7 | 8 | 220 | 4 | 8 |
1071 | WTOW | 4 | 1 | 8 | 3 | 6 |
1072 | WMET | 7 | 7 | 228 | - | - |
1073 +-----------+--------+--------+-------------+---------+-------------+
1075 Table 10: Wind Turbine Data Rate Constraints
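For dimensioning purposes, the per-subsystem rates of Table 10 can simply be summed; the dictionary below transcribes the table's own values (WMET reports no status samples, transcribed as 0):

```python
# (analog bytes/sec, status bytes/sec) per logical node, from Table 10.
rates = {
    "WROT": (642, 10), "WTRM": (2828, 16), "WGEN": (73764, 4),
    "WCNV": (74060, 4), "WTRF": (73740, 4), "WNAC": (112, 6),
    "WYAW": (220, 8),   "WTOW": (8, 6),    "WMET": (228, 0),
}
total_bytes_s = sum(analog + status for analog, status in rates.values())
total_kbit_s = total_bytes_s * 8 / 1000  # ~1805 kbit/s of metering load
```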
1077 Quality of Service (QoS) constraints for different services are
1078 presented in Table 11. These constraints are defined by IEEE 1646
1079 standard [IEEE1646] and IEC 61400 standard [IEC61400].
1081 +---------------------+---------+-------------+---------------------+
1082 | Service | Latency | Reliability | Packet Loss Rate |
1083 +---------------------+---------+-------------+---------------------+
1084 | Analogue measure | 16 ms | 99.99% | < 10^-6 |
1085 | Status information | 16 ms | 99.99% | < 10^-6 |
1086 | Protection traffic | 4 ms | 100.00% | < 10^-9 |
1087 | Reporting and | 1 s | 99.99% | < 10^-6 |
1088 | logging | | | |
1089 | Video surveillance | 1 s | 99.00% | No specific |
1090 | | | | requirement |
1091 | Internet connection | 60 min | 99.00% | No specific |
1092 | | | | requirement |
1093 | Control traffic | 16 ms | 100.00% | < 10^-9 |
1094 | Data polling | 16 ms | 99.99% | < 10^-6 |
1095 +---------------------+---------+-------------+---------------------+
1097 Table 11: Wind Turbine Reliability and Latency Constraints
1099 3.1.2.2.1. Intra-Domain Network Considerations
1101 A wind turbine is composed of a large set of subsystems including
1102 sensors and actuators which require time-critical operation. The
1103 reliability and latency constraints of these different subsystems are
1104 shown in Table 11. These subsystems are connected to an intra-domain
1105 network which is used to monitor and control the operation of the
1106 turbine and connect it to the SCADA subsystems. The different
1107 components are interconnected using fiber optics, industrial buses,
1108 industrial Ethernet, EtherCAT, or a combination of them. Industrial
1109 signaling and control protocols such as Modbus, Profibus, Profinet
1110 and EtherCAT are used directly on top of the Layer 2 transport or
1111 encapsulated over TCP/IP.
1113 The data collected from the sensors and condition monitoring systems
1114 is multiplexed onto fiber cables for transmission to the base of the
1115 tower, and to remote control centers. The turbine controller
1116 continuously monitors the condition of the wind turbine and collects
1117 statistics on its operation. This controller also manages a large
1118 number of switches, hydraulic pumps, valves, and motors within the
1119 wind turbine.
1121 There is usually a controller both at the bottom of the tower and in
1122 the nacelle. The communication between these two controllers usually
1123 takes place using fiber optics instead of copper links. Sometimes, a
1124 third controller is installed in the hub of the rotor and manages the
1125 pitch of the blades. That unit usually communicates with the nacelle
1126 unit using serial communications.
1128 3.1.2.2.2. Inter-Domain network considerations
1130 A remote control center belonging to a grid operator regulates the
1131 power output, enables remote actuation, and monitors the health of
1132 one or more wind parks in tandem. It connects to the local control
1133 center in a wind park over the Internet (Figure 2) via firewalls at
1134 both ends. The AS path between the remote control center and the
1135 wind park typically involves several ISPs at different tiers. For
1136 example, a remote control center in Denmark can regulate a wind park
1137 in Greece over the normal public AS path between the two locations.
1139 The remote control center is part of the SCADA system, setting the
1140 desired power output to the wind park and reading back the result
1141 once the new power output level has been set. Traffic between the
1142 remote control center and the wind park typically consists of
1143 protocols like IEC 60870-5-104 [IEC-60870-5-104], OPC XML-DA
1144 [OPCXML], Modbus [MODBUS], and SNMP [RFC3411]. Currently, traffic
1145 flows between the wind farm and the remote control center are best
1146 effort. QoS requirements are not strict, so no SLAs or service
1147 provisioning mechanisms (e.g., VPN) are employed. In case of events
1148 like equipment failure, tolerance for alarm delay is on the order of
1149 minutes, due to redundant systems already in place.
1151 +--------------+
1152 | |
1153 | |
1154 | Wind Park #1 +----+
1155 | | | XXXXXX
1156 | | | X XXXXXXXX +----------------+
1157 +--------------+ | XXXX X XXXXX | |
1158 +---+ XXX | Remote Control |
1159 XXX Internet +----+ Center |
1160 +----+X XXX | |
1161 +--------------+ | XXXXXXX XX | |
1162 | | | XX XXXXXXX +----------------+
1163 | | | XXXXX
1164 | Wind Park #2 +----+
1165 | |
1166 | |
1167 +--------------+
1169 Figure 2: Wind Turbine Control via Internet
1171 We expect future use cases which require bounded latency, bounded
1172 jitter and extraordinarily low packet loss for inter-domain traffic
1173 flows due to the softwarization and virtualization of core wind farm
1174 equipment (e.g. switches, firewalls and SCADA server components).
1175 These factors will create opportunities for service providers to
1176 install new services and dynamically manage them from remote
1177 locations. For example, to enable fail-over of a local SCADA server,
1178 a SCADA server in another wind farm site (under the administrative
1179 control of the same operator) could be utilized temporarily
1180 (Figure 3). In that case local traffic would be forwarded to the
1181 remote SCADA server and existing intra-domain QoS and timing
1182 parameters would have to be met for inter-domain traffic flows.
1184 +--------------+
1185 | |
1186 | |
1187 | Wind Park #1 +----+
1188 | | | XXXXXX
1189 | | | X XXXXXXXX +----------------+
1190 +--------------+ | XXXX XXXXX | |
1191 +---+ Operator XXX | Remote Control |
1192 XXX Administered +----+ Center |
1193 +----+X WAN XXX | |
1194 +--------------+ | XXXXXXX XX | |
1195 | | | XX XXXXXXX +----------------+
1196 | | | XXXXX
1197 | Wind Park #2 +----+
1198 | |
1199 | |
1200 +--------------+
1202 Figure 3: Wind Turbine Control via Operator Administered WAN
1204 3.1.3. Distribution use case
1206 3.1.3.1. Fault Location Isolation and Service Restoration (FLISR)
1208 Fault Location, Isolation, and Service Restoration (FLISR) refers to
1209 the ability to automatically locate a fault, isolate it, and restore
1210 service in the distribution network. This will likely be the first
1211 widespread application of distributed intelligence in the grid.
1213 Static power switch status (open/closed) in the network dictates the
1214 power flow to secondary substations. Reconfiguring the network in
1215 the event of a fault is typically done manually on site to energize/
1216 de-energize alternate paths. Automating the operation of substation
1217 switchgear allows the flow of power to be altered automatically under
1218 fault conditions.
1220 FLISR can be managed centrally from a Distribution Management System
1221 (DMS) or executed locally through distributed control via intelligent
1222 switches and fault sensors.
1224 +----------------------+--------------------------------------------+
1225 | FLISR Requirement | Attribute |
1226 +----------------------+--------------------------------------------+
1227 | One way maximum | 80 ms |
1228 | delay | |
1229 | Asymmetric delay | No |
1230 | Required | |
1231 | Maximum jitter | 40 ms |
1232 | Topology | Point to point, point to Multi-point, |
1233 | | Multi-point to Multi-point |
1234 | Bandwidth | 64 Kbps |
1235 | Availability | 99.9999% |
1236 | precise timing | Yes |
1237 | required | |
1238 | Recovery time on | Depends on customer impact |
1239 | Node failure | |
1240 | performance | Yes, Mandatory |
1241 | management | |
1242 | Redundancy | Yes |
1243 | Packet loss | 0.1% |
1244 +----------------------+--------------------------------------------+
1246 Table 12: FLISR Communication Requirements
1248 3.2. Electrical Utilities Today
1250 Many utilities still rely on complex environments formed of multiple
1251 application-specific proprietary networks, including TDM networks.
1253 In this kind of environment there is no mixing of OT and IT
1254 applications on the same network, and information is siloed between
1255 operational areas.
1257 Specific calibration of the full chain is required, which is costly.
1259 This kind of environment prevents utility operations from realizing
1260 the benefits of operational efficiency, visibility, and functional
1261 integration of operational information across grid applications and
1262 data networks.
1264 In addition, there are many security-related issues as discussed in
1265 the following section.
1267 3.2.1. Security Current Practices and Limitations
1269 Grid monitoring and control devices are already targets for cyber
1270 attacks, and legacy telecommunications protocols have many intrinsic
1271 network-related vulnerabilities. For example, DNP3, Modbus,
1272 PROFIBUS/PROFINET, and other protocols are designed around a common
1273 paradigm of request and respond. Each protocol is designed for a
1274 master device such as an HMI (Human Machine Interface) system to send
1275 commands to subordinate slave devices to retrieve data (reading
1276 inputs) or control (writing to outputs). Because many of these
1277 protocols lack authentication, encryption, or other basic security
1278 measures, they are prone to network-based attacks, allowing a
1279 malicious actor or attacker to utilize the request-and-respond system
1280 as a mechanism for command-and-control-like functionality. Specific
1281 security concerns common to most industrial control protocols,
1282 including utility telecommunication protocols, include the following:
1284 o Network or transport errors (e.g. malformed packets or excessive
1285 latency) can cause protocol failure.
1287 o Protocol commands may be available that are capable of forcing
1288 slave devices into inoperable states, including powering-off
1289 devices, forcing them into a listen-only state, or disabling
1290 alarming.
1292 o Protocol commands may be available that are capable of restarting
1293 communications and otherwise interrupting processes.
1295 o Protocol commands may be available that are capable of clearing,
1296 erasing, or resetting diagnostic information such as counters and
1297 diagnostic registers.
1299 o Protocol commands may be available that are capable of requesting
1300 sensitive information about the controllers, their configurations,
1301 or other need-to-know information.
1303 o Most protocols are application layer protocols transported over
1304 TCP; therefore it is easy to transport commands over non-standard
1305 ports or inject commands into authorized traffic flows.
1307 o Protocol commands may be available that are capable of
1308 broadcasting messages to many devices at once (i.e. a potential
1309 DoS).
1311 o Protocol commands may be available to query the device network to
1312 obtain defined points and their values (i.e. a configuration
1313 scan).
1315 o Protocol commands may be available that will list all available
1316 function codes (i.e. a function scan).
1318 These inherent vulnerabilities, along with increasing connectivity
1319 between IT and OT networks, make network-based attacks very feasible.
1321 Simple injection of malicious protocol commands provides control over
1322 the target process. Altering legitimate protocol traffic can also
1323 alter information about a process and disrupt the legitimate controls
1324 that are in place over that process. A man-in-the-middle attack
1325 could provide both control over a process and misrepresentation of
1326 data back to operator consoles.
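As an illustration of the request-and-respond weakness described above, the sketch below assembles a syntactically valid Modbus/TCP "Read Holding Registers" request (function code 0x03). Nothing in the frame identifies or authenticates the sender; any host that can reach the device's TCP port can issue it. The field values shown are hypothetical, and the helper function is ours for illustration only.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP ADU for function code 0x03 (Read Holding Registers).

    The MBAP header carries only a transaction id, protocol id, length,
    and unit id -- there is no authentication or integrity field anywhere
    in the frame, which is the vulnerability discussed in the text above.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)   # function, addr, count
    mbap = struct.pack(">HHHB", transaction_id, 0x0000,  # protocol id 0 = Modbus
                       len(pdu) + 1, unit_id)            # length covers unit id + PDU
    return mbap + pdu

frame = modbus_read_holding_registers(transaction_id=1, unit_id=1,
                                      start_addr=0x0000, count=10)
print(frame.hex())  # a complete 12-byte request, with no credential of any kind
```

The same construction works for write and restart function codes, which is exactly why network-level protections (segmentation, authentication gateways, encrypted transport) are needed around such protocols.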
1328 3.3. Electrical Utilities Future
1330 The business and technology trends that are sweeping the utility
1331 industry will drastically transform the utility business from the way
1332 it has been for many decades. At the core of many of these changes
1333 is a drive to modernize the electrical grid with an integrated
1334 telecommunications infrastructure. However, interoperability
1335 concerns, legacy networks, disparate tools, and stringent security
1336 requirements all add complexity to the grid transformation. Given
1337 the range and diversity of the requirements that should be addressed
1338 by the next generation telecommunications infrastructure, utilities
1339 need to adopt a holistic architectural approach to integrate the
1340 electrical grid with digital telecommunications across the entire
1341 power delivery chain.
1343 The key to modernizing grid telecommunications is to provide a
1344 common, adaptable, multi-service network infrastructure for the
1345 entire utility organization. Such a network serves as the platform
1346 for current capabilities while enabling future expansion of the
1347 network to accommodate new applications and services.
1349 To meet this diverse set of requirements, both today and in the
1350 future, the next generation utility telecommunications network will
1351 be based on an open-standards-based IP architecture. An end-to-end
1352 IP architecture takes advantage of nearly three decades of IP
1353 technology development, facilitating interoperability and device
1354 management across disparate networks and devices, as has already been
1355 demonstrated in many mission-critical and highly secure networks.
1357 IPv6 is seen as a future telecommunications technology for the Smart
1358 Grid; the IEC (International Electrotechnical Commission) and
1359 various National Committees have mandated a specific ad hoc group
1360 (AHG8) to define the migration strategy to IPv6 for all the IEC TC57
1361 power automation standards. AHG8 has recently finalized this work
1362 and issued the following Technical Report: IEC TR 62357-200:2015,
1363 "Guidelines for migration from Internet Protocol version 4 (IPv4) to
1364 Internet Protocol version 6 (IPv6)".
1366 We expect cloud-based SCADA systems to control and monitor the
1367 critical and non-critical subsystems of generation systems, for
1368 example wind farms.
1370 3.3.1. Migration to Packet-Switched Network
1372 Throughout the world, utilities are increasingly planning for a
1373 future based on smart grid applications requiring advanced
1374 telecommunications systems. Many of these applications utilize
1375 packet connectivity for communicating information and control signals
1376 across the utility's Wide Area Network (WAN), made possible by
1377 technologies such as multiprotocol label switching (MPLS). The data
1378 that traverses the utility WAN includes:
1380 o Grid monitoring, control, and protection data
1382 o Non-control grid data (e.g. asset data for condition-based
1383 monitoring)
1385 o Physical safety and security data (e.g. voice and video)
1387 o Remote worker access to corporate applications (voice, maps,
1388 schematics, etc.)
1390 o Field area network backhaul for smart metering, and distribution
1391 grid management
1393 o Enterprise traffic (email, collaboration tools, business
1394 applications)
1396 WANs support this wide variety of traffic to and from substations,
1397 the transmission and distribution grid, generation sites, between
1398 control centers, and between work locations and data centers. To
1399 maintain this rapidly expanding set of applications, many utilities
1400 are taking steps to evolve present time-division multiplexing (TDM)
1401 based and frame relay infrastructures to packet systems. Packet-
1402 based networks are designed to provide greater functionalities and
1403 higher levels of service for applications, while continuing to
1404 deliver reliability and deterministic (real-time) traffic support.
1406 3.3.2. Telecommunications Trends
1408 The following general telecommunications topics are in addition to
1409 the use cases addressed so far. They include both current and
1410 future telecommunications-related topics that should be factored
1411 into the network architecture and design.
1413 3.3.2.1. General Telecommunications Requirements
1415 o IP Connectivity everywhere
1417 o Monitoring services everywhere and from different remote centers
1418 o Move services to a virtual data center
1420 o Unify access to applications / information from the corporate
1421 network
1423 o Unify services
1425 o Unified Communications Solutions
1427 o Mix of fiber and microwave technologies - obsolescence of SONET/
1428 SDH or TDM
1430 o Standardize grid telecommunications protocols on open standards
1431 to ensure interoperability
1433 o Reliable Telecommunications for Transmission and Distribution
1434 Substations
1436 o IEEE 1588 time synchronization Client / Server Capabilities
1438 o Integration of Multicast Design
1440 o QoS Requirements Mapping
1442 o Enable Future Network Expansion
1444 o Substation Network Resilience
1446 o Fast Convergence Design
1448 o Scalable Headend Design
1450 o Define Service Level Agreements (SLA) and Enable SLA Monitoring
1452 o Integration of 3G/4G Technologies and future technologies
1454 o Ethernet Connectivity for Station Bus Architecture
1456 o Ethernet Connectivity for Process Bus Architecture
1458 o Protection, teleprotection and PMU (Phasor Measurement Unit) on IP
1460 3.3.2.2. Specific Network topologies of Smart Grid Applications
1462 Utilities often have very large private telecommunications networks
1463 covering an entire territory or country. The main purpose of the
1464 network, until now, has been to support transmission network
1465 monitoring, control, and automation, remote control of generation
1466 sites, and providing FCAPS (Fault, Configuration, Accounting,
1467 Performance, Security) services from centralized network operation
1468 centers.
1470 Going forward, one network will support operation and maintenance of
1471 electrical networks (generation, transmission, and distribution),
1472 voice and data services for tens of thousands of employees and for
1473 exchange with neighboring interconnections, and administrative
1474 services. To meet those requirements, a utility may deploy several
1475 physical networks leveraging different technologies across the
1476 country: an optical network and a microwave network, for instance.
1477 Each protection and automation system between two points has two
1478 telecommunications circuits, one on each network. Path diversity
1479 between two substations is key. Regardless of the event type
1480 (hurricane, ice storm, etc.), one path shall stay available so the
1481 system can still operate.
1483 In the optical network, signals are transmitted over tens of
1484 thousands of circuits using fiber optic links, microwave links, and
1485 telephone cables. This network is the nervous system of the
1486 utility's power transmission operations. The optical network
1487 represents tens of thousands of km of cable deployed along the power
1488 lines, with individual runs as long as 280 km.
1490 3.3.2.3. Precision Time Protocol
1492 Some utilities do not use GPS clocks in generation substations. One
1493 of the main reasons is that some of the generation plants are 30 to
1494 50 meters deep under ground and the GPS signal can be weak and
1495 unreliable. Instead, atomic clocks are used, synchronized amongst
1496 each other. Rubidium clocks provide the clock signal and 1 ms
1497 timestamps for IRIG-B.
1499 Some companies plan to transition to the Precision Time Protocol
1500 (PTP, [IEEE1588]), distributing the synchronization signal over the
1501 IP/MPLS network. PTP provides a mechanism for synchronizing the
1502 clocks of participating nodes to a high degree of accuracy and
1503 precision.
1505 PTP operates based on the following assumptions:
1507 It is assumed that the network eliminates cyclic forwarding of PTP
1508 messages within each communication path (e.g. by using a spanning
1509 tree protocol).
1511 PTP is tolerant of an occasional missed message, duplicated
1512 message, or message that arrived out of order. However, PTP
1513 assumes that such impairments are relatively rare.
1515 PTP was designed assuming a multicast communication model, however
1516 PTP also supports a unicast communication model as long as the
1517 behavior of the protocol is preserved.
1519 Like all message-based time transfer protocols, PTP time accuracy
1520 is degraded by delay asymmetry in the paths taken by event
1521 messages. Asymmetry is not detectable by PTP; however, if such
1522 delays are known a priori, PTP can correct for them.
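The delay-asymmetry sensitivity noted above follows directly from how PTP computes offset from the four event timestamps (t1: Sync sent by master, t2: Sync received by slave, t3: Delay_Req sent by slave, t4: Delay_Req received by master). A minimal numeric sketch, with made-up timestamps:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP offset/delay computation (assumes symmetric path delay).

    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay  = ((t2 - t1) + (t4 - t3)) / 2
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Symmetric case: true slave offset 5 us, one-way delay 100 us each way.
# Timestamps in microseconds; each t is taken on the local clock.
offset, delay = ptp_offset_and_delay(t1=0, t2=105, t3=200, t4=295)
print(offset, delay)  # 5.0 100.0 -- the true offset is recovered exactly

# Asymmetric case: forward delay 100 us, reverse delay 140 us.
offset, _ = ptp_offset_and_delay(t1=0, t2=105, t3=200, t4=335)
print(offset)  # -15.0 -- the 40 us asymmetry produces a 20 us offset error
```

This shows why, unless asymmetry is known a priori and compensated, half the delay asymmetry appears directly as a clock error.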
1524 IEC 61850 defines the use of IEC/IEEE 61850-9-3:2016, "Precision
1525 time protocol profile for power utility automation". It is based on
1526 Annex B of IEC 62439, which offers support for redundant attachment
1527 of clocks to Parallel Redundancy Protocol (PRP) and High-
1528 availability Seamless Redundancy (HSR) networks.
1530 3.3.3. Security Trends in Utility Networks
1532 Although advanced telecommunications networks can assist in
1533 transforming the energy industry by playing a critical role in
1534 maintaining high levels of reliability, performance, and
1535 manageability, they also introduce the need for an integrated
1536 security infrastructure. Many of the technologies being deployed to
1537 support smart grid projects such as smart meters and sensors can
1538 increase the vulnerability of the grid to attack. Top security
1539 concerns for utilities migrating to an intelligent smart grid
1540 telecommunications platform center on the following trends:
1542 o Integration of distributed energy resources
1544 o Proliferation of digital devices to enable management, automation,
1545 protection, and control
1547 o Regulatory mandates to comply with standards for critical
1548 infrastructure protection
1550 o Migration to new systems for outage management, distribution
1551 automation, condition-based maintenance, load forecasting, and
1552 smart metering
1554 o Demand for new levels of customer service and energy management
1556 This development of a diverse set of networks to support the
1557 integration of microgrids, open-access energy competition, and the
1558 use of network-controlled devices is driving the need for a converged
1559 security infrastructure for all participants in the smart grid,
1560 including utilities, energy service providers, large commercial and
1561 industrial customers, and residential customers. Securing the assets of
1562 electric power delivery systems (from the control center to the
1563 substation, to the feeders and down to customer meters) requires an
1564 end-to-end security infrastructure that protects the myriad of
1565 telecommunications assets used to operate, monitor, and control power
1566 flow and measurement.
1568 "Cyber security" refers to all the security issues in automation and
1569 telecommunications that affect any functions related to the operation
1570 of the electric power systems. Specifically, it involves the
1571 concepts of:
1573 o Integrity: data cannot be altered undetectably

1575 o Authenticity: the telecommunications parties involved must be
1576 validated as genuine

1578 o Authorization: only requests and commands from the authorized
1579 users can be accepted by the system

1581 o Confidentiality: data must not be accessible to any
1582 unauthenticated users
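Of these concepts, integrity and data-origin authenticity can be illustrated with a keyed hash. The sketch below (standard-library Python, illustrative only, with a hypothetical hard-coded shared key) shows how an HMAC lets a receiver detect any alteration of a command in transit:

```python
import hmac
import hashlib

# Hypothetical pre-shared key for illustration; in practice keys would be
# provisioned and rotated by a key-management system, never hard-coded.
KEY = b"example-shared-key"

def protect(command: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can verify origin/integrity."""
    return command + hmac.new(KEY, command, hashlib.sha256).digest()

def verify(message: bytes) -> bool:
    """True only if the trailing 32-byte tag matches the command body."""
    command, tag = message[:-32], message[-32:]
    return hmac.compare_digest(tag, hmac.new(KEY, command, hashlib.sha256).digest())

msg = protect(b"OPEN breaker-7")
print(verify(msg))                        # True
tampered = msg.replace(b"OPEN", b"SHUT")  # attacker flips the command
print(verify(tampered))                   # False -- the alteration is detected
```

Note that an HMAC provides integrity and origin authenticity but not confidentiality; the command itself is still readable on the wire, which is why encryption is listed as a separate requirement.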
1584 When designing and deploying new smart grid devices and
1585 telecommunications systems, it is imperative to understand the
1586 various impacts of these new components under a variety of attack
1587 situations on the power grid. Consequences of a cyber attack on the
1588 grid telecommunications network can be catastrophic. This is why
1589 security for the smart grid is not just an ad hoc feature or
1590 product; it is a complete framework integrating both physical and
1591 cyber security requirements and covering the entire smart grid
1592 network from generation to distribution. Security has therefore become one
1593 of the main foundations of the utility telecom network architecture
1594 and must be considered at every layer with a defense-in-depth
1595 approach. Migrating to IP based protocols is key to address these
1596 challenges for two reasons:
1598 o IP enables a rich set of features and capabilities to enhance the
1599 security posture
1601 o IP is based on open standards, which allows interoperability
1602 between different vendors and products, driving down the costs
1603 associated with implementing security solutions in OT networks.
1605 Securing OT (Operational Technology) telecommunications over packet-
1606 switched IP networks follows the same principles that are foundational
1607 for securing the IT infrastructure, i.e., consideration must be given
1608 to enforcing electronic access control for both person-to-machine and
1609 machine-to-machine communications, and providing the appropriate
1610 levels of data privacy, device and platform integrity, and threat
1611 detection and mitigation.
1613 3.4. Electrical Utilities Asks
1615 o Mixed L2 and L3 topologies
1617 o Deterministic behavior
1619 o Bounded latency and jitter
1621 o Tight feedback intervals
1623 o High availability, low recovery time
1625 o Redundancy, low packet loss
1627 o Precise timing
1629 o Centralized computing of deterministic paths
1631 o Distributed configuration may also be useful
1633 4. Building Automation Systems
1635 4.1. Use Case Description
1637 A Building Automation System (BAS) manages equipment and sensors in a
1638 building for improving residents' comfort, reducing energy
1639 consumption, and responding to failures and emergencies. For
1640 example, the BAS measures the temperature of a room using sensors and
1641 then controls the HVAC (heating, ventilating, and air conditioning)
1642 to maintain a set temperature and minimize energy consumption.
1644 A BAS primarily performs the following functions:
1646 o Periodically measures states of devices, for example humidity and
1647 illuminance of rooms, open/close state of doors, fan speed, etc.
1649 o Stores the measured data.
1651 o Provides the measured data to BAS systems and operators.
1653 o Generates alarms for abnormal state of devices.
1655 o Controls devices (e.g. turn off room lights at 10:00 PM).
1657 4.2. Building Automation Systems Today
1659 4.2.1. BAS Architecture
1661 A typical BAS architecture of today is shown in Figure 4.
1663 +----------------------------+
1664 | |
1665 | BMS HMI |
1666 | | | |
1667 | +----------------------+ |
1668 | | Management Network | |
1669 | +----------------------+ |
1670 | | | |
1671 | LC LC |
1672 | | | |
1673 | +----------------------+ |
1674 | | Field Network | |
1675 | +----------------------+ |
1676 | | | | | |
1677 | Dev Dev Dev Dev |
1678 | |
1679 +----------------------------+
1681 BMS := Building Management Server
1682 HMI := Human Machine Interface
1683 LC := Local Controller
1685 Figure 4: BAS architecture
1687 There are typically two layers of network in a BAS. The upper one is
1688 called the Management Network and the lower one is called the Field
1689 Network. In management networks an IP-based communication protocol
1690 is used, while in field networks non-IP based communication protocols
1691 ("field protocols") are mainly used. Field networks have specific
1692 timing requirements, whereas management networks can be best-effort.
1694 A Human Machine Interface (HMI) is typically a desktop PC used by
1695 operators to monitor and display device states, send device control
1696 commands to Local Controllers (LCs), and configure building schedules
1697 (for example "turn off all room lights in the building at 10:00 PM").
1699 A Building Management Server (BMS) performs the following operations.
1701 o Collect and store device states from LCs at regular intervals.
1703 o Send control values to LCs according to a building schedule.
1705 o Send an alarm signal to operators if it detects abnormal devices
1706 states.
1708 The BMS and HMI communicate with LCs via IP-based "management
1709 protocols" (see standards [bacnetip], [knx]).
1711 An LC is typically a Programmable Logic Controller (PLC) which is
1712 connected to several tens or hundreds of devices using "field
1713 protocols". An LC performs the following kinds of operations:
1715 o Measure device states and provide the information to BMS or HMI.
1717 o Send control values to devices, unilaterally or as part of a
1718 feedback control loop.
1720 There are many field protocols used today; some are standards-based
1721 and others are proprietary (see standards [lontalk], [modbus],
1722 [profibus] and [flnet]). The result is that BASs have multiple MAC/
1723 PHY modules and interfaces. This makes BASs more expensive and
1724 slower to develop, and can result in "vendor lock-in" with multiple
1725 types of management applications.
1727 4.2.2. BAS Deployment Model
1729 An example BAS for medium or large buildings is shown in Figure 5.
1730 The physical layout spans multiple floors, and there is a monitoring
1731 room where the BAS management entities are located. Each floor will
1732 have one or more LCs depending upon the number of devices connected
1733 to the field network.
1735 +--------------------------------------------------+
1736 | Floor 3 |
1737 | +----LC~~~~+~~~~~+~~~~~+ |
1738 | | | | | |
1739 | | Dev Dev Dev |
1740 | | |
1741 |--- | ------------------------------------------|
1742 | | Floor 2 |
1743 | +----LC~~~~+~~~~~+~~~~~+ Field Network |
1744 | | | | | |
1745 | | Dev Dev Dev |
1746 | | |
1747 |--- | ------------------------------------------|
1748 | | Floor 1 |
1749 | +----LC~~~~+~~~~~+~~~~~+ +-----------------|
1750 | | | | | | Monitoring Room |
1751 | | Dev Dev Dev | |
1752 | | | BMS HMI |
1753 | | Management Network | | | |
1754 | +--------------------------------+-----+ |
1755 | | |
1756 +--------------------------------------------------+
1758 Figure 5: BAS Deployment model for Medium/Large Buildings
1760 Each LC is connected to the monitoring room via the Management
1761 network, and the management functions are performed within the
1762 building. In most cases, fast Ethernet (e.g. 100BASE-T) is used for
1763 the management network. Since the management network is non-
1764 realtime, use of Ethernet without quality of service is sufficient
1765 for today's deployment.
1767 In the field network a variety of physical interfaces such as RS232C
1768 and RS485 are used, which have specific timing requirements. Thus if
1769 a field network is to be replaced with an Ethernet or wireless
1770 network, such networks must support time-critical deterministic
1771 flows.
1773 In Figure 6, another deployment model is presented in which the
1774 management system is hosted remotely. This is becoming popular for
1775 small office and residential buildings in which a standalone
1776 monitoring system is not cost-effective.
1778 +---------------+
1779 | Remote Center |
1780 | |
1781 | BMS HMI |
1782 +------------------------------------+ | | | |
1783 | Floor 2 | | +---+---+ |
1784 | +----LC~~~~+~~~~~+ Field Network| | | |
1785 | | | | | | Router |
1786 | | Dev Dev | +-------|-------+
1787 | | | |
1788 |--- | ------------------------------| |
1789 | | Floor 1 | |
1790 | +----LC~~~~+~~~~~+ | |
1791 | | | | | |
1792 | | Dev Dev | |
1793 | | | |
1794 | | Management Network | WAN |
1795 | +------------------------Router-------------+
1796 | |
1797 +------------------------------------+
1799 Figure 6: Deployment model for Small Buildings
1801 Some interoperability is possible today in the Management Network,
1802 but not in today's field networks due to their non-IP-based design.
1804 4.2.3. Use Cases for Field Networks
1806 Below are use cases for Environmental Monitoring, Fire Detection, and
1807 Feedback Control, and their implications for field network
1808 performance.
1810 4.2.3.1. Environmental Monitoring
1812 The BMS polls each LC at a maximum measurement interval of 100ms (for
1813 example to draw a historical chart of 1 second granularity with a 10x
1814 sampling interval) and then performs the operations as specified by
1815 the operator. Each LC needs to measure each of its several hundred
1816 sensors once per measurement interval. Latency is not critical in
1817 this scenario as long as all sensor readings are collected within
1818 the measurement interval. Availability is expected to be 99.999%.
1820 4.2.3.2. Fire Detection
1822 On detection of a fire, the BMS must stop the HVAC, close the fire
1823 shutters, turn on the fire sprinklers, send an alarm, etc. There are
1824 typically ~10s of sensors per LC that the BMS needs to manage. In
1825 this scenario the measurement interval is 10-50ms, the communication
1826 delay is 10ms, and the availability must be 99.9999%.
1828 4.2.3.3. Feedback Control
1830 BAS systems utilize feedback control in various ways; the most time-
1831 critical is control of DC motors, which require a short feedback
1832 interval (1-5ms) with low communication delay (10ms) and jitter
1833 (1ms). The feedback interval depends on the characteristics of the
1834 device and a target quality of control value. There are typically
1835 ~10s of such devices per LC.
1837 Communication delay is expected to be less than 10 ms, jitter less
1838 than 1 ms, while the availability must be 99.9999%.
1840 4.2.4. Security Considerations
1842 When BAS field networks were developed it was assumed that the field
1843 networks would always be physically isolated from external networks
1844 and therefore security was not a concern. In today's world many BASs
1845 are managed remotely and are thus connected to shared IP networks and
1846 so security is definitely a concern, yet security features are not
1847 available in the majority of BAS field network deployments.
1849 The management network, being an IP-based network, has the protocols
1850 available to enable network security, but in practice many BAS
1851 systems do not implement even the available security features such as
1852 device authentication or encryption for data in transit.
1854 4.3. BAS Future
1856 In the future we expect more fine-grained environmental monitoring
1857 and lower energy consumption, which will require more sensors and
1858 devices, thus requiring larger and more complex building networks.
1860 We expect building networks to be connected to or converged with
1861 other networks (Enterprise network, Home network, and Internet).
1863 Therefore better facilities for network management, control,
1864 reliability and security are critical in order to improve resident
1865 and operator convenience and comfort. For example the ability to
1866 monitor and control building devices via the internet would enable
1867 (for example) control of room lights or HVAC from a resident's
1868 desktop PC or phone application.
1870 4.4. BAS Asks
1872 The community would like to see an interoperable protocol
1873 specification that can satisfy the timing, security, availability and
1874 QoS constraints described above, such that the resulting converged
1875 network can replace the disparate field networks. Ideally this
1876 connectivity could extend to the open Internet.
1878 This would imply an architecture that can guarantee
1880 o Low communication delays (from <10ms to 100ms in a network of
1881 several hundred devices)
1883 o Low jitter (< 1 ms)
1885 o Tight feedback intervals (1ms - 10ms)
1887 o High network availability (up to 99.9999% )
1889 o Availability of network data in disaster scenario
1891 o Authentication between management and field devices (both local
1892 and remote)
1894 o Integrity and data origin authentication of communication data
1895 between field and management devices
1897 o Confidentiality of data when communicated to a remote device
1899 5. Wireless for Industrial
1901 5.1. Use Case Description
1903 Wireless networks are useful for industrial applications, for example
1904 when portable, fast-moving or rotating objects are involved, and for
1905 the resource-constrained devices found in the Internet of Things
1906 (IoT).
1908 Such network-connected sensors, actuators, control loops (etc.)
1909 typically require that the underlying network support real-time
1910 quality of service (QoS), as well as specific classes of other
1911 network properties such as reliability, redundancy, and security.
1913 These networks may also contain very large numbers of devices, for
1914 example for factories, "big data" acquisition, and the IoT. Given
1915 the large numbers of devices installed, and the potential
1916 pervasiveness of the IoT, this is a huge and very cost-sensitive
1917 market. For example, a 1% cost reduction in some areas could save
1918 $100B.
1920 5.1.1. Network Convergence using 6TiSCH
1922 Some wireless network technologies support real-time QoS, and are
1923 thus useful for these kinds of networks, but others do not. For
1924 example WiFi is pervasive but does not provide guaranteed timing or
1925 delivery of packets, and thus is not useful in this context.
1927 In this use case we focus on one specific wireless network technology
1928 which does provide the required deterministic QoS, which is "IPv6
1929 over the TSCH mode of IEEE 802.15.4e" (6TiSCH, where TSCH stands for
1930 "Time-Slotted Channel Hopping", see [I-D.ietf-6tisch-architecture],
1931 [IEEE802154], [IEEE802154e], and [RFC7554]).
1933 There are other deterministic wireless buses and networks available
1934 today; however, they are incompatible with each other and
1935 incompatible with IP traffic (for example [ISA100], [WirelessHART]).
1937 Thus the primary goal of this use case is to apply 6TiSCH as a
1938 converged IP- and standards-based wireless network for industrial
1939 applications, i.e. to replace multiple proprietary and/or
1940 incompatible wireless networking and wireless network management
1941 standards.
1943 5.1.2. Common Protocol Development for 6TiSCH
1945 Today there are a number of protocols required by 6TiSCH which are
1946 still in development, and a second intent of this use case is to
1947 highlight the ways in which these "missing" protocols share goals in
1948 common with DetNet. Thus it is possible that some of the protocol
1949 technology developed for DetNet will also be applicable to 6TiSCH.
1951 These protocol goals are identified here, along with their
1952 relationship to DetNet. It is likely that ultimately the resulting
1953 protocols will not be identical, but will share design principles
1954 which contribute to the efficiency of enabling both DetNet and 6TiSCH.
1956 One such commonality is that, although at different time scales, in
1957 both TSN [IEEE802.1TSNTG] and TSCH a packet crossing the network
1958 from node to node follows a precise schedule, like a train that
1959 leaves intermediate stations at precise times along its path. This
1960 kind of operation reduces collisions, saves energy, and enables
1961 engineering the network for deterministic properties.
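The train-schedule behavior comes from TSCH's slotted, channel-hopped schedule: each scheduled cell is a (timeslot, channel offset) pair, and the operating channel for a cell is derived from the Absolute Slot Number (ASN), i.e., the number of slots elapsed since the network started. A minimal sketch of the hopping computation; the hopping sequence shown is an arbitrary example, not the IEEE-defined default:

```python
# TSCH channel hopping: the operational channel for a scheduled cell is
#   channel = hopping_sequence[(ASN + channel_offset) % sequence_length]
# The sequence below is an arbitrary example of IEEE 802.15.4 2.4 GHz
# channel numbers, not the standard's default sequence.

HOPPING_SEQUENCE = [11, 16, 21, 26, 12, 17, 22, 25]

def cell_channel(asn, channel_offset, sequence=HOPPING_SEQUENCE):
    """Channel used at absolute slot 'asn' by a cell with this channel offset."""
    return sequence[(asn + channel_offset) % len(sequence)]

# A cell with a fixed channel offset still hops frequency slot by slot,
# so repeated transmissions experience different channel conditions:
print([cell_channel(asn, channel_offset=2) for asn in range(4)])  # [21, 26, 12, 17]
```

Because the schedule determines exactly when and on which channel each node transmits, nodes can sleep in all other slots, which is the source of the collision-avoidance and energy-saving properties noted above.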
1963 Another commonality is remote monitoring and scheduling management of
1964 a TSCH network by a Path Computation Element (PCE) and Network
1965 Management Entity (NME). The PCE/NME manage timeslots and device
1966 resources in a manner that minimizes the interaction with and the
1967 load placed on resource-constrained devices. For example, a tiny IoT
1968 device may have just enough buffers to store one or a few IPv6
1969 packets, and will have limited bandwidth between peers such that it
1970 can maintain only a small amount of peer information, and will not be
1971 able to store many packets waiting to be forwarded. It is
1972 advantageous then for it to only be required to carry out the
1973 specific behavior assigned to it by the PCE/NME (as opposed to
1974 maintaining its own IP stack, for example).
1976 Note: Current WG discussion indicates that some peer-to-peer
1977 communication must be assumed, i.e. the PCE may communicate only
1978 indirectly with any given device, enabling hierarchical configuration
1979 of the system.
1981 6TiSCH depends on [PCE] and [I-D.finn-detnet-architecture].
1983 6TiSCH also depends on the fact that DetNet will maintain consistency
1984 with [IEEE802.1TSNTG].
1986 5.2. Wireless Industrial Today
1988 Today industrial wireless is accomplished using multiple
1989 deterministic wireless networks which are incompatible with each
1990 other and with IP traffic.
1992 6TiSCH is not yet fully specified, so it cannot be used in today's
1993 applications.
1995 5.3. Wireless Industrial Future
1997 5.3.1. Unified Wireless Network and Management
1999 We expect DetNet and 6TiSCH together to enable converged transport of
2000 deterministic and best-effort traffic flows between real-time
2001 industrial devices and wide area networks via IP routing. A high
2002 level view of a basic such network is shown in Figure 7.
2004 ---+-------- ............ ------------
2005 | External Network |
2006 | +-----+
2007 +-----+ | NME |
2008 | | LLN Border | |
2009 | | router +-----+
2010 +-----+
2011 o o o
2012 o o o o
2013 o o LLN o o o
2014 o o o o
2015 o
2017 Figure 7: Basic 6TiSCH Network
2019 Figure 8 shows a backbone router federating multiple synchronized
2020 6TiSCH subnets into a single subnet connected to the external
2021 network.
2023 ---+-------- ............ ------------
2024 | External Network |
2025 | +-----+
2026 | +-----+ | NME |
2027 +-----+ | +-----+ | |
2028 | | Router | | PCE | +-----+
2029 | | +--| |
2030 +-----+ +-----+
2031 | |
2032 | Subnet Backbone |
2033 +--------------------+------------------+
2034 | | |
2035 +-----+ +-----+ +-----+
2036 | | Backbone | | Backbone | | Backbone
2037 o | | router | | router | | router
2038 +-----+ +-----+ +-----+
2039 o o o o o
2040 o o o o o o o o o o o
2041 o o o LLN o o o o
2042 o o o o o o o o o o o o
2044 Figure 8: Extended 6TiSCH Network
2046 The backbone router must ensure end-to-end deterministic behavior
2047 between the LLN and the backbone. We would like to see this
2048 accomplished in conformance with the work done in
2049 [I-D.finn-detnet-architecture] with respect to Layer-3 aspects of
2050 deterministic networks that span multiple Layer-2 domains.
2052 The PCE must compute a deterministic path end-to-end across the TSCH
2053 network and IEEE802.1 TSN Ethernet backbone, and DetNet protocols are
2054 expected to enable end-to-end deterministic forwarding.
2056 +-----+
2057 | IoT |
2058 | G/W |
2059 +-----+
2060 ^ <---- Elimination
2061 | |
2062 Track branch | |
2063 +-------+ +--------+ Subnet Backbone
2064 | |
2065 +--|--+ +--|--+
2066 | | | Backbone | | | Backbone
2067 o | | | router | | | router
2068 +--/--+ +--|--+
2069 o / o o---o----/ o
2070 o o---o--/ o o o o o
2071 o \ / o o LLN o
2072 o v <---- Replication
2073 o
2075 Figure 9: 6TiSCH Network with PRE
2077 5.3.1.1. PCE and 6TiSCH ARQ Retries
2079 Note: The use of ARQ techniques in DetNet is currently considered
2080 a possible design alternative.
2082 6TiSCH uses the IEEE802.15.4 Automatic Repeat-reQuest (ARQ) mechanism
2083 to provide higher reliability of packet delivery. ARQ is related to
2084 packet replication and elimination because there are two independent
2085 paths for packets to arrive at the destination, and if an expected
2086 packet does not arrive on one path then the destination checks for
2087 the packet on the second path.
2089 Although to date this mechanism is only used by wireless networks,
2090 this may be a technique that would be appropriate for DetNet and so
2091 aspects of the enabling protocol could be co-developed.
2093 For example, in Figure 9, a Track is laid out from a field device in
2094 a 6TiSCH network to an IoT gateway that is located on an IEEE802.1 TSN
2095 backbone.
2097 In ARQ the Replication function in the field device sends a copy of
2098 each packet over two different branches, and the PCE schedules each
2099 hop of both branches so that the two copies arrive in due time at the
2100 gateway. In case of a loss on one branch, hopefully the other copy
2101 of the packet still arrives within the allocated time. If two copies
2102 make it to the IoT gateway, the Elimination function in the gateway
2103 ignores the extra packet and presents only one copy to upper layers.
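As an informal illustration of the replication and elimination behavior described above, the following sketch models a Replication function at the field device and an Elimination function at the gateway. This is an illustrative model only, not part of any 6TiSCH or DetNet specification; the sequence-number scheme and all names are assumptions.

```python
# Sketch of DetNet-style packet replication and elimination (PRE).
# Hypothetical model: real 6TiSCH/DetNet flow identification and
# sequencing are protocol-specific; names here are illustrative.

class Replicator:
    """Field device: sends a copy of each packet over two branches."""
    def __init__(self, branch_a, branch_b):
        self.branches = (branch_a, branch_b)
        self.seq = 0

    def send(self, payload):
        pkt = (self.seq, payload)
        self.seq += 1
        for branch in self.branches:
            branch.append(pkt)      # each branch is an independent path

class Eliminator:
    """IoT gateway: presents only one copy of each packet upward."""
    def __init__(self):
        self.seen = set()
        self.delivered = []

    def receive(self, pkt):
        seq, payload = pkt
        if seq in self.seen:
            return                  # duplicate: ignore the extra copy
        self.seen.add(seq)
        self.delivered.append(payload)

branch_a, branch_b = [], []
r = Replicator(branch_a, branch_b)
r.send("sample-1")
r.send("sample-2")

e = Eliminator()
branch_a.pop(1)                     # simulate loss of "sample-2" on branch A
for pkt in branch_a + branch_b:
    e.receive(pkt)

print(e.delivered)                  # upper layers see each sample exactly once
```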
2105 At each 6TiSCH hop along the Track, the PCE may schedule more than
2106 one timeSlot for a packet, so as to support Layer-2 retries (ARQ).
2108 In current deployments, a TSCH Track does not necessarily support PRE
2109 but is systematically multi-path. This means that a Track is
2110 scheduled so as to ensure that each hop has at least two forwarding
2111 solutions, and the forwarding decision is to try the preferred one
2112 and use the other in case of Layer-2 transmission failure as detected
2113 by ARQ.
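The per-hop forwarding decision described above (try the preferred forwarding solution, fall back to the alternate upon Layer-2 transmission failure detected by ARQ) can be sketched as follows; the retry budget and function names are illustrative assumptions, not 6TiSCH behavior as specified.

```python
# Sketch of multi-path Track forwarding with ARQ fallback.
# Illustrative only: real TSCH schedules and ARQ operate at Layer 2.

MAX_RETRIES = 3  # assumed per-hop ARQ retry budget

def forward(packet, preferred_hop, alternate_hop):
    """Try the preferred next hop; on repeated L2 failure use the alternate."""
    for _ in range(MAX_RETRIES):
        if preferred_hop(packet):   # True = Layer-2 ACK received
            return "preferred"
    # ARQ exhausted on the preferred solution: try the second one
    if alternate_hop(packet):
        return "alternate"
    return "lost"

# Usage: a preferred hop that always fails, an alternate that succeeds.
always_fail = lambda pkt: False
always_ok = lambda pkt: True
print(forward("sample", always_fail, always_ok))  # -> "alternate"
```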
2115 5.3.2. Schedule Management by a PCE
2117 A common feature of 6TiSCH and DetNet is the action of a PCE to
2118 configure paths through the network. Specifically, what is needed is
2119 a protocol and data model that the PCE will use to get/set the
2120 relevant configuration from/to the devices, as well as perform
2121 operations on the devices. We expect that this protocol will be
2122 developed by DetNet with consideration for its reuse by 6TiSCH. The
2123 remainder of this section provides a bit more context from the 6TiSCH
2124 side.
2126 5.3.2.1. PCE Commands and 6TiSCH CoAP Requests
2128 A 6TiSCH device is not expected to place a request for bandwidth
2129 between itself and another device in the network. Rather, an
2130 operation control system invoked through a human interface specifies
2131 the end nodes and the required traffic specification (in terms of
2132 latency and reliability). Based on this information, the PCE must
2133 compute a path between the end nodes and provision the network with
2134 per-flow state that describes the per-hop operation for a given
2135 packet, the corresponding timeslots, and the flow identification that
2136 enables recognizing that a certain packet belongs to a certain path,
2137 etc.
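As an informal illustration of the per-flow state described above, the following sketch models what a PCE might provision into each node along a computed path: the per-hop operation, the assigned timeslots, and the flow identification. All field and function names are hypothetical, not taken from any DetNet data model.

```python
# Sketch of hypothetical per-flow state provisioned by a PCE:
# per-hop operation, assigned timeslots, and flow identification.
from dataclasses import dataclass, field

@dataclass
class HopState:
    flow_id: str                 # lets the node recognize the flow's packets
    operation: str               # e.g. "forward", "replicate", "eliminate"
    next_hop: str
    timeslots: list = field(default_factory=list)  # slot offsets in slotframe

def provision_path(flow_id, hops):
    """PCE side: build the per-hop state for a computed path."""
    state = {}
    for node, op, nxt, slots in hops:
        state[node] = HopState(flow_id, op, nxt, slots)
    return state

path = provision_path("flow-7", [
    ("sensor-1", "replicate", "rtr-A", [3, 4]),  # extra slot for an ARQ retry
    ("rtr-A",    "forward",   "gw",    [9]),
    ("gw",       "eliminate", "host",  [12]),
])
print(path["sensor-1"].timeslots)   # -> [3, 4]
```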
2139 For a static configuration that serves a certain purpose for a long
2140 period of time, it is expected that a node will be provisioned in one
2141 shot with a full schedule, which incorporates the aggregation of its
2142 behavior for multiple paths. 6TiSCH expects that the programming of
2143 the schedule will be done over CoAP as discussed in
2144 [I-D.ietf-6tisch-coap].
2146 6TiSCH expects that the PCE commands will be mapped back and forth
2147 into CoAP by a gateway function at the edge of the 6TiSCH network.
2148 For instance, it is possible that a mapping entity on the backbone
2149 transforms a non-CoAP protocol such as PCEP into the RESTful
2150 interfaces that the 6TiSCH devices support. This architecture will
2151 be refined to comply with DetNet [I-D.finn-detnet-architecture] when
2152 the work is formalized. Related information about 6TiSCH can be
2153 found at [I-D.ietf-6tisch-6top-interface] and RPL [RFC6550].
2155 A protocol may be used to update the state in the devices during
2156 runtime, for example if it appears that a path through the network
2157 has ceased to perform as expected, but in 6TiSCH such an update
2158 mechanism was not designed and no protocol was selected. We would
2159 like DetNet to define the appropriate end-to-end protocols for that case.
2160 The implication is that these state updates take place once the
2161 system is configured and running, i.e. they are not limited to the
2162 initial communication of the configuration of the system.
2164 A "slotFrame" is the base object that a PCE would manipulate to
2165 program a schedule into an LLN node ([I-D.ietf-6tisch-architecture]).
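As an informal illustration, a slotFrame can be modeled as a repeating matrix of cells indexed by timeslot and channel offset, which a PCE fills with scheduled transmissions. The details below are assumptions for illustration and differ from the actual 6TiSCH data model.

```python
# Sketch of a TSCH slotframe: a repeating matrix of cells indexed by
# (timeslot offset, channel offset). Illustrative model only.

class SlotFrame:
    def __init__(self, num_slots, num_channels):
        self.num_slots = num_slots
        self.num_channels = num_channels
        self.cells = {}   # (slot, channel) -> (tx_node, rx_node)

    def schedule(self, slot, channel, tx, rx):
        """PCE operation: allocate a cell to a directed link."""
        key = (slot, channel)
        if key in self.cells:
            raise ValueError("cell already allocated")
        self.cells[key] = (tx, rx)

    def cell_at(self, asn, channel):
        """Which link owns this cell at Absolute Slot Number `asn`?
        The slotframe repeats, hence the modulo."""
        return self.cells.get((asn % self.num_slots, channel))

sf = SlotFrame(num_slots=101, num_channels=16)
sf.schedule(slot=5, channel=2, tx="node-A", rx="node-B")
print(sf.cell_at(asn=106, channel=2))   # 106 % 101 == 5, so node-A -> node-B
```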
2167 We would like to see the PCE read energy data from devices, and
2168 compute paths that will implement policies on how energy in devices
2169 is consumed, for instance to ensure that the spent energy does not
2170 exceed the available energy over a period of time. Note: this
2171 statement implies that an extensible protocol for communicating
2172 device info to the PCE and enabling the PCE to act on it will be part
2173 of the DetNet architecture, however for subnets with specific
2174 protocols (e.g. CoAP) a gateway may be required.
2176 6TiSCH devices can discover their neighbors over the radio using a
2177 mechanism such as beacons, but even though the neighbor information
2178 is available in the 6TiSCH interface data model, 6TiSCH does not
2179 describe a protocol to proactively push the neighborhood information
2180 to a PCE. We would like to see DetNet define such a protocol; one
2181 possible design alternative is that it could operate over CoAP,
2182 alternatively it could be converted to/from CoAP by a gateway. We
2183 would like to see such a protocol carry multiple metrics, for example
2184 similar to those used for RPL operations [RFC6551].
2186 5.3.2.2. 6TiSCH IP Interface
2188 "6top" ([I-D.wang-6tisch-6top-sublayer]) is a logical link control
2189 sitting between the IP layer and the TSCH MAC layer which provides
2190 the link abstraction that is required for IP operations. The 6top
2191 data model and management interfaces are further discussed in
2192 [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].
2194 An IP packet that is sent along a 6TiSCH path uses the Differentiated
2195 Services Per-Hop-Behavior Group called Deterministic Forwarding, as
2196 described in [I-D.svshah-tsvwg-deterministic-forwarding].
2198 5.3.3. 6TiSCH Security Considerations
2200 On top of the classical requirements for protection of control
2201 signaling, it must be noted that 6TiSCH networks operate on limited
2202 resources that can be depleted rapidly in a DoS attack on the system,
2203 for instance by placing a rogue device in the network, or by
2204 obtaining management control and setting up unexpected additional
2205 paths.
2207 5.4. Wireless Industrial Asks
2209 6TiSCH depends on DetNet to define:
2211 o Configuration (state) and operations for deterministic paths
2213 o End-to-end protocols for deterministic forwarding (tagging, IP)
2215 o Protocol for packet replication and elimination
2217 6. Cellular Radio
2219 6.1. Use Case Description
2221 This use case describes the application of deterministic networking
2222 in the context of cellular telecom transport networks. Important
2223 elements include time synchronization, clock distribution, and ways
2224 of establishing time-sensitive streams for both Layer-2 and Layer-3
2225 user plane traffic.
2227 6.1.1. Network Architecture
2229 Figure 10 illustrates a typical 3GPP-defined cellular network
2230 architecture, which includes "Fronthaul", "Midhaul" and "Backhaul"
2231 network segments. The "Fronthaul" is the network connecting base
2232 stations (baseband processing units) to the remote radio heads
2233 (antennas). The "Midhaul" is the network inter-connecting base
2234 stations (or small cell sites). The "Backhaul" is the network or
2235 links connecting the radio base station sites to the network
2236 controller/gateway sites (i.e. the core of the 3GPP cellular
2237 network).
2239 In Figure 10 "eNB" ("E-UTRAN Node B") is the hardware that is
2240 connected to the mobile phone network which communicates directly
2241 with mobile handsets ([TS36300]).
2243 Y (remote radio heads (antennas))
2244 \
2245 Y__ \.--. .--. +------+
2246 \_( `. +---+ _(Back`. | 3GPP |
2247 Y------( Front )----|eNB|----( Haul )----| core |
2248 ( ` .Haul ) +---+ ( ` . ) ) | netw |
2249 /`--(___.-' \ `--(___.-' +------+
2250 Y_/ / \.--. \
2251 Y_/ _( Mid`. \
2252 ( Haul ) \
2253 ( ` . ) ) \
2254 `--(___.-'\_____+---+ (small cell sites)
2255 \ |SCe|__Y
2256 +---+ +---+
2257 Y__|eNB|__Y
2258 +---+
2259 Y_/ \_Y ("local" radios)
2261 Figure 10: Generic 3GPP-based Cellular Network Architecture
2263 6.1.2. Delay Constraints
2265 The available processing time for Fronthaul networking overhead is
2266 limited to the available time after the baseband processing of the
2267 radio frame has completed. For example in Long Term Evolution (LTE)
2268 radio, processing of a radio frame is allocated 3ms but typically the
2269 processing uses most of it, allowing only a small fraction to be used
2270 by the Fronthaul network (e.g. up to 250us one-way delay, though the
2271 existing spec ([NGMN-fronth]) supports delay only up to 100us). This
2272 ultimately determines the distance the remote radio heads can be
2273 located from the base stations (e.g., 100us equals roughly 20 km of
2274 optical fiber-based transport). Allocation options of the available
2275 time budget between processing and transport are under heavy
2276 discussions in the mobile industry.
2278 For packet-based transport the allocated transport time (e.g. CPRI
2279 would allow for 100us delay [CPRI]) is consumed by all nodes and
2280 buffering between the remote radio head and the baseband processing
2281 unit, plus the distance-incurred delay.
2283 The baseband processing time and the available "delay budget" for the
2284 fronthaul is likely to change in the forthcoming "5G" due to reduced
2285 radio round trip times and other architectural and service
2286 requirements [NGMN].
2288 The transport time budget, as noted above, places limitations on the
2289 distance that remote radio heads can be located from base stations
2290 (i.e. the link length). In the above analysis, the entire transport
2291 time budget is assumed to be available for link propagation delay.
2292 However the transport time budget can be broken down into three
2293 components: scheduling /queueing delay, transmission delay, and link
2294 propagation delay. Using today's Fronthaul networking technology,
2295 the queuing, scheduling and transmission components might become the
2296 dominant factors in the total transport time rather than the link
2297 propagation delay. This is especially true in cases where the
2298 Fronthaul link is relatively short and it is shared among multiple
2299 Fronthaul flows, for example in indoor and small cell networks,
2300 massive MIMO antenna networks, and split Fronthaul architectures.
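The budget decomposition above can be made concrete with a rough worked example, assuming a 100us one-way budget and a propagation speed in optical fiber of roughly 200,000 km/s (i.e. about 2/3 of c); all other numbers are purely illustrative.

```python
# Rough illustration of the Fronthaul transport time budget:
# budget = queuing/scheduling delay + transmission delay + propagation delay.
# All numbers are assumptions for illustration only.

C_FIBER_KM_PER_US = 0.2     # ~200,000 km/s in fiber => 0.2 km per microsecond

def max_link_km(budget_us, queuing_us, transmission_us):
    """Link length supportable by the residual propagation budget."""
    propagation_us = budget_us - queuing_us - transmission_us
    if propagation_us < 0:
        return 0.0
    return propagation_us * C_FIBER_KM_PER_US

# Entire 100 us budget spent on propagation: roughly 20 km of fiber.
print(max_link_km(100, 0, 0))

# With 40 us consumed by queuing/scheduling and 10 us by transmission,
# only about 10 km remain -- reducing queuing delay (e.g. via DetNet
# resource assignment) directly buys back link length.
print(max_link_km(100, 40, 10))
```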
2302 DetNet technology can improve this application by controlling and
2303 reducing the time required for the queuing, scheduling and
2304 transmission operations by properly assigning the network resources,
2305 thus leaving more of the transport time budget available for link
2306 propagation, and thus enabling longer link lengths. However, link
2307 length is usually a given parameter and is not a controllable network
2308 parameter, since RRH and BBU sites are usually located in
2309 predetermined locations. On the other hand, the number of antennas
2310 at an RRH site might increase, for example by adding more antennas to
2311 increase the MIMO capability of the network or to support massive
2312 MIMO. This means increasing the number of fronthaul flows sharing
2313 the same fronthaul link. DetNet can then control the bandwidth
2314 assignment of the fronthaul link and the scheduling of fronthaul
2315 packets over this link, and provide adequate buffer provisioning for
2316 each flow to reduce the packet loss rate.
2318 Another way in which DetNet technology can aid Fronthaul networks is
2319 by providing effective isolation from best-effort (and other classes
2320 of) traffic, which can arise as a result of network slicing in 5G
2321 networks where Fronthaul traffic generated in different network
2322 slices might have differing performance requirements. DetNet
2323 technology can also dynamically control the bandwidth assignment,
2324 scheduling and packet forwarding decisions and the buffer
2325 provisioning of the Fronthaul flows to guarantee the end-to-end delay
2326 of the Fronthaul packets and minimize the packet loss rate.
2328 [METIS] documents the fundamental challenges as well as overall
2329 technical goals of the future 5G mobile and wireless system as the
2330 starting point. These future systems should support much higher data
2331 volumes and rates and significantly lower end-to-end latency for 100x
2332 more connected devices (at similar cost and energy consumption levels
2333 as today's system).
2335 For Midhaul connections, delay constraints are driven by Inter-Site
2336 radio functions like Coordinated Multipoint Processing (CoMP, see
2337 [CoMP]). CoMP reception and transmission is a framework in which
2338 multiple geographically distributed antenna nodes cooperate to
2339 improve the performance of the users served in the common cooperation
2340 area. The design principle of CoMP is to extend the current single-
2341 cell to multi-UE (User Equipment) transmission to a multi-cell-to-
2342 multi-UEs transmission by base station cooperation.
2344 CoMP has delay-sensitive performance parameters, which are "midhaul
2345 latency" and "CSI (Channel State Information) reporting and
2346 accuracy". The essential feature of CoMP is signaling between eNBs,
2347 so Midhaul latency is the dominating limitation of CoMP performance.
2348 Generally, CoMP can benefit from coordinated scheduling (either
2349 distributed or centralized) of different cells if the signaling delay
2350 between eNBs is within 1-10ms. This delay requirement is both rigid
2351 and absolute because any uncertainty in delay will degrade the
2352 performance significantly.
2354 Inter-site CoMP is one of the key requirements for 5G and is also a
2355 near-term goal for the current 4.5G network architecture.
2357 6.1.3. Time Synchronization Constraints
2359 Fronthaul time synchronization requirements are given by [TS25104],
2360 [TS36104], [TS36211], and [TS36133]. These can be summarized for the
2361 current 3GPP LTE-based networks as:
2363 Delay Accuracy:
2364 +-8ns (i.e. +-1/32 Tc, where Tc is the UMTS Chip time of 1/3.84
2365 MHz) resulting in a round trip accuracy of +-16ns. The value is
2366 this low to meet the 3GPP Timing Alignment Error (TAE) measurement
2367 requirements. Note: performance guarantees of low nanosecond
2368 values such as these are considered to be below the DetNet layer -
2369 it is assumed that the underlying implementation, e.g. the
2370 hardware, will provide sufficient support (e.g. buffering) to
2371 enable this level of accuracy. These values are maintained in the
2372 use case to give an indication of the overall application.
2374 Timing Alignment Error:
2375 Timing Alignment Error (TAE) is problematic to Fronthaul networks
2376 and must be minimized. If the transport network cannot guarantee
2377 low enough TAE then additional buffering has to be introduced at
2378 the edges of the network to buffer out the jitter. Buffering is
2379 not desirable as it reduces the total available delay budget.
2380 Packet Delay Variation (PDV) requirements can be derived from TAE
2381 for packet based Fronthaul networks.
2383 * For multiple input multiple output (MIMO) or TX diversity
2384 transmissions, at each carrier frequency, TAE shall not exceed
2385 65 ns (i.e. 1/4 Tc).
2387 * For intra-band contiguous carrier aggregation, with or without
2388 MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2
2389 Tc).
2391 * For intra-band non-contiguous carrier aggregation, with or
2392 without MIMO or TX diversity, TAE shall not exceed 260 ns (i.e.
2393 one Tc).
2395 * For inter-band carrier aggregation, with or without MIMO or TX
2396 diversity, TAE shall not exceed 260 ns.
2398 Transport link contribution to radio frequency error:
2399 +-2 PPB. This value is considered to be "available" for the
2400 Fronthaul link out of the total 50 PPB budget reserved for the
2401 radio interface. Note: the reason that the transport link
2402 contributes to radio frequency error is as follows. The current
2403 way of doing Fronthaul is from the radio unit to remote radio head
2404 directly. The remote radio head is essentially a passive device
2405 (without buffering etc.) The transport drives the antenna
2406 directly by feeding it with samples and everything the transport
2407 adds will be introduced to the radio as-is, so any additional
2408 frequency error caused by the transport shows up immediately on the
2409 radio as well. Note: performance guarantees of low nanosecond
2410 values such as these are considered to be below the DetNet layer -
2411 it is assumed that the underlying implementation, e.g. the
2412 hardware, will provide sufficient support to enable this level of
2413 performance. These values are maintained in the use case to give
2414 an indication of the overall application.
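The Tc-derived values quoted above can be checked with a short computation; the script below is used purely as a calculator for the figures cited from the 3GPP specifications.

```python
# Check of the timing numbers above: Tc is the UMTS chip time, and the
# delay accuracy and TAE limits are expressed as fractions/multiples of Tc.

Tc_ns = 1e9 / 3.84e6                  # 1 / 3.84 MHz, about 260.4 ns

delay_accuracy_ns = Tc_ns / 32        # about 8 ns one-way (+-1/32 Tc)
tae_mimo_ns = Tc_ns / 4               # about 65 ns (1/4 Tc)
tae_intra_contiguous_ns = Tc_ns / 2   # about 130 ns (1/2 Tc)
tae_intra_noncontiguous_ns = Tc_ns    # about 260 ns (one Tc)

print(round(delay_accuracy_ns, 1))            # -> 8.1
print(round(tae_mimo_ns), round(tae_intra_contiguous_ns))  # -> 65 130
```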
2416 The above listed time synchronization requirements are difficult to
2417 meet with point-to-point connected networks, and more difficult when
2418 the network includes multiple hops. It is expected that networks
2419 must include buffering at the ends of the connections as imposed by
2420 the jitter requirements, since trying to meet the jitter requirements
2421 in every intermediate node is likely to be too costly. However,
2422 every measure to reduce jitter and delay on the path makes it easier
2423 to meet the end-to-end requirements.
2425 In order to meet the timing requirements both senders and receivers
2426 must remain time synchronized, demanding very accurate clock
2427 distribution, for example support for IEEE 1588 transparent clocks or
2428 boundary clocks in every intermediate node.
2430 In cellular networks from the LTE radio era onward, phase
2431 synchronization is needed in addition to frequency synchronization
2432 ([TS36300], [TS23401]). Time constraints are also important due to
2433 their impact on packet loss. If a packet is delivered too late, then
2434 the packet may be dropped by the host.
2436 6.1.4. Transport Loss Constraints
2438 Fronthaul and Midhaul networks assume almost error-free transport.
2439 Errors can result in a reset of the radio interfaces, which can cause
2440 reduced throughput or broken radio connectivity for mobile customers.
2442 For packetized Fronthaul and Midhaul connections packet loss may be
2443 caused by BER, congestion, or network failure scenarios. Different
2444 fronthaul functional splits are being considered by 3GPP, requiring
2445 strict frame loss ratio (FLR) guarantees. As one example (referring
2446 to the legacy CPRI split, which is option 8 in 3GPP) lower layer
2447 splits may imply an FLR of less than 10E-7 for data traffic and less
2448 than 10E-6 for control and management traffic. Current tools for
2449 eliminating packet loss for Fronthaul and Midhaul networks have
2450 serious challenges, for example retransmitting lost packets and/or
2451 using forward error correction (FEC) to circumvent bit errors is
2452 practically impossible due to the additional delay incurred. Using
2453 redundant streams for better guarantees for delivery is also
2454 practically impossible in many cases due to high bandwidth
2455 requirements of Fronthaul and Midhaul networks. Protection switching
2456 is also a candidate but current technologies for the path switch are
2457 too slow to avoid reset of mobile interfaces.
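To give a rough sense of how strict these FLR targets are, the following sketch computes the tolerable loss at an assumed Fronthaul packet rate; the rate is an illustrative assumption, not a value from the specifications.

```python
# Rough sense of the FLR targets above: at high Fronthaul packet rates,
# a 10^-7 frame loss ratio leaves very little room for loss events.

def max_lost_frames(flr, packet_rate_pps, interval_s):
    """Frames that may be lost in `interval_s` while meeting the FLR."""
    return flr * packet_rate_pps * interval_s

# Assume (for illustration) 1 million Fronthaul frames per second:
rate = 1e6
print(max_lost_frames(1e-7, rate, 1))   # about 0.1 frame/s: ~1 loss per 10 s
print(max_lost_frames(1e-6, rate, 1))   # about 1 frame/s for control/mgmt
```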
2459 Fronthaul links are assumed to be symmetric, and all Fronthaul
2460 streams (i.e. those carrying radio data) have equal priority and
2461 cannot delay or pre-empt each other. This implies that the network
2462 must guarantee that each time-sensitive flow meets its schedule.
2464 6.1.5. Security Considerations
2466 Establishing time-sensitive streams in the network entails reserving
2467 networking resources for long periods of time. It is important that
2468 these reservation requests be authenticated to prevent malicious
2469 reservation attempts from hostile nodes (or accidental
2470 misconfiguration). This is particularly important in the case where
2471 the reservation requests span administrative domains. Furthermore,
2472 the reservation information itself should be digitally signed to
2473 reduce the risk of a legitimate node pushing a stale or hostile
2474 configuration into another networking node.
2476 Note: This is considered important for the security policy of the
2477 network, but does not affect the core DetNet architecture and design.
2479 6.2. Cellular Radio Networks Today
2481 6.2.1. Fronthaul
2483 Today's Fronthaul networks typically consist of:
2485 o Dedicated point-to-point fiber connections
2487 o Proprietary protocols and framings
2489 o Custom equipment and no real networking
2491 Current solutions for Fronthaul are direct optical cables or
2492 Wavelength-Division Multiplexing (WDM) connections.
2494 6.2.2. Midhaul and Backhaul
2496 Today's Midhaul and Backhaul networks typically consist of:
2498 o Mostly normal IP networks, MPLS-TP, etc.
2500 o Clock distribution and sync using 1588 and SyncE
2502 Telecommunication networks in the Mid- and Backhaul are already
2503 heading towards transport networks where precise time synchronization
2504 support is one of the basic building blocks. While the transport
2505 networks themselves have practically transitioned to all-IP packet-
2506 based networks to meet the bandwidth and cost requirements, highly
2507 accurate clock distribution has become a challenge.
2509 In the past, Mid- and Backhaul connections were typically based on
2510 Time Division Multiplexing (TDM) and provided frequency
2511 synchronization capabilities as a part of the transport media.
2512 Alternatively other technologies such as Global Positioning System
2513 (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].
2515 Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985]
2516 for legacy transport support) have become popular tools to build and
2517 manage new all-IP Radio Access Networks (RANs)
2518 [I-D.kh-spring-ip-ran-use-case]. Although various timing and
2519 synchronization optimizations have already been proposed and
2520 implemented including 1588 PTP enhancements
2521 [I-D.ietf-tictoc-1588overmpls] and [I-D.ietf-mpls-residence-time],
2522 these solutions are not necessarily sufficient for the forthcoming RAN
2523 architectures nor do they guarantee the more stringent time-
2524 synchronization requirements such as [CPRI].
2526 There are also existing solutions for TDM over IP such as [RFC5087]
2527 and [RFC4553], as well as TDM over Ethernet transports such as
2528 [RFC5086].
2530 6.3. Cellular Radio Networks Future
2532 Future Cellular Radio Networks will be based on a mix of different
2533 xHaul networks (xHaul = front-, mid- and backhaul), and future
2534 transport networks should be able to support all of them
2535 simultaneously. It is already envisioned today that:
2537 o Not all "cellular radio network" traffic will be IP, for example
2538 some will remain at Layer 2 (e.g. Ethernet based). DetNet
2539 solutions must address all traffic types (Layer 2, Layer 3) with
2540 the same tools and allow their transport simultaneously.
2542 o All forms of xHaul networks will need some form of DetNet
2543 solutions. For example with the advent of 5G some Backhaul
2544 traffic will also have DetNet requirements, for example traffic
2545 belonging to time-critical 5G applications.
2547 o Different splits of the functionality run on the base stations and
2548 the on-site units could co-exist on the same Fronthaul and
2549 Backhaul network.
2551 We would like to see the following in future Cellular Radio networks:
2553 o Unified standards-based transport protocols and standard
2554 networking equipment that can make use of underlying deterministic
2555 link-layer services
2557 o Unified and standards-based network management systems and
2558 protocols in all parts of the network (including Fronthaul)
2560 New radio access network deployment models and architectures may
2561 require time-sensitive networking services with strict requirements
2562 on other parts of the network that previously were not considered to
2563 be packetized at all. Time and synchronization support are already
2564 topical for Backhaul and Midhaul packet networks [MEF] and are
2565 becoming a real issue for Fronthaul networks also. Specifically in
2566 Fronthaul networks the timing and synchronization requirements can be
2567 extreme for packet based technologies, for example, on the order of
2568 sub +-20 ns packet delay variation (PDV) and frequency accuracy of
2569 +0.002 PPM [Fronthaul].
2571 The actual transport protocols and/or solutions to establish required
2572 transport "circuits" (pinned-down paths) for Fronthaul traffic are
2573 still undefined. Those are likely to include (but are not limited
2574 to) solutions directly over Ethernet, over IP, and using MPLS/
2575 PseudoWire transport.
2577 Even the current time-sensitive networking features may not be
2578 sufficient for Fronthaul traffic. Therefore, having specific
2579 profiles that take the requirements of Fronthaul into account is
2580 desirable [IEEE8021CM].
2582 Interesting and important work for time-sensitive networking has been
2583 done for Ethernet [TSNTG], which specifies the use of the IEEE 1588
2584 Precision Time Protocol (PTP) [IEEE1588] in the context of IEEE 802.1D and
2585 IEEE 802.1Q. [IEEE8021AS] specifies a Layer 2 time synchronizing
2586 service, and other specifications such as IEEE 1722 [IEEE1722]
2587 specify Ethernet-based Layer-2 transport for time-sensitive streams.
2589 New promising work seeks to enable the transport of time-sensitive
2590 fronthaul streams in Ethernet bridged networks [IEEE8021CM].
2591 Analogous to IEEE 1722 there is an ongoing standardization effort to
2592 define the Layer-2 transport encapsulation format for transporting
2593 radio over Ethernet (RoE) in the IEEE 1904.3 Task Force [IEEE19043].
2595 All-IP RANs and xHaul networks would benefit from time
2596 synchronization and time-sensitive transport services. Although
2597 Ethernet appears to be the unifying technology for the transport,
2598 there is still a disconnect in providing Layer 3 services. The
2599 protocol stack typically has a number of layers below the Ethernet
2600 Layer 2 that is presented to Layer 3 IP transport. It is not uncommon that
2601 on top of the lowest layer (optical) transport there is the first
2602 layer of Ethernet followed by one or more layers of MPLS, PseudoWires
2603 and/or other tunneling protocols finally carrying the Ethernet layer
2604 visible to the user plane IP traffic.
2606 While there are existing technologies to establish circuits through
2607 the routed and switched networks (especially in MPLS/PWE space),
2608 there is still no way to signal the time synchronization and time-
2609 sensitive stream requirements/reservations for Layer-3 flows in a way
2610 that addresses the entire transport stack, including the Ethernet
2611 layers that need to be configured.
2613 Furthermore, not all "user plane" traffic will be IP. Therefore, the
2614 same solution also must address the use cases where the user plane
2615 traffic is a different layer, for example Ethernet frames.
2617 There is existing work describing the problem statement
2618 [I-D.finn-detnet-problem-statement] and the architecture
2619 [I-D.finn-detnet-architecture] for deterministic networking (DetNet)
2620 that targets solutions for time-sensitive (IP/transport) streams with
2621 deterministic properties over Ethernet-based switched networks.
2623 6.4. Cellular Radio Networks Asks
2625 A standard for a data plane transport specification which is:
2627 o Unified among all xHauls (meaning that different flows with
2628 diverse DetNet requirements can coexist in the same network and
2629 traverse the same nodes without interfering with each other)
2631 o Deployed in a highly deterministic network environment
2633 o Capable of supporting multiple functional splits simultaneously,
2634 including existing Backhaul and CPRI Fronthaul and potentially new
2635 modes as defined for example in 3GPP; these goals can be supported
2636 by the existing DetNet Use Case Common Themes, notably "Mix of
2637 Deterministic and Best-Effort Traffic", "Bounded Latency", "Low
2638 Latency", "Symmetrical Path Delays", and "Deterministic Flows".
2640 o Capable of supporting Network Slicing and Multi-tenancy; these
2641 goals can be supported by the same DetNet themes noted above.
2643 o Capable of transporting both in-band and out-band control traffic
2644 (OAM info, ...).
2646 o Deployable over multiple data link technologies (e.g., IEEE 802.3,
2647 mmWave, etc.).
2649 A standard for data flow information models that are:
2651 o Aware of the time sensitivity and constraints of the target
2652 networking environment
2654 o Aware of underlying deterministic networking services (e.g., on
2655 the Ethernet layer)
2657 7. Industrial M2M
2659 7.1. Use Case Description
2661 Industrial Automation in general refers to automation of
2662 manufacturing, quality control and material processing. In this
2663 "machine to machine" (M2M) use case we consider machine units in a
2664 plant floor which periodically exchange data with upstream or
2665 downstream machine modules and/or a supervisory controller within a
2666 local area network.
2668 The actors of M2M communication are Programmable Logic Controllers
2669 (PLCs). Communication between PLCs and between PLCs and the
2670 supervisory PLC (S-PLC) is achieved via critical control/data streams
2671 (Figure 11).
2673 S (Sensor)
2674 \ +-----+
2675 PLC__ \.--. .--. ---| MES |
2676 \_( `. _( `./ +-----+
2677 A------( Local )-------------( L2 )
2678 ( Net ) ( Net ) +-------+
2679 /`--(___.-' `--(___.-' ----| S-PLC |
2680 S_/ / PLC .--. / +-------+
2681 A_/ \_( `.
2682 (Actuator) ( Local )
2683 ( Net )
2684 /`--(___.-'\
2685 / \ A
2686 S A
2688 Figure 11: Current Generic Industrial M2M Network Architecture
2690 This use case focuses on PLC-related communications; communication to
2691 Manufacturing-Execution-Systems (MESs) is not addressed.
2693 This use case covers only critical control/data streams; non-critical
2694 traffic between industrial automation applications (such as
2695 communication of state, configuration, set-up, and database
2696 communication) is adequately served by currently available
2697 prioritizing techniques. Such traffic can use up to 80% of the total
2698 bandwidth required. There is also a subset of non-time-critical
2699 traffic that must be reliable even though it is not time sensitive.
2701 In this use case the primary need for deterministic networking is to
2702 provide end-to-end delivery of M2M messages within specific timing
2703 constraints, for example in closed loop automation control. Today
2704 this level of determinism is provided by proprietary networking
2705 technologies. In addition, standard networking technologies are used
2706 to connect the local network to remote industrial automation sites,
2707 e.g. over an enterprise or metro network which also carries other
2708 types of traffic. Therefore, flows that should be forwarded with
2709 deterministic guarantees need to be sustained regardless of the
2710 amount of other flows in those networks.
2712 7.2. Industrial M2M Communication Today
2714 Today, proprietary networks fulfill the needed timing and
2715 availability for M2M networks.
2717 The network topologies used today by industrial automation are
2718 similar to those used by telecom networks: Daisy Chain, Ring, Hub and
2719 Spoke, and Comb (a subset of Daisy Chain).
2721 PLC-related control/data streams are transmitted periodically and
2722 carry either a pre-configured payload or a payload configured during
2723 runtime.
2725 Some industrial applications require time synchronization at the end
2726 nodes. For such time-coordinated PLCs, accuracy of 1 microsecond is
2727 required. Even in the case of "non-time-coordinated" PLCs, time sync
2728 may be needed, e.g. for timestamping of sensor data.
2730 Industrial network scenarios require advanced security solutions.
2731 Many of the current industrial production networks are physically
2732 separated. Preventing critical flows from being leaked outside a domain
2733 is handled today by filtering policies that are typically enforced in
2734 firewalls.
2736 7.2.1. Transport Parameters
2738 The Cycle Time defines the frequency of message(s) between industrial
2739 actors. The Cycle Time is application dependent, in the range of 1ms
2740 - 100ms for critical control/data streams.
2742 Because industrial applications assume deterministic transport for
2743 critical control/data streams (rather than specifying separate latency
2744 and delay variation parameters), it is sufficient to fulfill the upper
2745 bound of latency (maximum latency). The underlying networking
2746 infrastructure must ensure a maximum end-to-end delivery time of
2747 messages in the range of 100 microseconds to 50 milliseconds
2748 depending on the control loop application.
2750 The bandwidth requirements of control/data streams are usually
2751 calculated directly from the bytes-per-cycle parameter of the control
2752 loop. For PLC-to-PLC communication one can expect 2 - 32 streams
2753 with packet size in the range of 100 - 700 bytes. For S-PLC to PLCs
2754 the number of streams is higher - up to 256 streams. Usually no more
2755 than 20% of available bandwidth is used for critical control/data
2756 streams. In today's networks 1Gbps links are commonly used.
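As an illustration (not part of the draft's requirements), the bandwidth of a periodic stream follows directly from the bytes-per-cycle and cycle-time parameters; a minimal Python sketch using the worst-case figures above:

```python
def stream_bandwidth_bps(bytes_per_cycle: int, cycle_time_s: float) -> float:
    """Raw bandwidth of one periodic control/data stream, in bits per second."""
    return bytes_per_cycle * 8 / cycle_time_s

# Worst case from the ranges above: 32 PLC-to-PLC streams of 700-byte
# packets on a 1 ms cycle, carried over a 1 Gbps link.
total = 32 * stream_bandwidth_bps(700, 0.001)
link = 1_000_000_000
print(f"{total / 1e6:.1f} Mbps, {100 * total / link:.1f}% of the link")
# -> 179.2 Mbps, 17.9% of the link
```

Even this worst case stays below the roughly 20% share of link bandwidth for critical streams noted above.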
2758 Most PLC control loops are rather tolerant of packet loss, however
2759 critical control/data streams accept no more than 1 packet loss per
2760 consecutive communication cycle (i.e. if a packet gets lost in cycle
2761 "n", then the next cycle ("n+1") must be lossless). After two or
2762 more consecutive packet losses the network may be considered to be
2763 "down" by the Application.
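The loss-tolerance rule above can be sketched as follows; the helper function and its name are illustrative, not taken from any industrial specification:

```python
def network_state(loss_history):
    """Classify the network state from per-cycle loss flags (True = lost).

    Critical control/data streams tolerate a single lost packet, but a
    loss in cycle "n" must be followed by a lossless cycle "n+1"; two or
    more consecutive losses mean the application considers the network
    to be down.
    """
    consecutive = 0
    for lost in loss_history:
        consecutive = consecutive + 1 if lost else 0
        if consecutive >= 2:
            return "down"
    return "up"

print(network_state([False, True, False, True, False]))  # isolated losses: up
print(network_state([False, True, True]))                # two in a row: down
```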
2765 As network downtime may impact the whole production system, the
2766 required network availability is rather high (99.999%).
2768 Based on the above parameters we expect that some form of redundancy
2769 will be required for M2M communications, however any individual
2770 solution depends on several parameters including cycle time, delivery
2771 time, etc.
2773 7.2.2. Stream Creation and Destruction
2775 In an industrial environment, critical control/data streams are
2776 created rather infrequently, on the order of ~10 times per day / week
2777 / month. Most of these critical control/data streams get created at
2778 machine startup, however flexibility is also needed during runtime,
2779 for example when adding or removing a machine. Going forward as
2780 production systems become more flexible, we expect a significant
2781 increase in the rate at which streams are created, changed and
2782 destroyed.
2784 7.3. Industrial M2M Future
2786 We would like to see a converged IP-standards-based network with
2787 deterministic properties that can satisfy the timing, security and
2788 reliability constraints described above. Today's proprietary
2789 networks could then be interfaced to such a network via gateways or,
2790 in the case of new installations, devices could be connected directly
2791 to the converged network.
2793 For this use case we expect time synchronization accuracy on the
2794 order of 1us.
2796 7.4. Industrial M2M Asks
2798 o Converged IP-based network
2800 o Deterministic behavior (bounded latency and jitter)
2802 o High availability (presumably through redundancy) (99.999%)
2804 o Low message delivery time (100us - 50ms)
2806 o Low packet loss (burstless, 0.1-1%)
2808 o Security (e.g. prevent critical flows from being leaked between
2809 physically separated networks)
2811 8. Mining Industry
2813 8.1. Use Case Description
2815 The mining industry is highly dependent on networks to monitor and
2816 control their systems both in open-pit and underground extraction,
2817 transport and refining processes. In order to reduce risks and
2818 increase operational efficiency in mining operations, a number of
2819 processes have migrated operators away from the extraction site to
2820 remote control and monitoring facilities.
2822 In the case of open pit mining, autonomous trucks are used to
2823 transport the raw materials from the open pit to the refining factory
2824 where the final product (e.g. Copper) is obtained. Although the
2825 operation is autonomous, the trucks are remotely monitored from a
2826 central facility.
2828 In pit mines, the monitoring of the tailings or mine dumps is
2829 critical in order to avoid any environmental pollution. In the past,
2830 monitoring has been conducted through manual inspection of pre-
2831 installed dataloggers. Cabling is usually not used in such
2832 scenarios due to the cost and complex deployment requirements.
2833 Currently, wireless technologies are being employed to monitor these
2834 sites continuously. Slopes are also monitored in order to anticipate
2835 possible mine collapse. Due to the unstable terrain, cable
2836 maintenance is costly and complex and hence wireless technologies are
2837 employed.
2839 In the underground monitoring case, autonomous vehicles with
2840 extraction tools travel autonomously through the tunnels, but their
2841 operational tasks (such as excavation, stone breaking and transport)
2842 are controlled remotely from a central facility. This generates
2843 video and feedback upstream traffic plus downstream actuator control
2844 traffic.
2846 8.2. Mining Industry Today
2848 Currently the mining industry uses a packet switched architecture
2849 supported by high-speed Ethernet. However, in order to achieve the
2850 delay and packet loss requirements, the network bandwidth is
2851 overprovisioned, which results in very low efficiency in terms of
2852 resource usage.
2854 QoS is implemented at the routers to separate video, management,
2855 monitoring and process control traffic for each stream.
2857 Since mobility is involved in this process, the connection between
2858 the backbone and the mobile devices (e.g. trucks, trains and
2859 excavators) is solved using a wireless link. These links are based
2860 on 802.11 for open-pit mining and leaky feeder for underground
2861 mining.
2863 Lately in pit mines the use of LPWAN technologies has been extended:
2864 Tailings, slopes and mine dumps are monitored by battery-powered
2865 dataloggers that make use of robust long range radio technologies.
2866 Reliability is usually ensured through retransmissions at L2.
2867 Gateways or concentrators act as bridges forwarding the data to the
2868 backbone Ethernet network. Deterministic requirements are biased
2869 towards reliability rather than latency as events are slowly
2870 triggered or can be anticipated in advance.
2872 At the mineral processing stage, conveyor belts and refining
2873 processes are controlled by a SCADA system, which meets the in-
2874 factory delay-constrained networking requirements.
2876 Voice communications are currently served by a redundant trunking
2877 infrastructure, independent from current data networks.
2879 8.3. Mining Industry Future
2881 Mining operations and management are currently converging towards a
2882 combination of autonomous operation and teleoperation of transport
2883 and extraction machines. This means that video, audio, monitoring
2884 and process control traffic will increase dramatically. Ideally, all
2885 activities on the mine will rely on network infrastructure.
2887 Wireless for open-pit mining is already a reality with LPWAN
2888 technologies, and it is expected to evolve to more advanced LPWAN
2889 technologies, such as those based on LTE to increase last-hop
2890 reliability, or to novel LPWAN flavours with deterministic access.
2892 One area in which DetNet can improve this use case is in the wired
2893 networks that make up the "backbone network" of the system, which
2894 connect together many wireless access points (APs). The mobile
2895 machines (which are connected to the network via wireless) transition
2896 from one AP to the next as they move about. A deterministic,
2897 reliable, low latency backbone can enable these transitions to be
2898 more reliable.
2900 Connections which extend all the way from the base stations to the
2901 machinery via a mix of wired and wireless hops would also be
2902 beneficial, for example to improve remote control responsiveness of
2903 digging machines. However to guarantee deterministic performance of
2904 a DetNet, the end-to-end underlying network must be deterministic.
2905 Thus for this use case if a deterministic wireless transport is
2906 integrated with a wire-based DetNet network, it could create the
2907 desired wired plus wireless end-to-end deterministic network.
2909 8.4. Mining Industry Asks
2911 o Improved bandwidth efficiency
2913 o Very low delay to enable machine teleoperation
2915 o Dedicated bandwidth usage for high resolution video streams
2917 o Predictable delay to enable realtime monitoring
2919 o Potential to construct a unified DetNet network over a combination
2920 of wired and deterministic wireless links
2922 9. Private Blockchain
2924 9.1. Use Case Description
2926 Blockchain was created with bitcoin, as a 'public' blockchain on the
2927 open Internet; however, blockchain has also spread far beyond its
2928 original application into various industries such as smart manufacturing,
2929 logistics, security, legal rights and others. In these industries
2930 blockchain runs in a designated and carefully managed network in which
2931 deterministic networking requirements could be addressed by DetNet.
2932 Such implementations are referred to as 'private' blockchain.
2934 The sole distinction between public and private blockchain is related
2935 to who is allowed to participate in the network, execute the
2936 consensus protocol and maintain the shared ledger.
2938 Today's networks treat the traffic from blockchain on a best-effort
2939 basis, but blockchain operation could be made much more efficient if
2940 deterministic networking service were available to minimize latency
2941 and packet loss in the network.
2943 9.1.1. Blockchain Operation
2945 A 'block' serves as a container for a batch of primary items such as
2946 transactions, property records etc. The blocks are chained in such a
2947 way that the hash of the previous block works as the pointer header
2948 of the new block, where confirmation of each block requires a
2949 consensus mechanism. When an item arrives at a blockchain node, the
2950 latter broadcasts this item to the rest of the nodes, which receive
2951 and verify it and put it in the ongoing block. The block
2952 confirmation process begins when the number of items reaches the
2953 predefined block capacity, and the node broadcasts its proved block
2954 to the rest of the nodes to be verified and chained.
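One common construction of the hash chaining described above can be sketched as follows; the choice of SHA-256 and the JSON serialization are illustrative assumptions, not requirements stated by this use case:

```python
import hashlib
import json

def make_block(items, prev_hash):
    """A block batches items and carries the hash of the previous block
    as its pointer header, chaining the blocks together."""
    header = {"prev_hash": prev_hash, "items": items}
    # Hash a canonical serialization of the header to get this block's id.
    digest = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()
    ).hexdigest()
    return {**header, "hash": digest}

genesis = make_block(["tx0"], prev_hash="0" * 64)
block1 = make_block(["tx1", "tx2"], prev_hash=genesis["hash"])
# Any change to genesis would change its hash and break the chain link.
assert block1["prev_hash"] == genesis["hash"]
```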
2956 9.1.2. Blockchain Network Architecture
2958 Blockchain node communication and coordination is achieved mainly
2959 through frequent point to multi-point communication, however
2960 persistent point-to-point connections are used to transport both the
2961 items and the blocks to the other nodes.
2963 When a node starts up, it first requests the other nodes' addresses
2964 from a specific entity such as DNS, then it creates a persistent
2965 connection with each of the other nodes. If node A confirms an item,
2966 it sends the item to the other nodes via these persistent connections.
2968 As a new block in a node completes and gets proved among the nodes,
2969 the node starts propagating this block towards its neighbor nodes.
2970 Assume node A receives a block: after verifying it, A sends an
2971 'invite' message to its neighbor B. B checks whether it already has
2972 the designated block; if not, it responds with a 'get' message, and
2973 A sends the complete block to B. B then repeats the process as A
2974 did, starting the next round of block propagation.
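The invite/get exchange can be sketched as follows; message passing is reduced to direct dictionary updates, and all names (node labels, block ids) are illustrative:

```python
def propagate(block_id, block_data, node, neighbors, store):
    """Sketch of invite/get block propagation: a node that holds a
    verified block invites each neighbor; a neighbor that lacks the
    block answers 'get', receives the full block, and then repeats
    the process towards its own neighbors."""
    for nbr in neighbors[node]:
        # "invite": advertise block_id; neighbor checks availability.
        if block_id not in store[nbr]:
            # Neighbor responds "get"; sender transfers the complete block.
            store[nbr][block_id] = block_data
            propagate(block_id, block_data, nbr, neighbors, store)

neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
store = {"A": {"blk1": b"..."}, "B": {}, "C": {}}
propagate("blk1", b"...", "A", neighbors, store)
# The block has now reached every node reachable from A.
assert "blk1" in store["B"] and "blk1" in store["C"]
```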
2976 The challenge of blockchain network operation is not overall data
2977 rates, since the volume of both blocks and items ranges from
2978 hundreds of bytes to a couple of megabytes per second; rather, it
2979 lies in transporting the blocks with minimum latency to maximize
2980 efficiency of the blockchain consensus process.
2982 9.1.3. Security Considerations
2984 Security is crucial to blockchain applications, and today blockchain
2985 addresses its security issues mainly at the application level, where
2986 cryptography as well as hash-based consensus play a leading role in
2987 preventing both double-spending and malicious service attacks.
2988 However, there is concern that in the proposed use case of a private
2989 blockchain network which is dependent on deterministic properties,
2990 the network could be vulnerable to delays and other specific attacks
2991 against determinism which could interrupt service.
2993 9.2. Private Blockchain Today
2995 Today private blockchain runs over L2 or L3 VPNs, in general without
2996 guaranteed determinism. The industry players are starting to realize
2997 that improving determinism in their blockchain networks could improve
2998 the performance of their service, but as of today these goals are not
2999 being met.
3001 9.3. Private Blockchain Future
3003 Blockchain system performance can be greatly improved through
3004 deterministic networking service primarily because it would
3005 accelerate the consensus process. It would be valuable to be able to
3006 design a private blockchain network with the following properties:
3008 o Transport of point to multi-point traffic in a coordinated network
3009 architecture rather than at the application layer (which typically
3010 uses point-to-point connections)
3012 o Guaranteed transport latency
3014 o Reduced packet loss (to the point where packet retransmission-
3015 incurred delay would be negligible.)
3017 9.4. Private Blockchain Asks
3019 o Layer 2 and Layer 3 multicast of blockchain traffic
3021 o Item and block delivery with bounded, low latency and negligible
3022 packet loss
3024 o Coexistence in a single network of blockchain and IT traffic.
3026 o Ability to scale the network by distributing the centralized
3027 control of the network across multiple control entities.
3029 10. Network Slicing
3031 10.1. Use Case Description
3033 Network slicing divides one physical network infrastructure into
3034 multiple logical networks. Each slice, corresponding to a logical
3035 network, uses resources and network functions independently from each
3036 other. Network slicing provides flexibility of resource allocation
3037 and service quality customization.
3039 Future services will demand network performance with a wide variety
3040 of characteristics such as high data rate, low latency, low loss
3041 rate, security and many other parameters. Ideally every service
3042 would have its own physical network satisfying its particular
3043 performance requirements, however that would be prohibitively
3044 expensive. Network slicing can provide a customized slice for a
3045 single service, and multiple slices can share the same physical
3046 network. This method can optimize the performance for the service at
3047 lower cost, and the flexibility of setting up and releasing slices
3048 also allows the user to allocate network resources dynamically.
3050 Unlike other DetNet use cases, Network slicing is not a specific
3051 application with specific deterministic requirements; it is proposed
3052 as a new requirement for the future network, which is still in
3053 discussion, and DetNet is a candidate solution for it.
3055 10.2. Network Slicing Use Cases
3057 Network Slicing is a core feature of 5G defined in 3GPP, which is
3058 currently under development. A Network Slice in mobile network is a
3059 complete logical network including Radio Access Network (RAN) and
3060 Core Network (CN). It provides telecommunication services and
3061 network capabilities, which may vary (or not) from slice to slice.
3063 A 5G bearer network is a typical use case of network slicing,
3064 including 3 service scenarios: enhanced Mobile Broadband (eMBB),
3065 Ultra-Reliable and Low Latency Communications (URLLC), and massive
3066 Machine Type Communications (mMTC). Each of these is described
3067 below.
3069 10.2.1. Enhanced Mobile Broadband (eMBB)
3071 eMBB focuses on services characterized by high data rates, such as
3072 high definition (HD) videos, virtual reality (VR), augmented reality
3073 (AR), and fixed mobile convergence (FMC).
3075 10.2.2. Ultra-Reliable and Low Latency Communications (URLLC)
3077 URLLC focuses on latency-sensitive services, such as self-driving
3078 vehicles, remote surgery, or drone control.
3080 10.2.3. massive Machine Type Communications (mMTC)
3082 mMTC focuses on services that have high requirements for connection
3083 density, such as those typical for smart city and smart agriculture
3084 use cases.
3086 10.3. Using DetNet in Network Slicing
3088 One of the requirements discussed for network slicing is the "hard"
3089 separation of various users' deterministic performance. That is, it
3090 should be impossible for activity, lack of activity, or changes in
3091 activity of one or more users to have any appreciable effect on the
3092 deterministic performance parameters of any other users. Typical
3093 techniques used today, which share a physical network among users, do
3094 not offer this kind of insulation. DetNet can supply point-to-point
3095 or point-to-multipoint paths that offer bandwidth and latency
3096 guarantees to a user that cannot be affected by other users' data
3097 traffic.
3099 Thus DetNet is a powerful tool when latency and reliability are
3100 required in Network Slicing. However, DetNet cannot cover every
3101 Network Slicing use case, and there are some other problems to be
3102 solved. First, DetNet is a point-to-point or point-to-multipoint
3103 technology, while Network Slicing needs multipoint-to-multipoint
3104 guarantees. Second, the number of flows that can be carried by DetNet
3105 is limited by DetNet scalability. Flow aggregation and queuing
3106 management modification may help to fix the problem. More work and
3107 discussion are needed on these topics.
3109 10.4. Network Slicing Today and Future
3111 Network slicing can satisfy the requirements of many future
3112 deployment scenarios, but it is still a collection of ideas and
3113 analysis, without a specific technical solution. Many technologies,
3114 such as Flex-E, Segment Routing, and DetNet, have
3115 potential to be used in Network Slicing. For more details please see
3116 IETF99 Network Slicing BOF session agenda and materials.
3118 10.5. Network Slicing Asks
3120 o Isolation from other flows through Queuing Management
3122 o Service Quality Customization and Guarantee
3124 o Security
3126 11. Use Case Common Themes
3128 This section summarizes the expected properties of a DetNet network,
3129 based on the use cases as described in this draft.
3131 11.1. Unified, standards-based network
3133 11.1.1. Extensions to Ethernet
3135 A DetNet network is not "a new kind of network" - it is based on
3136 extensions to existing Ethernet standards, including elements of IEEE
3137 802.1 AVB/TSN and related standards. Presumably it will be possible
3138 to run DetNet over other underlying transports besides Ethernet, but
3139 Ethernet is explicitly supported.
3141 11.1.2. Centrally Administered
3143 In general a DetNet network is not expected to be "plug and play" -
3144 it is expected that there is some centralized network configuration
3145 and control system. Such a system may be in a single central
3146 location, or it may be distributed across multiple control entities
3147 that function together as a unified control system for the network.
3148 However, the ability to "hot swap" components (e.g. due to
3149 malfunction) is similar enough to "plug and play" that this kind of
3150 behavior may be expected in DetNet networks, depending on the
3151 implementation.
3153 11.1.3. Standardized Data Flow Information Models
3155 Data Flow Information Models to be used with DetNet networks are to
3156 be specified by DetNet.
3158 11.1.4. L2 and L3 Integration
3160 A DetNet network is intended to integrate between Layer 2 (bridged)
3161 network(s) (e.g. AVB/TSN LAN) and Layer 3 (routed) network(s) (e.g.
3162 using IP-based protocols). One example of this is "making AVB/TSN-
3163 type deterministic performance available from Layer 3 applications,
3164 e.g. using RTP". Another example is "connecting two AVB/TSN LANs
3165 ("islands") together through a standard router".
3167 11.1.5. Guaranteed End-to-End Delivery
3169 Packets sent over DetNet are guaranteed not to be dropped by the
3170 network due to congestion. (Packets may however be dropped for
3171 intended reasons, e.g. per security measures).
3173 11.1.6. Replacement for Multiple Proprietary Deterministic Networks
3175 There are many proprietary non-interoperable deterministic Ethernet-
3176 based networks currently available; DetNet is intended to provide an
3177 open-standards-based alternative to such networks.
3179 11.1.7. Mix of Deterministic and Best-Effort Traffic
3181 DetNet is intended to support coexistence of time-sensitive
3182 operational technology (OT) traffic and information technology (IT)
3183 traffic on the same ("unified") network.
3185 11.1.8. Unused Reserved BW to be Available to Best Effort Traffic
3187 If bandwidth reservations are made for a stream but the associated
3188 bandwidth is not used at any point in time, that bandwidth is made
3189 available on the network for best-effort traffic. If the owner of
3190 the reserved stream then starts transmitting again, the bandwidth is
3191 no longer available for best-effort traffic, on a moment-to-moment
3192 basis. Note that such "temporarily available" bandwidth is not
3193 available for time-sensitive traffic, which must have its own
3194 reservation.
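The moment-to-moment sharing rule above can be sketched as follows; this is an illustrative allocation model, not a specified DetNet algorithm:

```python
def allocate(link_bps, reserved_demand_bps, reserved_cap_bps,
             best_effort_demand_bps):
    """Moment-to-moment link sharing: a reserved stream gets up to its
    reservation; whatever it does not currently use is lent to
    best-effort traffic, never to other time-sensitive streams (which
    must have their own reservations)."""
    reserved_used = min(reserved_demand_bps, reserved_cap_bps)
    leftover = link_bps - reserved_used
    best_effort = min(best_effort_demand_bps, leftover)
    return reserved_used, best_effort

# Reserved stream idle: best-effort traffic may use the whole link.
assert allocate(1000, 0, 300, 1000) == (0, 1000)
# Reserved stream transmitting again: best effort yields immediately.
assert allocate(1000, 300, 300, 1000) == (300, 700)
```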
3196 11.1.9. Lower Cost, Multi-Vendor Solutions
3198 The DetNet network specifications are intended to enable an ecosystem
3199 in which multiple vendors can create interoperable products, thus
3200 promoting device diversity and potentially higher numbers of each
3201 device manufactured, promoting cost reduction and cost competition
3202 among vendors. The intent is that DetNet networks should be able to
3203 be created at lower cost and with greater diversity of available
3204 devices than existing proprietary networks.
3206 11.2. Scalable Size
3208 DetNet networks range in size from very small, e.g. inside a single
3209 industrial machine, to very large, for example a Utility Grid network
3210 spanning a whole country, and involving many "hops" over various
3211 kinds of links, for example radio repeaters, microwave links, fiber
3212 optic links, etc. However, recall that the scope of DetNet is
3213 confined to networks that are centrally administered, and explicitly
3214 excludes unbounded decentralized networks such as the Internet.
3216 11.3. Scalable Timing Parameters and Accuracy
3218 11.3.1. Bounded Latency
3220 The DetNet Data Flow Information Model is expected to provide means
3221 to configure the network that include parameters for querying network
3222 path latency, requesting bounded latency for a given stream,
3223 requesting worst case maximum and/or minimum latency for a given path
3224 or stream, and so on. It is an expected case that the network may
3225 not be able to provide a given requested service level, and if so the
3226 network control system should reply that the requested services is
3227 not available (as opposed to accepting the parameter but then not
3228 delivering the desired behavior).
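The accept-or-refuse behavior described above can be sketched as follows; the additive per-hop worst-case latency model and the function name are illustrative assumptions, not DetNet-specified mechanisms:

```python
def request_bounded_latency(hop_latencies_us, requested_bound_us):
    """Admission check: accept a stream only if the worst-case path
    latency fits the requested bound; otherwise refuse explicitly
    rather than accepting the parameter and under-delivering."""
    # Illustrative model: per-hop worst-case latency bounds simply add.
    worst_case = sum(hop_latencies_us)
    if worst_case <= requested_bound_us:
        return {"accepted": True, "worst_case_us": worst_case}
    return {"accepted": False, "worst_case_us": worst_case,
            "reason": "requested latency bound not achievable"}

print(request_bounded_latency([20, 35, 15], 100))  # accepted
print(request_bounded_latency([20, 35, 15], 50))   # explicitly refused
```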
3230 11.3.2. Low Latency
3232 Applications may require "extremely low latency" however depending on
3233 the application these may mean very different latency values; for
3234 example "low latency" across a Utility grid network is on a different
3235 time scale than "low latency" in a motor control loop in a small
3236 machine. The intent is that the mechanisms for specifying desired
3237 latency include wide ranges, and that architecturally there is
3238 nothing to prevent arbitrarily low latencies from being implemented
3239 in a given network.
3241 11.3.3. Symmetrical Path Delays
3243 Some applications would like to specify that the transit delay time
3244 values be equal for both the transmit and return paths.
3246 11.4. High Reliability and Availability
3248 Reliability is of critical importance to many DetNet applications, in
3249 which consequences of failure can be extraordinarily high in terms of
3250 cost and even human life. DetNet based systems are expected to be
3251 implemented with essentially arbitrarily high availability (for
3252 example 99.9999% uptime, or even 12 nines). The intent is that the
3253 DetNet designs should not make any assumptions about the level of
3254 reliability and availability that may be required of a given system,
3255 and should define parameters for communicating these kinds of metrics
3256 within the network.
3258 A strategy used by DetNet for providing such extraordinarily high
3259 levels of reliability is to provide redundant paths that can be
3260 seamlessly switched between, while maintaining the required
3261 performance of that system.
3263 11.5. Security
3265 Security is of critical importance to many DetNet applications. A
3266 DetNet network must be able to be made secure against device
3267 failures, attackers, misbehaving devices, and so on. In a DetNet
3268 network the data traffic is expected to be time-sensitive, thus in
3269 addition to arriving with the data content as intended, the data must
3270 also arrive at the expected time. This may present "new" security
3271 challenges to implementers, and must be addressed accordingly. There
3272 are other security implications, including (but not limited to) the
3273 change in attack surface presented by packet replication and
3274 elimination.
3276 11.6. Deterministic Flows
3278 Reserved bandwidth data flows must be isolated from each other and
3279 from best-effort traffic, so that even if the network is saturated
3280 with best-effort (and/or reserved bandwidth) traffic, the configured
3281 flows are not adversely affected.
3283 12. Use Cases Explicitly Out of Scope for DetNet
3285 This section contains use case text that has been determined to be
3286 outside of the scope of the present DetNet work.
3288 12.1. DetNet Scope Limitations
3290 The scope of DetNet is deliberately limited to specific use cases
3291 that are consistent with the WG charter, subject to the
3292 interpretation of the WG. At the time the DetNet Use Cases were
3293 solicited and provided by the authors the scope of DetNet was not
3294 clearly defined, and as that clarity has emerged, certain of the use
3295 cases have been determined to be outside the scope of the present
3296 DetNet work. Such text has been moved into this section to clarify
3297 that these use cases will not be supported by the DetNet work.
3299 The text in this section was moved here based on the following
3300 "exclusion" principles. Alternatively, rather than moving all such
3301 text to this section, some draft text has been modified in situ to
3302 reflect these same principles.
3304 The following principles have been established to clarify the scope
3305 of the present DetNet work.
3307 o The scope of the networks addressed by DetNet is limited to those
3308 that can be centrally controlled, i.e. an "enterprise" aka
3309 "corporate" network. This explicitly excludes "the open
3310 Internet".
3312 o Maintaining synchronized time across a DetNet network is crucial
3313 to its operation, however DetNet assumes that time is to be
3314 maintained using other means, for example (but not limited to)
3315 Precision Time Protocol ([IEEE1588]). A use case may state the
3316 accuracy and reliability that it expects from the DetNet network
3317 as part of a whole system, however it is understood that such
3318 timing properties are not guaranteed by DetNet itself. It is
3319 currently an open question as to whether DetNet protocols will
3320 include a way for an application to communicate such timing
3321 expectations to the network, and if so whether they would be
3322 expected to materially affect the performance they would receive
3323 from the network as a result.
3325 12.2. Internet-based Applications
3327 12.2.1. Use Case Description
3329 There are many applications that communicate across the open Internet
3330 that could benefit from guaranteed delivery and bounded latency. The
3331 following are some representative examples.
3333 12.2.1.1. Media Content Delivery
3335 Media content delivery continues to be an important use of the
3336 Internet, yet users often experience poor quality audio and video due
3337 to the delay and jitter inherent in today's Internet.
3339 12.2.1.2. Online Gaming
3341 Online gaming is a significant part of the gaming market, however
3342 latency can degrade the end user experience. For example "First
3343 Person Shooter" (FPS) games are highly delay-sensitive.
3345 12.2.1.3. Virtual Reality
3347 Virtual reality (VR) has many commercial applications including real
3348 estate presentations, remote medical procedures, and so on. Low
3349 latency is critical to interacting with the virtual world because
3350 perceptual delays can cause motion sickness.
3352 12.2.2. Internet-Based Applications Today
3354 Internet service today is by definition "best effort", with no
3355 guarantees on delivery or bandwidth.
3357 12.2.3. Internet-Based Applications Future
3359 We imagine an Internet over which we will be able to stream
3360 video without glitches and play games without lag.
3362 For online gaming, the maximum acceptable round-trip delay is about
3363 100ms, and stricter for FPS gaming, where it can be 10-50ms.
3364 Transport delay is the dominant component, with a 5-20ms budget.
3366 For VR, a maximum delay of 1-10ms is needed, and the total
3367 network budget is 1-5ms for remote VR.
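The delay figures above can be expressed as a simple admission check. The sketch below is illustrative only: the application class names, the split between round-trip and transport budgets, and the thresholds are taken from the numbers in this section, not from any DetNet specification.

```python
# Illustrative end-to-end delay budgets (milliseconds), drawn from the
# figures above.  The class names and the budget split are assumptions
# made for this sketch, not normative DetNet requirements.
BUDGETS_MS = {
    "online_gaming": {"round_trip_max": 100, "transport_max": 20},
    "fps_gaming":    {"round_trip_max": 50,  "transport_max": 20},
    "remote_vr":     {"round_trip_max": 10,  "transport_max": 5},
}

def meets_budget(app, measured_rtt_ms, measured_transport_ms):
    """True if the measured delays fit the application's budget."""
    b = BUDGETS_MS[app]
    return (measured_rtt_ms <= b["round_trip_max"] and
            measured_transport_ms <= b["transport_max"])

print(meets_budget("fps_gaming", 35, 12))  # True: within 50ms/20ms
print(meets_budget("remote_vr", 12, 4))    # False: exceeds 10ms RTT bound
```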
3369 Flow identification can be used for gaming and VR: the network can
3370 recognize a critical flow and provide appropriate latency bounds.
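One way such recognition could work is sketched below, under the assumption that critical flows carry a distinguishing DSCP marking; the mapping from codepoints to latency bounds is invented for illustration.

```python
# Sketch: recognize a time-critical flow from a DSCP codepoint in the
# IP header instead of matching the full 5-tuple.  The mapping from
# codepoints to latency bounds below is a made-up example.
LATENCY_BOUND_MS_BY_DSCP = {
    46: 5,    # treat EF-marked packets as VR-class traffic
    34: 20,   # treat AF41-marked packets as gaming-class traffic
}

def latency_bound_for(dscp):
    """Latency bound (ms) for a recognized critical flow, else None."""
    return LATENCY_BOUND_MS_BY_DSCP.get(dscp)

print(latency_bound_for(46))  # 5
print(latency_bound_for(0))   # None (best-effort, no bound)
```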
3372 12.2.4. Internet-Based Applications Asks
3374 o Unified control and management protocols to handle time-critical
3375 data flows
3377 o Application-aware flow filtering mechanism to recognize the timing
3378 critical flow without doing 5-tuple matching
3380 o Unified control plane to provide low latency service on Layer-3
3381 without changing the data plane
3383 o OAM systems and protocols that can support provisioning of
3384 end-to-end delay-sensitive services
3386 12.3. Pro Audio and Video - Digital Rights Management (DRM)
3388 This section was moved here because it is considered a Link layer
3389 topic, not a direct responsibility of DetNet.
3391 Digital Rights Management (DRM) is very important to the audio and
3392 video industries. Any time protected content is introduced into a
3393 network there are DRM concerns that must be maintained (see
3394 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of
3395 network technology, however there are cases when a secure link
3396 supporting authentication and encryption is required by content
3397 owners to carry their audio or video content when it is outside their
3398 own secure environment (for example see [DCI]).
3400 As an example, two such techniques are Digital Transmission
3401 Content Protection (DTCP) and High-Bandwidth Digital Content
3402 Protection (HDCP). HDCP content is not approved for
3403 retransmission within any other type of DRM, while DTCP may be
3404 retransmitted under HDCP. Therefore, if the source of a stream
3405 is outside of the network and uses HDCP protection, it may only
3406 be placed on the network with that same HDCP protection.
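The retransmission rule can be captured as a small policy table. The sketch below assumes, beyond what the text above states, that DTCP content may also be re-carried under DTCP itself.

```python
# Policy table for the rule above: HDCP content may only be re-carried
# under HDCP, while DTCP content may be re-carried under HDCP (and,
# assumed here, under DTCP itself).
ALLOWED_NETWORK_DRM = {
    "HDCP": {"HDCP"},
    "DTCP": {"DTCP", "HDCP"},
}

def may_place_on_network(source_drm, network_drm):
    """True if content protected by source_drm may be carried on the
    network under network_drm."""
    return network_drm in ALLOWED_NETWORK_DRM.get(source_drm, set())

print(may_place_on_network("HDCP", "HDCP"))  # True
print(may_place_on_network("HDCP", "DTCP"))  # False
```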
3408 12.4. Pro Audio and Video - Link Aggregation
3410 Note: The term "Link Aggregation" is used here as defined by the
3411 following paragraph, i.e. not following the more common networking
3412 industry definition. Current WG consensus is that this item won't
3413 be directly supported by the DetNet architecture, for example
3414 because it implies a guarantee of in-order packet delivery, which
3415 conflicts with the core goal of achieving the lowest possible latency.
3417 For transmitting streams that require more bandwidth than a single
3418 link in the target network can support, link aggregation is a
3419 technique for combining (aggregating) the bandwidth available on
3420 multiple physical links to create a single logical link of the
3421 required bandwidth. However, if aggregation is to be used, the
3422 network controller (or equivalent) must be able to determine the
3423 maximum latency of any path through the aggregate link.
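A minimal sketch of that controller-side computation, assuming the controller knows each member link's worst-case latency (link names and figures are invented for illustration):

```python
# Sketch: for an aggregate of member links, bandwidth adds up, but the
# worst-case latency is that of the slowest member, since a given
# packet may traverse any member link.  Figures are illustrative.
MEMBER_LINKS = {
    "eth0": {"bandwidth_mbps": 1000, "max_latency_ms": 0.8},
    "eth1": {"bandwidth_mbps": 1000, "max_latency_ms": 1.2},
    "eth2": {"bandwidth_mbps": 1000, "max_latency_ms": 0.9},
}

def aggregate_bandwidth_mbps(links):
    """Logical link bandwidth is the sum of member link bandwidths."""
    return sum(l["bandwidth_mbps"] for l in links.values())

def aggregate_max_latency_ms(links):
    """Maximum latency of any path through the aggregate link."""
    return max(l["max_latency_ms"] for l in links.values())

print(aggregate_bandwidth_mbps(MEMBER_LINKS))  # 3000
print(aggregate_max_latency_ms(MEMBER_LINKS))  # 1.2
```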
3425 13. Acknowledgments
3427 13.1. Pro Audio
3429 This section was derived from draft-gunther-detnet-proaudio-req-01.
3431 The editors would like to acknowledge the help of the following
3432 individuals and the companies they represent:
3434 Jeff Koftinoff, Meyer Sound
3436 Jouni Korhonen, Associate Technical Director, Broadcom
3438 Pascal Thubert, CTAO, Cisco
3440 Kieran Tyrrell, Sienda New Media Technologies GmbH
3442 13.2. Utility Telecom
3444 This section was derived from draft-wetterwald-detnet-utilities-reqs-
3445 02.
3447 Faramarz Maghsoodlou, Ph.D., IoT Connected Industries and Energy
3448 Practice, Cisco
3450 Pascal Thubert, CTAO, Cisco
3452 13.3. Building Automation Systems
3454 This section was derived from draft-bas-usecase-detnet-00.
3456 13.4. Wireless for Industrial
3458 This section was derived from draft-thubert-6tisch-4detnet-01.
3460 This specification derives from the 6TiSCH architecture, which is the
3461 result of multiple interactions, in particular during the 6TiSCH
3462 (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at
3463 the IETF.
3465 The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier
3466 Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael
3467 Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon,
3468 Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey,
3469 Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria
3470 Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation
3471 and various contributions.
3473 13.5. Cellular Radio
3475 This section was derived from draft-korhonen-detnet-telreq-00.
3477 13.6. Industrial M2M
3479 The authors would like to thank Feng Chen and Marcel Kiessling for
3480 their comments and suggestions.
3482 13.7. Internet Applications and CoMP
3484 This section was derived from draft-zha-detnet-use-case-00.
3486 This document has benefited from reviews, suggestions, comments and
3487 proposed text provided by the following members, listed in
3488 alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oliver
3489 Huang.
3491 13.8. Electrical Utilities
3493 The wind power generation use case has been extracted from the study
3494 of Wind Farms conducted within the 5GPPP Virtuwind Project. The
3495 project is funded by the European Union's Horizon 2020 research and
3496 innovation programme under grant agreement No 671648 (VirtuWind).
3498 13.9. Network Slicing
3500 This section was written by Xuesong Geng, who would like to
3501 acknowledge Norm Finn and Mach Chen for their useful comments.
3503 13.10. Mining
3505 This section was written by Diego Dujovne in conjunction with Xavier
3506 Vilajosana.
3508 13.11. Private Blockchain
3510 This section was written by Daniel Huang.
3512 14. Informative References
3514 [ACE] IETF, "Authentication and Authorization for Constrained
3515 Environments",
3516 .
3518 [Ahm14] Ahmed, M. and R. Kim, "Communication network architectures
3519 for smart-wind power farms.", Energies, p. 3900-3921. ,
3520 June 2014.
3522 [bacnetip]
3523 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP",
3524 January 1999.
3526 [CCAMP] IETF, "Common Control and Measurement Plane",
3527 .
3529 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND
3530 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_
3531 and_Enhancement_v2.0, March 2015,
3532 .
3535 [CONTENT_PROTECTION]
3536 Olsen, D., "1722a Content Protection", 2012,
3537 .
3540 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI);
3541 Interface Specification", CPRI Specification V6.1, July
3542 2014, .
3545 [CPRI-transp]
3546 CPRI TWG, "CPRI requirements for Ethernet Fronthaul",
3547 November 2015,
3548 .
3551 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification,
3552 Version 1.2", 2012, .
3554 [DICE] IETF, "DTLS In Constrained Environments",
3555 .
3557 [EA12] Evans, P. and M. Annunziata, "Industrial Internet: Pushing
3558 the Boundaries of Minds and Machines", November 2012.
3560 [ESPN_DC2]
3561 Daley, D., "ESPN's DC2 Scales AVB Large", 2014,
3562 .
3565 [flnet] Japan Electrical Manufacturers Association, "JEMA 1479 -
3566 English Edition", September 2012.
3568 [Fronthaul]
3569 Chen, D. and T. Mustala, "Ethernet Fronthaul
3570 Considerations", IEEE 1904.3, February 2015,
3571 .
3574 [HART] www.hartcomm.org, "Highway Addressable Remote Transducer,
3575 a group of specifications for industrial process and
3576 control devices administered by the HART Foundation".
3578 [I-D.finn-detnet-architecture]
3579 Finn, N. and P. Thubert, "Deterministic Networking
3580 Architecture", draft-finn-detnet-architecture-08 (work in
3581 progress), August 2016.
3583 [I-D.finn-detnet-problem-statement]
3584 Finn, N. and P. Thubert, "Deterministic Networking Problem
3585 Statement", draft-finn-detnet-problem-statement-05 (work
3586 in progress), March 2016.
3588 [I-D.ietf-6tisch-6top-interface]
3589 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
3590 (6top) Interface", draft-ietf-6tisch-6top-interface-04
3591 (work in progress), July 2015.
3593 [I-D.ietf-6tisch-architecture]
3594 Thubert, P., "An Architecture for IPv6 over the TSCH mode
3595 of IEEE 802.15.4", draft-ietf-6tisch-architecture-12 (work
3596 in progress), August 2017.
3598 [I-D.ietf-6tisch-coap]
3599 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and
3600 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work
3601 in progress), March 2015.
3603 [I-D.ietf-6tisch-terminology]
3604 Palattella, M., Thubert, P., Watteyne, T., and Q. Wang,
3605 "Terminology in IPv6 over the TSCH mode of IEEE
3606 802.15.4e", draft-ietf-6tisch-terminology-09 (work in
3607 progress), June 2017.
3609 [I-D.ietf-ipv6-multilink-subnets]
3610 Thaler, D. and C. Huitema, "Multi-link Subnet Support in
3611 IPv6", draft-ietf-ipv6-multilink-subnets-00 (work in
3612 progress), July 2002.
3614 [I-D.ietf-mpls-residence-time]
3615 Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S.,
3616 and S. Vainshtein, "Residence Time Measurement in MPLS
3617 network", draft-ietf-mpls-residence-time-15 (work in
3618 progress), March 2017.
3620 [I-D.ietf-roll-rpl-industrial-applicability]
3621 Phinney, T., Thubert, P., and R. Assimiti, "RPL
3622 applicability in industrial networks", draft-ietf-roll-
3623 rpl-industrial-applicability-02 (work in progress),
3624 October 2013.
3626 [I-D.ietf-tictoc-1588overmpls]
3627 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L.
3628 Montini, "Transporting Timing messages over MPLS
3629 Networks", draft-ietf-tictoc-1588overmpls-07 (work in
3630 progress), October 2015.
3632 [I-D.kh-spring-ip-ran-use-case]
3633 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing
3634 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02
3635 (work in progress), November 2014.
3637 [I-D.svshah-tsvwg-deterministic-forwarding]
3638 Shah, S. and P. Thubert, "Deterministic Forwarding PHB",
3639 draft-svshah-tsvwg-deterministic-forwarding-04 (work in
3640 progress), August 2015.
3642 [I-D.thubert-6lowpan-backbone-router]
3643 Thubert, P., "6LoWPAN Backbone Router", draft-thubert-
3644 6lowpan-backbone-router-03 (work in progress), February
3645 2013.
3647 [I-D.wang-6tisch-6top-sublayer]
3648 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
3649 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in
3650 progress), November 2015.
3652 [IEC-60870-5-104]
3653 International Electrotechnical Commission, "International
3654 Standard IEC 60870-5-104: Network access for IEC
3655 60870-5-101 using standard transport profiles", June 2006.
3657 [IEC61400]
3658 "International standard 61400-25: Communications for
3659 monitoring and control of wind power plants", June 2013.
3661 [IEC61850-90-12]
3662 TC57 WG10, IEC., "IEC 61850-90-12 TR: Communication
3663 networks and systems for power utility automation - Part
3664 90-12: Wide area network engineering guidelines", 2015.
3666 [IEC62439-3:2012]
3667 TC65, IEC., "IEC 62439-3: Industrial communication
3668 networks - High availability automation networks - Part 3:
3669 Parallel Redundancy Protocol (PRP) and High-availability
3670 Seamless Redundancy (HSR)", 2012.
3672 [IEEE1588]
3673 IEEE, "IEEE Standard for a Precision Clock Synchronization
3674 Protocol for Networked Measurement and Control Systems",
3675 IEEE Std 1588-2008, 2008,
3676 .
3679 [IEEE1646]
3680 "Communication Delivery Time Performance Requirements for
3681 Electric Power Substation Automation", IEEE Standard
3682 1646-2004 , Apr 2004.
3684 [IEEE1722]
3685 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport
3686 Protocol for Time Sensitive Applications in a Bridged
3687 Local Area Network", IEEE Std 1722-2011, 2011,
3688 .
3691 [IEEE19043]
3692 IEEE Standards Association, "IEEE 1904.3 TF", IEEE 1904.3,
3693 2015, .
3695 [IEEE802.1TSNTG]
3696 IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3697 Networks Task Group", March 2013,
3698 .
3700 [IEEE802154]
3701 IEEE standard for Information Technology, "IEEE std.
3702 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC)
3703 and Physical Layer (PHY) Specifications for Low-Rate
3704 Wireless Personal Area Networks".
3706 [IEEE802154e]
3707 IEEE standard for Information Technology, "IEEE standard
3708 for Information Technology, IEEE std. 802.15.4, Part.
3709 15.4: Wireless Medium Access Control (MAC) and Physical
3710 Layer (PHY) Specifications for Low-Rate Wireless Personal
3711 Area Networks, June 2011 as amended by IEEE std.
3712 802.15.4e, Part. 15.4: Low-Rate Wireless Personal Area
3713 Networks (LR-WPANs) Amendment 1: MAC sublayer", April
3714 2012.
3716 [IEEE8021AS]
3717 IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)",
3718 IEEE 802.1AS-2011, 2011,
3719 .
3722 [IEEE8021CM]
3723 Farkas, J., "Time-Sensitive Networking for Fronthaul",
3724 Unapproved PAR, PAR for a New IEEE Standard;
3725 IEEE P802.1CM, April 2015,
3726 .
3729 [IEEE8021TSN]
3730 IEEE 802.1, "The charter of the TG is to provide the
3731 specifications that will allow time-synchronized low
3732 latency streaming services through 802 networks.", 2016,
3733 .
3735 [IETFDetNet]
3736 IETF, "Charter for IETF DetNet Working Group", 2015,
3737 .
3739 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation",
3740 .
3742 [ISA100.11a]
3743 ISA/ANSI, "Wireless Systems for Industrial Automation:
3744 Process Control and Related Applications - ISA100.11a-2011
3745 - IEC 62734", 2011, .
3748 [ISO7240-16]
3749 ISO, "ISO 7240-16:2007 Fire detection and alarm systems --
3750 Part 16: Sound system control and indicating equipment",
3751 2007, .
3754 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006.
3756 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0",
3757 1994.
3759 [LTE-Latency]
3760 Johnston, S., "LTE Latency: How does it compare to other
3761 technologies", March 2014,
3762 .
3765 [MEF] MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells",
3766 MEF 22.1.1, July 2014,
3767 .
3770 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and
3771 wireless system", ICT-317669-METIS/D1.1 ICT-
3772 317669-METIS/D1.1, April 2013, .
3775 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL
3776 SPECIFICATION V1.1b", December 2006.
3778 [MODBUS] Modbus Organization, Inc., "MODBUS Application Protocol
3779 Specification", Apr 2012.
3781 [net5G] Ericsson, "5G Radio Access, Challenges for 2020 and
3782 Beyond", Ericsson white paper wp-5g, June 2013,
3783 .
3785 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0,
3786 February 2015, .
3789 [NGMN-fronth]
3790 NGMN Alliance, "Fronthaul Requirements for C-RAN", March
3791 2015, .
3794 [OPCXML] OPC Foundation, "OPC XML-Data Access Specification", Dec
3795 2004.
3797 [PCE] IETF, "Path Computation Element",
3798 .
3800 [profibus]
3801 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001.
3803 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
3804 Requirement Levels", BCP 14, RFC 2119,
3805 DOI 10.17487/RFC2119, March 1997,
3806 .
3808 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6
3809 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460,
3810 December 1998, .
3812 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
3813 "Definition of the Differentiated Services Field (DS
3814 Field) in the IPv4 and IPv6 Headers", RFC 2474,
3815 DOI 10.17487/RFC2474, December 1998,
3816 .
3818 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
3819 Label Switching Architecture", RFC 3031,
3820 DOI 10.17487/RFC3031, January 2001,
3821 .
3823 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
3824 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
3825 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
3826 .
3828 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
3829 Metric for IP Performance Metrics (IPPM)", RFC 3393,
3830 DOI 10.17487/RFC3393, November 2002,
3831 .
3833 [RFC3411] Harrington, D., Presuhn, R., and B. Wijnen, "An
3834 Architecture for Describing Simple Network Management
3835 Protocol (SNMP) Management Frameworks", STD 62, RFC 3411,
3836 DOI 10.17487/RFC3411, December 2002,
3837 .
3839 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between
3840 Information Models and Data Models", RFC 3444,
3841 DOI 10.17487/RFC3444, January 2003,
3842 .
3844 [RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)",
3845 RFC 3972, DOI 10.17487/RFC3972, March 2005,
3846 .
3848 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation
3849 Edge-to-Edge (PWE3) Architecture", RFC 3985,
3850 DOI 10.17487/RFC3985, March 2005,
3851 .
3853 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing
3854 Architecture", RFC 4291, DOI 10.17487/RFC4291, February
3855 2006, .
3857 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure-
3858 Agnostic Time Division Multiplexing (TDM) over Packet
3859 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006,
3860 .
3862 [RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903,
3863 DOI 10.17487/RFC4903, June 2007,
3864 .
3866 [RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6
3867 over Low-Power Wireless Personal Area Networks (6LoWPANs):
3868 Overview, Assumptions, Problem Statement, and Goals",
3869 RFC 4919, DOI 10.17487/RFC4919, August 2007,
3870 .
3872 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and
3873 P. Pate, "Structure-Aware Time Division Multiplexed (TDM)
3874 Circuit Emulation Service over Packet Switched Network
3875 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007,
3876 .
3878 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi,
3879 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087,
3880 DOI 10.17487/RFC5087, December 2007,
3881 .
3883 [RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6
3884 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282,
3885 DOI 10.17487/RFC6282, September 2011,
3886 .
3888 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J.,
3889 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur,
3890 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for
3891 Low-Power and Lossy Networks", RFC 6550,
3892 DOI 10.17487/RFC6550, March 2012,
3893 .
3895 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N.,
3896 and D. Barthel, "Routing Metrics Used for Path Calculation
3897 in Low-Power and Lossy Networks", RFC 6551,
3898 DOI 10.17487/RFC6551, March 2012,
3899 .
3901 [RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C.
3902 Bormann, "Neighbor Discovery Optimization for IPv6 over
3903 Low-Power Wireless Personal Area Networks (6LoWPANs)",
3904 RFC 6775, DOI 10.17487/RFC6775, November 2012,
3905 .
3907 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using
3908 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the
3909 Internet of Things (IoT): Problem Statement", RFC 7554,
3910 DOI 10.17487/RFC7554, May 2015,
3911 .
3913 [Spe09] Sperotto, A., Sadre, R., Vliet, F., and A. Pras, "A First
3914 Look into SCADA Network Traffic", IP Operations and
3915 Management, p. 518-521. , June 2009.
3917 [SRP_LATENCY]
3918 Gunther, C., "Specifying SRP Latency", 2014,
3919 .
3922 [STUDIO_IP]
3923 Mace, G., "IP Networked Studio Infrastructure for
3924 Synchronized & Real-Time Multimedia Transmissions", 2007,
3925 .
3928 [SyncE] ITU-T, "G.8261 : Timing and synchronization aspects in
3929 packet networks", Recommendation G.8261, August 2013,
3930 .
3932 [TEAS] IETF, "Traffic Engineering Architecture and Signaling",
3933 .
3935 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements
3936 for Evolved Universal Terrestrial Radio Access Network
3937 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013.
3939 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception
3940 (FDD)", 3GPP TS 25.104 3.14.0, March 2007.
3942 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access
3943 (E-UTRA); Base Station (BS) radio transmission and
3944 reception", 3GPP TS 36.104 10.11.0, July 2013.
3946 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access
3947 (E-UTRA); Requirements for support of radio resource
3948 management", 3GPP TS 36.133 12.7.0, April 2015.
3950 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access
3951 (E-UTRA); Physical channels and modulation", 3GPP
3952 TS 36.211 10.7.0, March 2013.
3954 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA)
3955 and Evolved Universal Terrestrial Radio Access Network
3956 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300
3957 10.11.0, September 2013.
3959 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3960 Networks Task Group", 2013,
3961 .
3963 [UHD-video]
3964 Holub, P., "Ultra-High Definition Videos and Their
3965 Applications over the Network", The 7th International
3966 Symposium on VICTORIES Project PetrHolub_presentation,
3967 October 2014, .
3970 [WirelessHART]
3971 www.hartcomm.org, "Industrial Communication Networks -
3972 Wireless Communication Network and Communication Profiles
3973 - WirelessHART - IEC 62591", 2010.
3975 Authors' Addresses
3977 Ethan Grossman (editor)
3978 Dolby Laboratories, Inc.
3979 1275 Market Street
3980 San Francisco, CA 94103
3981 USA
3983 Phone: +1 415 645 4726
3984 Email: ethan.grossman@dolby.com
3985 URI: http://www.dolby.com
3986 Craig Gunther
3987 Harman International
3988 10653 South River Front Parkway
3989 South Jordan, UT 84095
3990 USA
3992 Phone: +1 801 568-7675
3993 Email: craig.gunther@harman.com
3994 URI: http://www.harman.com
3996 Pascal Thubert
3997 Cisco Systems, Inc
3998 Building D
3999 45 Allee des Ormes - BP1200
4000 MOUGINS - Sophia Antipolis 06254
4001 FRANCE
4003 Phone: +33 497 23 26 34
4004 Email: pthubert@cisco.com
4006 Patrick Wetterwald
4007 Cisco Systems
4008 45 Allees des Ormes
4009 Mougins 06250
4010 FRANCE
4012 Phone: +33 4 97 23 26 36
4013 Email: pwetterw@cisco.com
4015 Jean Raymond
4016 Hydro-Quebec
4017 1500 University
4018 Montreal H3A3S7
4019 Canada
4021 Phone: +1 514 840 3000
4022 Email: raymond.jean@hydro.qc.ca
4023 Jouni Korhonen
4024 Broadcom Corporation
4025 3151 Zanker Road
4026 San Jose, CA 95134
4027 USA
4029 Email: jouni.nospam@gmail.com
4031 Yu Kaneko
4032 Toshiba
4033 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi
4034 Kanagawa, Japan
4036 Email: yu1.kaneko@toshiba.co.jp
4038 Subir Das
4039 Applied Communication Sciences
4040 150 Mount Airy Road, Basking Ridge
4041 New Jersey, 07920, USA
4043 Email: sdas@appcomsci.com
4045 Yiyong Zha
4046 Huawei Technologies
4048 Email: zhayiyong@huawei.com
4050 Balazs Varga
4051 Ericsson
4052 Konyves Kalman krt. 11/B
4053 Budapest 1097
4054 Hungary
4056 Email: balazs.a.varga@ericsson.com
4058 Janos Farkas
4059 Ericsson
4060 Konyves Kalman krt. 11/B
4061 Budapest 1097
4062 Hungary
4064 Email: janos.farkas@ericsson.com
4065 Franz-Josef Goetz
4066 Siemens
4067 Gleiwitzerstr. 555
4068 Nurnberg 90475
4069 Germany
4071 Email: franz-josef.goetz@siemens.com
4073 Juergen Schmitt
4074 Siemens
4075 Gleiwitzerstr. 555
4076 Nurnberg 90475
4077 Germany
4079 Email: juergen.jues.schmitt@siemens.com
4081 Xavier Vilajosana
4082 Worldsensing
4083 483 Arago
4084 Barcelona, Catalonia 08013
4085 Spain
4087 Email: xvilajosana@worldsensing.com
4089 Toktam Mahmoodi
4090 King's College London
4091 Strand
4092 London WC2R 2LS
4093 United Kingdom
4095 Email: toktam.mahmoodi@kcl.ac.uk
4097 Spiros Spirou
4098 Intracom Telecom
4099 19.7 km Markopoulou Ave.
4100 Peania, Attiki 19002
4101 Greece
4103 Email: spis@intracom-telecom.com
4104 Petra Vizarreta
4105 Technical University of Munich, TUM
4106 Maxvorstadt, Arcisstrasse 21
4107 Munich 80333
4108 Germany
4110 Email: petra.vizarreta@lkn.ei.tum.de
4112 Daniel Huang
4113 ZTE Corporation, Inc.
4114 No. 50 Software Avenue
4115 Nanjing, Jiangsu 210012
4116 P.R. China
4118 Email: huang.guangping@zte.com.cn
4120 Xuesong Geng
4121 Huawei Technologies
4123 Email: gengxuesong@huawei.com
4125 Diego Dujovne
4126 Universidad Diego Portales
4128 Email: diego.dujovne@mail.udp.cl
4130 Maik Seewald
4131 Cisco Systems
4133 Email: maseewal@cisco.com