idnits 2.17.1
draft-ietf-detnet-use-cases-17.txt:
Checking boilerplate required by RFC 5378 and the IETF Trust (see
https://trustee.ietf.org/license-info):
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/checklist :
----------------------------------------------------------------------------
** The document seems to lack an IANA Considerations section. (See Section
2.2 of https://www.ietf.org/id-info/checklist for how to handle the case
when there are no actions for IANA.)
Miscellaneous warnings:
----------------------------------------------------------------------------
== The copyright year in the IETF Trust and authors Copyright Line does not
match the current year
-- The document date (June 26, 2018) is 2131 days in the past. Is this
intentional?
Checking references for intended status: Informational
----------------------------------------------------------------------------
== Unused Reference: 'ACE' is defined on line 3625, but no explicit
reference was found in the text
== Unused Reference: 'CCAMP' is defined on line 3637, but no explicit
reference was found in the text
== Unused Reference: 'CPRI-transp' is defined on line 3656, but no explicit
reference was found in the text
== Unused Reference: 'DICE' is defined on line 3665, but no explicit
reference was found in the text
== Unused Reference: 'EA12' is defined on line 3668, but no explicit
reference was found in the text
== Unused Reference: 'HART' is defined on line 3689, but no explicit
reference was found in the text
== Unused Reference: 'I-D.ietf-6tisch-terminology' is defined on line 3708,
but no explicit reference was found in the text
== Unused Reference: 'I-D.ietf-ipv6-multilink-subnets' is defined on line
3724, but no explicit reference was found in the text
== Unused Reference: 'I-D.ietf-roll-rpl-industrial-applicability' is
defined on line 3735, but no explicit reference was found in the text
== Unused Reference: 'I-D.thubert-6lowpan-backbone-router' is defined on
line 3757, but no explicit reference was found in the text
== Unused Reference: 'IEC61850-90-12' is defined on line 3776, but no
explicit reference was found in the text
== Unused Reference: 'IEEE8021TSN' is defined on line 3846, but no explicit
reference was found in the text
== Unused Reference: 'IETFDetNet' is defined on line 3852, but no explicit
reference was found in the text
== Unused Reference: 'ISO7240-16' is defined on line 3865, but no explicit
reference was found in the text
== Unused Reference: 'LTE-Latency' is defined on line 3876, but no explicit
reference was found in the text
== Unused Reference: 'RFC2119' is defined on line 3927, but no explicit
reference was found in the text
== Unused Reference: 'RFC2460' is defined on line 3932, but no explicit
reference was found in the text
== Unused Reference: 'RFC2474' is defined on line 3936, but no explicit
reference was found in the text
== Unused Reference: 'RFC3209' is defined on line 3947, but no explicit
reference was found in the text
== Unused Reference: 'RFC3393' is defined on line 3952, but no explicit
reference was found in the text
== Unused Reference: 'RFC3444' is defined on line 3963, but no explicit
reference was found in the text
== Unused Reference: 'RFC3972' is defined on line 3968, but no explicit
reference was found in the text
== Unused Reference: 'RFC4291' is defined on line 3977, but no explicit
reference was found in the text
== Unused Reference: 'RFC4903' is defined on line 3986, but no explicit
reference was found in the text
== Unused Reference: 'RFC4919' is defined on line 3990, but no explicit
reference was found in the text
== Unused Reference: 'RFC6282' is defined on line 4007, but no explicit
reference was found in the text
== Unused Reference: 'RFC6775' is defined on line 4025, but no explicit
reference was found in the text
== Unused Reference: 'TEAS' is defined on line 4056, but no explicit
reference was found in the text
== Unused Reference: 'UHD-video' is defined on line 4094, but no explicit
reference was found in the text
== Outdated reference: A later version (-30) exists of
draft-ietf-6tisch-architecture-14
== Outdated reference: A later version (-13) exists of
draft-ietf-detnet-architecture-05
== Outdated reference: A later version (-09) exists of
draft-ietf-detnet-problem-statement-05
-- Obsolete informational reference (is this intentional?): RFC 2460
(Obsoleted by RFC 8200)
Summary: 1 error (**), 0 flaws (~~), 33 warnings (==), 2 comments (--).
Run idnits with the --verbose option for more detailed information about
the items above.
--------------------------------------------------------------------------------
2 Internet Engineering Task Force E. Grossman, Ed.
3 Internet-Draft DOLBY
4 Intended status: Informational June 26, 2018
5 Expires: December 28, 2018
7 Deterministic Networking Use Cases
8 draft-ietf-detnet-use-cases-17
10 Abstract
12 This draft documents requirements from several diverse industries
13 for establishing multi-hop paths for characterized flows with
14 deterministic properties. In this context, "deterministic" implies
15 flows that provide guaranteed bandwidth and latency, that can be
16 established from either a Layer 2 or Layer 3 (IP) interface, and
17 that can co-exist on an IP network with best-effort traffic.
19 Additional requirements include optional redundant paths, very high
20 reliability paths, time synchronization, and clock distribution.
21 Industries considered include professional audio, electrical
22 utilities, building automation systems, wireless for industrial,
23 cellular radio, industrial machine-to-machine, mining, private
24 blockchain, and network slicing.
26 For each case, this document will identify the application,
27 representative solutions used today, and the improvements that IETF
28 DetNet solutions may enable.
30 Status of This Memo
32 This Internet-Draft is submitted in full conformance with the
33 provisions of BCP 78 and BCP 79.
35 Internet-Drafts are working documents of the Internet Engineering
36 Task Force (IETF). Note that other groups may also distribute
37 working documents as Internet-Drafts. The list of current Internet-
38 Drafts is at https://datatracker.ietf.org/drafts/current/.
40 Internet-Drafts are draft documents valid for a maximum of six months
41 and may be updated, replaced, or obsoleted by other documents at any
42 time. It is inappropriate to use Internet-Drafts as reference
43 material or to cite them other than as "work in progress."
45 This Internet-Draft will expire on December 28, 2018.
47 Copyright Notice
49 Copyright (c) 2018 IETF Trust and the persons identified as the
50 document authors. All rights reserved.
52 This document is subject to BCP 78 and the IETF Trust's Legal
53 Provisions Relating to IETF Documents
54 (https://trustee.ietf.org/license-info) in effect on the date of
55 publication of this document. Please review these documents
56 carefully, as they describe your rights and restrictions with respect
57 to this document. Code Components extracted from this document must
58 include Simplified BSD License text as described in Section 4.e of
59 the Trust Legal Provisions and are provided without warranty as
60 described in the Simplified BSD License.
62 Table of Contents
64 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 5
65 2. Pro Audio and Video . . . . . . . . . . . . . . . . . . . . . 6
66 2.1. Use Case Description . . . . . . . . . . . . . . . . . . 6
67 2.1.1. Uninterrupted Stream Playback . . . . . . . . . . . . 7
68 2.1.2. Synchronized Stream Playback . . . . . . . . . . . . 7
69 2.1.3. Sound Reinforcement . . . . . . . . . . . . . . . . . 8
70 2.1.4. Deterministic Time to Establish Streaming . . . . . . 8
71 2.1.5. Secure Transmission . . . . . . . . . . . . . . . . . 8
72 2.1.5.1. Safety . . . . . . . . . . . . . . . . . . . . . 8
73 2.2. Pro Audio Today . . . . . . . . . . . . . . . . . . . . . 9
74 2.3. Pro Audio Future . . . . . . . . . . . . . . . . . . . . 9
75 2.3.1. Layer 3 Interconnecting Layer 2 Islands . . . . . . . 9
76 2.3.2. High Reliability Stream Paths . . . . . . . . . . . . 9
77 2.3.3. Integration of Reserved Streams into IT Networks . . 9
78 2.3.4. Use of Unused Reservations by Best-Effort Traffic . . 10
79 2.3.5. Traffic Segregation . . . . . . . . . . . . . . . . . 10
80 2.3.5.1. Packet Forwarding Rules, VLANs and Subnets . . . 10
81 2.3.5.2. Multicast Addressing (IPv4 and IPv6) . . . . . . 11
82 2.3.6. Latency Optimization by a Central Controller . . . . 11
83 2.3.7. Reduced Device Cost Due To Reduced Buffer Memory . . 11
84 2.4. Pro Audio Asks . . . . . . . . . . . . . . . . . . . . . 12
85 3. Electrical Utilities . . . . . . . . . . . . . . . . . . . . 12
86 3.1. Use Case Description . . . . . . . . . . . . . . . . . . 12
87 3.1.1. Transmission Use Cases . . . . . . . . . . . . . . . 12
88 3.1.1.1. Protection . . . . . . . . . . . . . . . . . . . 12
89 3.1.1.2. Intra-Substation Process Bus Communications . . . 18
90 3.1.1.3. Wide Area Monitoring and Control Systems . . . . 19
91 3.1.1.4. IEC 61850 WAN engineering guidelines requirement
92 classification . . . . . . . . . . . . . . . . . 20
93 3.1.2. Generation Use Case . . . . . . . . . . . . . . . . . 21
94 3.1.2.1. Control of the Generated Power . . . . . . . . . 21
95 3.1.2.2. Control of the Generation Infrastructure . . . . 22
96 3.1.3. Distribution use case . . . . . . . . . . . . . . . . 27
97 3.1.3.1. Fault Location Isolation and Service Restoration
98 (FLISR) . . . . . . . . . . . . . . . . . . . . . 27
99 3.2. Electrical Utilities Today . . . . . . . . . . . . . . . 28
100 3.2.1. Security Current Practices and Limitations . . . . . 28
101 3.3. Electrical Utilities Future . . . . . . . . . . . . . . . 30
102 3.3.1. Migration to Packet-Switched Network . . . . . . . . 31
103 3.3.2. Telecommunications Trends . . . . . . . . . . . . . . 31
104 3.3.2.1. General Telecommunications Requirements . . . . . 31
105 3.3.2.2. Specific Network topologies of Smart Grid
106 Applications . . . . . . . . . . . . . . . . . . 32
107 3.3.2.3. Precision Time Protocol . . . . . . . . . . . . . 33
108 3.3.3. Security Trends in Utility Networks . . . . . . . . . 34
109 3.4. Electrical Utilities Asks . . . . . . . . . . . . . . . . 36
110 4. Building Automation Systems . . . . . . . . . . . . . . . . . 36
111 4.1. Use Case Description . . . . . . . . . . . . . . . . . . 36
112 4.2. Building Automation Systems Today . . . . . . . . . . . . 37
113 4.2.1. BAS Architecture . . . . . . . . . . . . . . . . . . 37
114 4.2.2. BAS Deployment Model . . . . . . . . . . . . . . . . 38
115 4.2.3. Use Cases for Field Networks . . . . . . . . . . . . 40
116 4.2.3.1. Environmental Monitoring . . . . . . . . . . . . 40
117 4.2.3.2. Fire Detection . . . . . . . . . . . . . . . . . 40
118 4.2.3.3. Feedback Control . . . . . . . . . . . . . . . . 41
119 4.2.4. Security Considerations . . . . . . . . . . . . . . . 41
120 4.3. BAS Future . . . . . . . . . . . . . . . . . . . . . . . 41
121 4.4. BAS Asks . . . . . . . . . . . . . . . . . . . . . . . . 42
122 5. Wireless for Industrial . . . . . . . . . . . . . . . . . . . 42
123 5.1. Use Case Description . . . . . . . . . . . . . . . . . . 42
124 5.1.1. Network Convergence using 6TiSCH . . . . . . . . . . 43
125 5.1.2. Common Protocol Development for 6TiSCH . . . . . . . 43
126 5.2. Wireless Industrial Today . . . . . . . . . . . . . . . . 44
127 5.3. Wireless Industrial Future . . . . . . . . . . . . . . . 44
128 5.3.1. Unified Wireless Network and Management . . . . . . . 44
129 5.3.1.1. PCE and 6TiSCH ARQ Retries . . . . . . . . . . . 46
130 5.3.2. Schedule Management by a PCE . . . . . . . . . . . . 47
131 5.3.2.1. PCE Commands and 6TiSCH CoAP Requests . . . . . . 47
132 5.3.2.2. 6TiSCH IP Interface . . . . . . . . . . . . . . . 48
133 5.3.3. 6TiSCH Security Considerations . . . . . . . . . . . 49
134 5.4. Wireless Industrial Asks . . . . . . . . . . . . . . . . 49
135 6. Cellular Radio . . . . . . . . . . . . . . . . . . . . . . . 49
136 6.1. Use Case Description . . . . . . . . . . . . . . . . . . 49
137 6.1.1. Network Architecture . . . . . . . . . . . . . . . . 49
138 6.1.2. Delay Constraints . . . . . . . . . . . . . . . . . . 50
139 6.1.3. Time Synchronization Constraints . . . . . . . . . . 52
140 6.1.4. Transport Loss Constraints . . . . . . . . . . . . . 54
141 6.1.5. Security Considerations . . . . . . . . . . . . . . . 54
142 6.2. Cellular Radio Networks Today . . . . . . . . . . . . . . 55
143 6.2.1. Fronthaul . . . . . . . . . . . . . . . . . . . . . . 55
144 6.2.2. Midhaul and Backhaul . . . . . . . . . . . . . . . . 55
145 6.3. Cellular Radio Networks Future . . . . . . . . . . . . . 56
146 6.4. Cellular Radio Networks Asks . . . . . . . . . . . . . . 58
147 7. Industrial M2M . . . . . . . . . . . . . . . . . . . . . . . 59
148 7.1. Use Case Description . . . . . . . . . . . . . . . . . . 59
149 7.2. Industrial M2M Communication Today . . . . . . . . . . . 60
150 7.2.1. Transport Parameters . . . . . . . . . . . . . . . . 60
151 7.2.2. Stream Creation and Destruction . . . . . . . . . . . 61
152 7.3. Industrial M2M Future . . . . . . . . . . . . . . . . . . 61
153 7.4. Industrial M2M Asks . . . . . . . . . . . . . . . . . . . 62
154 8. Mining Industry . . . . . . . . . . . . . . . . . . . . . . . 62
155 8.1. Use Case Description . . . . . . . . . . . . . . . . . . 62
156 8.2. Mining Industry Today . . . . . . . . . . . . . . . . . . 63
157 8.3. Mining Industry Future . . . . . . . . . . . . . . . . . 63
158 8.4. Mining Industry Asks . . . . . . . . . . . . . . . . . . 64
159 9. Private Blockchain . . . . . . . . . . . . . . . . . . . . . 64
160 9.1. Use Case Description . . . . . . . . . . . . . . . . . . 64
161 9.1.1. Blockchain Operation . . . . . . . . . . . . . . . . 65
162 9.1.2. Blockchain Network Architecture . . . . . . . . . . . 65
163 9.1.3. Security Considerations . . . . . . . . . . . . . . . 66
164 9.2. Private Blockchain Today . . . . . . . . . . . . . . . . 66
165 9.3. Private Blockchain Future . . . . . . . . . . . . . . . . 66
166 9.4. Private Blockchain Asks . . . . . . . . . . . . . . . . . 66
167 10. Network Slicing . . . . . . . . . . . . . . . . . . . . . . . 67
168 10.1. Use Case Description . . . . . . . . . . . . . . . . . . 67
169 10.2. DetNet Applied to Network Slicing . . . . . . . . . . . 67
170 10.2.1. Resource Isolation Across Slices . . . . . . . . . . 67
171 10.2.2. Deterministic Services Within Slices . . . . . . . . 67
172 10.3. A Network Slicing Use Case Example - 5G Bearer Network . 68
173 10.4. Non-5G Applications of Network Slicing . . . . . . . . . 68
174 10.5. Limitations of DetNet in Network Slicing . . . . . . . . 69
175 10.6. Network Slicing Today and Future . . . . . . . . . . . . 69
176 10.7. Network Slicing Asks . . . . . . . . . . . . . . . . . . 69
177 11. Use Case Common Themes . . . . . . . . . . . . . . . . . . . 69
178 11.1. Unified, standards-based network . . . . . . . . . . . . 69
179 11.1.1. Extensions to Ethernet . . . . . . . . . . . . . . . 69
180 11.1.2. Centrally Administered . . . . . . . . . . . . . . . 69
181 11.1.3. Standardized Data Flow Information Models . . . . . 70
182 11.1.4. L2 and L3 Integration . . . . . . . . . . . . . . . 70
183 11.1.5. Consideration for IPv4 . . . . . . . . . . . . . . . 70
184 11.1.6. Guaranteed End-to-End Delivery . . . . . . . . . . . 70
185 11.1.7. Replacement for Multiple Proprietary Deterministic
186 Networks . . . . . . . . . . . . . . . . . . . . . . 70
187 11.1.8. Mix of Deterministic and Best-Effort Traffic . . . . 71
188 11.1.9. Unused Reserved BW to be Available to Best Effort
189 Traffic . . . . . . . . . . . . . . . . . . . . . . 71
190 11.1.10. Lower Cost, Multi-Vendor Solutions . . . . . . . . . 71
192 11.2. Scalable Size . . . . . . . . . . . . . . . . . . . . . 71
193 11.3. Scalable Timing Parameters and Accuracy . . . . . . . . 71
194 11.3.1. Bounded Latency . . . . . . . . . . . . . . . . . . 71
195 11.3.2. Low Latency . . . . . . . . . . . . . . . . . . . . 72
196 11.3.3. Symmetrical Path Delays . . . . . . . . . . . . . . 72
197 11.4. High Reliability and Availability . . . . . . . . . . . 72
198 11.5. Security . . . . . . . . . . . . . . . . . . . . . . . . 72
199 11.6. Deterministic Flows . . . . . . . . . . . . . . . . . . 73
200 12. Use Cases Explicitly Out of Scope for DetNet . . . . . . . . 73
201 12.1. DetNet Scope Limitations . . . . . . . . . . . . . . . . 73
202 12.2. Internet-based Applications . . . . . . . . . . . . . . 74
203 12.2.1. Use Case Description . . . . . . . . . . . . . . . . 74
204 12.2.1.1. Media Content Delivery . . . . . . . . . . . . . 74
205 12.2.1.2. Online Gaming . . . . . . . . . . . . . . . . . 74
206 12.2.1.3. Virtual Reality . . . . . . . . . . . . . . . . 74
207 12.2.2. Internet-Based Applications Today . . . . . . . . . 74
208 12.2.3. Internet-Based Applications Future . . . . . . . . . 74
209 12.2.4. Internet-Based Applications Asks . . . . . . . . . . 75
210 12.3. Pro Audio and Video - Digital Rights Management (DRM) . 75
211 12.4. Pro Audio and Video - Link Aggregation . . . . . . . . . 75
212 13. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 76
213 14. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 77
214 14.1. Pro Audio . . . . . . . . . . . . . . . . . . . . . . . 77
215 14.2. Utility Telecom . . . . . . . . . . . . . . . . . . . . 78
216 14.3. Building Automation Systems . . . . . . . . . . . . . . 78
217 14.4. Wireless for Industrial . . . . . . . . . . . . . . . . 78
218 14.5. Cellular Radio . . . . . . . . . . . . . . . . . . . . . 78
219 14.6. Industrial M2M . . . . . . . . . . . . . . . . . . . . . 79
220 14.7. Internet Applications and CoMP . . . . . . . . . . . . . 79
221 14.8. Electrical Utilities . . . . . . . . . . . . . . . . . . 79
222 14.9. Network Slicing . . . . . . . . . . . . . . . . . . . . 79
223 14.10. Mining . . . . . . . . . . . . . . . . . . . . . . . . . 79
224 14.11. Private Blockchain . . . . . . . . . . . . . . . . . . . 79
225 15. Informative References . . . . . . . . . . . . . . . . . . . 79
226 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 90
228 1. Introduction
230 This draft presents use cases from diverse industries which have in
231 common a need for deterministic flows, but which also differ notably
232 in their network topologies and specific desired behavior. Together,
233 they provide broad industry context for DetNet and a yardstick
234 against which proposed DetNet designs can be measured (to what extent
235 does a proposed design satisfy these various use cases?).
237 For DetNet, use cases explicitly do not define requirements; the
238 DetNet WG will consider the use cases, decide which elements are in
239 scope for DetNet, and incorporate the results into future drafts.
240 Similarly, the DetNet use case draft explicitly does not suggest any
241 specific design, architecture, or protocols; those will be topics of
242 future drafts.
244 We present for each use case the answers to the following questions:
246 o What is the use case?
248 o How is it addressed today?
250 o How would you like it to be addressed in the future?
252 o What do you want the IETF to deliver?
254 The level of detail in each use case should be sufficient to express
255 the relevant elements of the use case, but not more.
257 At the end we consider the use cases collectively, and examine the
258 most significant goals they have in common.
260 2. Pro Audio and Video
262 2.1. Use Case Description
264 The professional audio and video industry ("ProAV") includes:
266 o Music and film content creation
268 o Broadcast
270 o Cinema
272 o Live sound
274 o Public address, media and emergency systems at large venues
275 (airports, stadiums, churches, theme parks).
277 These industries have already transitioned audio and video signals
278 from analog to digital. However, the digital interconnect systems
279 remain primarily point-to-point with a single (or small number of)
280 signals per link, interconnected with purpose-built hardware.
282 These industries are now transitioning to packet-based infrastructure
283 to reduce cost, increase routing flexibility, and integrate with
284 existing IT infrastructure.
286 Today ProAV applications have no way to establish deterministic
287 flows from a standards-based Layer 3 (IP) interface, which is a
288 fundamental limitation for the use cases described here.
289 Deterministic flows can be created today within standards-based
290 Layer 2 LANs (e.g. using IEEE 802.1 AVB); however, these are not
291 routable via IP and thus are not effective for distribution over
292 wider areas (for example, broadcast events that span wide
293 geographic areas).
294 It would be highly desirable if such flows could be routed over the
295 open Internet; however, solutions with more limited scope (e.g.
296 enterprise networks) would still provide a substantial improvement.
298 The following sections describe specific ProAV use cases.
300 2.1.1. Uninterrupted Stream Playback
302 Transmitting audio and video streams for live playback is unlike
303 common file transfer because uninterrupted stream playback in the
304 presence of network errors cannot be achieved by re-trying the
305 transmission; by the time the missing or corrupt packet has been
306 identified it is too late to execute a re-try operation. Buffering
307 can be used to provide enough delay to allow time for one or more
308 retries, however this is not an effective solution in applications
309 where large delays (latencies) are not acceptable (as discussed
310 below).
312 Streams with guaranteed bandwidth can eliminate congestion on the
313 network as a cause of transmission errors that would lead to playback
314 interruption. Use of redundant paths can further mitigate
315 transmission errors to provide greater stream reliability.
317 2.1.2. Synchronized Stream Playback
319 Latency in this context is the time between when a signal is
320 initially sent over a stream and when it is received. A common
321 example in ProAV is time-synchronizing audio and video when they take
322 separate paths through the playback system. In this case the latency
323 of both the audio and video streams must be bounded and consistent if
324 the sound is to remain matched to the movement in the video. A
325 common tolerance for audio/video sync is one NTSC video frame (about
326 33 ms); to maintain the audience's perception of correct lip sync,
327 the latency needs to be consistent within some reasonable tolerance,
328 for example 10%.
330 A common architecture for synchronizing multiple streams that have
331 different paths through the network (and thus potentially different
332 latencies) is to enable measurement of the latency of each path, and
333 have the data sinks (for example speakers) delay (buffer) all packets
334 on all but the slowest path. Each packet of each stream is assigned
335 a presentation time which is based on the longest required delay.
337 This implies that all sinks must maintain a common time reference of
338 sufficient accuracy, which can be achieved by any of various
339 techniques.
341 This type of architecture is commonly implemented using a central
342 controller that determines path delays and arbitrates buffering
343 delays.
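As a rough illustration (not part of any DetNet specification), the buffering rule described above can be sketched as follows; the sink names and latency figures are purely hypothetical:

```python
def presentation_delays(path_latency_ms):
    """Given measured per-sink path latencies, return the extra buffering
    delay each sink must apply so that every sink plays a given packet at
    the same presentation time, which is set by the slowest path."""
    slowest = max(path_latency_ms.values())
    return {sink: slowest - lat for sink, lat in path_latency_ms.items()}

# Hypothetical sinks: the balcony speaker sits on the slowest path and
# buffers nothing extra; nearer sinks buffer the difference.
delays = presentation_delays({"stage_L": 2.0, "stage_R": 2.5, "balcony": 7.0})
```

In practice the central controller would distribute these per-sink delays (or equivalently, per-packet presentation times) after measuring the path latencies.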
345 2.1.3. Sound Reinforcement
347 Consider the latency (delay) from when a person speaks into a
348 microphone to when their voice emerges from the speaker. If this
349 delay is longer than about 10-15 milliseconds, it is noticeable and
350 can make a sound reinforcement system unusable (see slide 6 of
351 [SRP_LATENCY]). (If you have ever tried to speak in the presence of
352 a delayed echo of your own voice, you will recognize this effect.)
354 Note that the 15ms latency bound includes all parts of the signal
355 path, not just the network, so the network latency must be
356 significantly less than 15ms.
358 In some cases local performers must perform in synchrony with a
359 remote broadcast. In such cases the latencies of the broadcast
360 stream and the local performer must be adjusted to match each other,
361 with a worst case of one video frame (33ms for NTSC video).
363 In cases where audio phase is a consideration, for example beam-
364 forming using multiple speakers, latency requirements can be in the
365 10 microsecond range (1 audio sample at 96kHz).
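The figure above follows from simple arithmetic; one audio sample period at a 96 kHz sampling rate is:

```python
# One sample period at a 96 kHz sampling rate, the latency scale cited
# for phase-critical uses such as speaker beam-forming.
sample_period_us = 1_000_000 / 96_000   # ~10.4 microseconds
```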
367 2.1.4. Deterministic Time to Establish Streaming
369 Note: The WG has decided that guidelines for deterministic stream
370 startup time are not within the scope of DetNet. If bounded timing
371 for establishing or re-establishing streams is required in a given
372 use case, it is up to the application/system to achieve this. (The
373 supporting text from this section has been removed as of draft 12.)
375 2.1.5. Secure Transmission
377 2.1.5.1. Safety
379 Professional audio systems can include amplifiers that are capable of
380 generating hundreds or thousands of watts of audio power which, if
381 used incorrectly, can cause hearing damage to those in the vicinity.
382 Apart from the usual care required by the systems operators to
383 prevent such incidents, the network traffic that controls these
384 devices must be secured (as with any sensitive application traffic).
386 2.2. Pro Audio Today
388 Some proprietary systems enable deterministic streams at Layer 3;
389 however, they are "engineered networks" that require careful
390 configuration to operate, often require the system to be
391 over-provisioned, and implicitly assume that all devices on the
392 network voluntarily play by the rules of that network. Enabling
393 these industries to successfully transition to an interoperable
394 multi-vendor packet-based infrastructure requires effective open
395 standards, and we believe that establishing relevant IETF standards
396 is a crucial factor.
398 2.3. Pro Audio Future
400 2.3.1. Layer 3 Interconnecting Layer 2 Islands
402 It would be valuable to enable IP to connect multiple Layer 2 LANs.
404 As an example, ESPN recently constructed a state-of-the-art 194,000
405 sq ft, $125 million broadcast studio called DC2. The DC2 network is
406 capable of handling 46 Tbps of throughput with 60,000 simultaneous
407 signals. Inside the facility are 1,100 miles of fiber feeding four
408 audio control rooms (see [ESPN_DC2]).
410 In designing DC2 they replaced as much point-to-point technology as
411 they could with packet-based technology. They constructed seven
412 individual studios using Layer 2 LANs (using IEEE 802.1 AVB) that
413 were entirely effective at routing audio within the LANs. However,
414 to interconnect these Layer 2 LAN islands they ended up using
415 dedicated paths in a custom SDN (Software-Defined Networking) router
416 because no standards-based routing solution was available.
418 2.3.2. High Reliability Stream Paths
420 On-air and other live media streams are often backed up with
421 redundant links that seamlessly act to deliver the content when the
422 primary link fails for any reason. In point-to-point systems this is
423 provided by an additional point-to-point link; the analogous
424 requirement in a packet-based system is to provide an alternate path
425 through the network such that no individual link can bring down the
426 system.
428 2.3.3. Integration of Reserved Streams into IT Networks
430 A commonly cited goal of moving to a packet based media
431 infrastructure is that costs can be reduced by using off the shelf,
432 commodity network hardware. In addition, economy of scale can be
433 realized by combining media infrastructure with IT infrastructure.
435 In keeping with these goals, stream reservation technology should be
436 compatible with existing protocols, and not compromise use of the
437 network for best effort (non-time-sensitive) traffic.
439 2.3.4. Use of Unused Reservations by Best-Effort Traffic
441 In cases where stream bandwidth is reserved but not currently used
442 (or is under-utilized), that bandwidth must be available to best-
443 effort (i.e. non-time-sensitive) traffic. For example a single
444 stream may be nailed up (reserved) for specific media content that
445 needs to be presented at different times of the day, ensuring timely
446 delivery of that content, yet in between those times the full
447 bandwidth of the network can be utilized for best-effort tasks such
448 as file transfers.
450 This also addresses a concern of IT network administrators who are
451 considering adding reserved-bandwidth traffic to their networks,
452 namely that "users will reserve large quantities of bandwidth and
453 then never un-reserve it even though they are not using it, and soon
454 the network will have no bandwidth left."
456 2.3.5. Traffic Segregation
458 Note: It is still under WG discussion whether this topic will be
459 addressed by DetNet.
461 Sink devices may be low cost devices with limited processing power.
462 In order to not overwhelm the CPUs in these devices it is important
463 to limit the amount of traffic that these devices must process.
465 As an example, consider the use of individual seat speakers in a
466 cinema. These speakers typically must be cost-reduced, since the
467 quantities in a single theater can reach hundreds of seats.
468 Discovery protocols alone in a one thousand seat theater can generate
469 enough broadcast traffic to overwhelm a low powered CPU. Thus an
470 installation like this will benefit greatly from some type of traffic
471 segregation that can define groups of seats to reduce traffic within
472 each group. All seats in the theater must still be able to
473 communicate with a central controller.
475 There are many techniques that can be used to support this
476 requirement including (but not limited to) the following examples.
478 2.3.5.1. Packet Forwarding Rules, VLANs and Subnets
480 Packet forwarding rules can be used to eliminate some extraneous
481 streaming traffic from reaching potentially low-powered sink
482 devices; however, there may be other types of broadcast traffic that
483 should be eliminated by other means, for example VLANs or IP subnets.
485 2.3.5.2. Multicast Addressing (IPv4 and IPv6)
487 Multicast addressing is commonly used to keep bandwidth utilization
488 of shared links to a minimum.
490 Because of the MAC-address-based forwarding of Layer 2 bridges, it
491 is important that a multicast MAC address be associated with only
492 one stream. This prevents a reservation from forwarding packets of
493 one stream down a path that has no interested sinks simply because
494 another stream on that same path shares the same multicast MAC
495 address.
497 Since each multicast MAC address can represent 32 different IPv4
498 multicast addresses, a process must be put in place to ensure such
499 collisions do not occur. Requiring the use of IPv6 addresses can
500 achieve this; however, due to the continued prevalence of IPv4,
501 solutions that are effective for IPv4 installations are also
502 required.
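The 32-to-1 ambiguity arises from the standard RFC 1112 mapping of IPv4 multicast addresses onto Ethernet MAC addresses (the fixed prefix 01:00:5e plus the low 23 bits of the IP address). A minimal sketch, with illustrative group addresses:

```python
import ipaddress

def ipv4_mcast_to_mac(group):
    """Map an IPv4 multicast group to its Ethernet MAC address per
    RFC 1112: the prefix 01:00:5e followed by the low 23 bits of the
    IP address.  The 5 discarded address bits mean that 32 distinct
    groups share each multicast MAC address."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

# Two unrelated groups that collide on the same multicast MAC:
mac_a = ipv4_mcast_to_mac("224.1.1.1")
mac_b = ipv4_mcast_to_mac("239.129.1.1")
```

Any address-assignment process for IPv4 installations must avoid giving two concurrent streams groups that collide in this way.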
503 2.3.6. Latency Optimization by a Central Controller
505 A central network controller might also perform optimizations based
506 on the individual path delays, for example sinks that are closer to
507 the source can inform the controller that they can accept greater
508 latency since they will be buffering packets to match presentation
509 times of farther away sinks. The controller might then move a stream
510 reservation on a short path to a longer path in order to free up
511 bandwidth for other critical streams on that short path. See slides
512 3-5 of [SRP_LATENCY].
514 Additional optimization can be achieved in cases where sinks have
515 differing latency requirements; for example, in a live outdoor concert
516 the speaker sinks have stricter latency requirements than the
517 recording hardware sinks. See slide 7 of [SRP_LATENCY].
519 2.3.7. Reduced Device Cost Due To Reduced Buffer Memory
521 Device cost can be reduced in a system with guaranteed reservations
522 with a small bounded latency due to the reduced requirements for
523 buffering (i.e. memory) on sink devices. For example, a theme park
524 might broadcast a live event across the globe via a layer 3 protocol;
525 in such cases the size of the buffers required is proportional to the
526 latency bounds and jitter caused by delivery, which depends on the
527 worst case segment of the end-to-end network path. For example, on
528 today's open Internet the latency is typically unacceptable for audio
529 and video streaming without many seconds of buffering. In such
530 scenarios a single gateway device at the local network that receives
531 the feed from the remote site would provide the expensive buffering
532 required to mask the latency and jitter issues associated with long
533 distance delivery. Sink devices in the local location would have no
534 additional buffering requirements, and thus no additional costs,
535 beyond those required for delivery of local content. The sink device
536 would be receiving the identical packets as those sent by the source
537 and would be unaware that there were any latency or jitter issues
538 along the path.
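The proportionality described above can be made concrete with a rough sizing sketch; the stream rate, bounds, and the sizing rule itself are illustrative assumptions, not figures taken from any standard:

```python
# Rough sizing sketch (assumption: worst-case buffer occupancy is the
# stream rate times the path's latency bound plus jitter).
def buffer_bytes(rate_bps: float, latency_s: float, jitter_s: float) -> float:
    return rate_bps / 8 * (latency_s + jitter_s)

# A 5 Mbit/s A/V stream on a local path with a 10 ms bound and 1 ms
# jitter needs only a few kilobytes of sink buffering; the same
# stream buffered for 5 s of long-distance jitter needs megabytes --
# the cost the single gateway device absorbs on behalf of the sinks.
print(buffer_bytes(5e6, 0.010, 0.001))  # ~6875 bytes
print(buffer_bytes(5e6, 5.0, 0.0))      # 3125000.0 bytes, ~3 MB
```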
540 2.4. Pro Audio Asks
542 o Layer 3 routing on top of AVB (and/or other high QoS networks)
544 o Content delivery with bounded, lowest possible latency
546 o IntServ and DiffServ integration with AVB (where practical)
548 o Single network for A/V and IT traffic
550 o Standards-based, interoperable, multi-vendor
552 o IT department friendly
554 o Enterprise-wide networks (e.g. size of San Francisco but not the
555 whole Internet (yet...))
557 3. Electrical Utilities
559 3.1. Use Case Description
561 Many systems that an electrical utility deploys today rely on high
562 availability and deterministic behavior of the underlying networks.
563 Here we present use cases in Transmission, Generation and
564 Distribution, including key timing and reliability metrics. We also
565 discuss security issues and industry trends which affect the
566 architecture of next generation utility networks.
568 3.1.1. Transmission Use Cases
570 3.1.1.1. Protection
572 Protection means not only the protection of human operators but also
573 the protection of the electrical equipment and the preservation of
574 the stability and frequency of the grid. If a fault occurs in the
575 transmission or distribution of electricity, then severe harm can
576 come to human operators and severe damage to electrical equipment
577 and the grid itself, leading to blackouts.
579 Communication links in conjunction with protection relays are used to
580 selectively isolate faults on high voltage lines, transformers,
581 reactors and other important electrical equipment. The role of the
582 teleprotection system is to selectively disconnect a faulty part by
583 transferring command signals within the shortest possible time.
585 3.1.1.1.1. Key Criteria
587 The key criteria for measuring teleprotection performance are command
588 transmission time, dependability and security. These criteria are
589 defined by the IEC standard 60834 as follows:
591 o Transmission time (Speed): The time between the moment when the
592 state changes at the transmitter input and the moment of the
593 corresponding change at the receiver output, including propagation
594 delay. Overall operating time for a teleprotection system
595 includes the time for initiating the command at the transmitting
596 end, the propagation delay over the network (including equipment)
597 and the selection and decision time at the receiving end,
598 including any additional delay due to a noisy environment.
600 o Dependability: The ability to issue and receive valid commands in
601 the presence of interference and/or noise, by minimizing the
602 probability of missing command (PMC). Dependability targets are
603 typically set for a specific bit error rate (BER) level.
605 o Security: The ability to prevent false tripping due to a noisy
606 environment, by minimizing the probability of unwanted commands
607 (PUC). Security targets are also set for a specific bit error
608 rate (BER) level.
610 Additional elements of the teleprotection system that impact its
611 performance include:
613 o Network bandwidth
615 o Failure recovery capacity (aka resiliency)
617 3.1.1.1.2. Fault Detection and Clearance Timing
619 Most power line equipment can tolerate short circuits or faults for
620 up to approximately five power cycles before sustaining irreversible
621 damage or affecting other segments in the network. This translates
622 to a total fault clearance time of 100 ms. As a safety precaution,
623 however, the actual operation time of protection systems is limited
624 to 70-80 percent of this period, including fault recognition time,
625 command transmission time and line breaker switching time.
627 Some system components, such as large electromechanical switches,
628 require a particularly long time to operate and take up the majority
629 of the total clearance time, leaving only a 10 ms window for the
630 telecommunications part of the protection scheme, independent of the
631 distance to travel. Given the sensitivity of the issue, new networks
632 impose requirements that are even more stringent: IEC standard 61850
633 limits the transfer time for protection messages to 1/4 - 1/2 cycle
634 or 4 - 8ms (for 60Hz lines) for the most critical messages.
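The arithmetic behind these figures can be checked directly, assuming the 100 ms clearance budget corresponds to five power cycles on a 50 Hz line (on a 60 Hz line, five cycles is about 83 ms):

```python
# Worked check of the timing figures above.
def cycles_to_ms(n_cycles: float, hz: float) -> float:
    return n_cycles / hz * 1000.0

total_budget = cycles_to_ms(5, 50)   # five cycles on a 50 Hz line
operate_max = 0.8 * total_budget     # protection limited to 70-80%
print(total_budget)                      # 100.0 ms clearance budget
print(round(cycles_to_ms(0.25, 60), 2))  # 4.17 -> IEC 61850 1/4 cycle
print(round(cycles_to_ms(0.5, 60), 2))   # 8.33 -> IEC 61850 1/2 cycle
```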
636 3.1.1.1.3. Symmetric Channel Delay
638 Teleprotection channels which are differential must be synchronous,
639 which means that any delays on the transmit and receive paths must
640 match each other. Teleprotection systems ideally support zero
641 asymmetric delay; typical legacy relays can tolerate delay
642 discrepancies of up to 750 us.
644 Some tools available for lowering delay variation below this
645 threshold are:
647 o For legacy systems using Time Division Multiplexing (TDM), jitter
648 buffers at the multiplexers on each end of the line can be used to
649 offset delay variation by queuing sent and received packets. The
650 length of the queues must balance the need to regulate the rate of
651 transmission with the need to limit overall delay, as larger
652 buffers result in increased latency.
654 o For jitter-prone IP packet networks, traffic management tools can
655 ensure that the teleprotection signals receive the highest
656 transmission priority to minimize jitter.
658 o Standard packet-based synchronization technologies, such as IEEE
659 1588-2008 Precision Time Protocol (PTP) and Synchronous Ethernet
660 (Sync-E), can help keep networks stable by maintaining a highly
661 accurate clock source on the various network devices.
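The jitter-buffer trade-off in the first bullet can be sketched as follows; the jitter figure and the 20 percent margin are illustrative, while 750 us is the legacy relay tolerance cited above:

```python
# Sketch of the jitter-buffer trade-off: buffer depth must cover the
# worst-case delay variation, but every microsecond of depth is added
# end-to-end latency, so depth is chosen just above the observed peak.
LEGACY_RELAY_TOLERANCE_US = 750.0  # from the text above

def playout_depth_us(peak_jitter_us: float, margin: float = 1.2) -> float:
    """Buffer depth sized just above peak delay variation."""
    return peak_jitter_us * margin

depth = playout_depth_us(200.0)           # illustrative peak jitter
print(depth)                              # 240.0 us of added latency
print(depth < LEGACY_RELAY_TOLERANCE_US)  # True
```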
663 3.1.1.1.4. Teleprotection Network Requirements (IEC 61850)
665 The following table captures the main network metrics based on the
666 IEC 61850 standard.
668 +-----------------------------+-------------------------------------+
669 | Teleprotection Requirement | Attribute |
670 +-----------------------------+-------------------------------------+
671 | One way maximum delay | 4-10 ms |
672 | Asymmetric delay required | Yes |
673 | Maximum jitter | less than 250 us (750 us for legacy |
674 | | IED) |
675 | Topology | Point to point, point to multi- |
676 | | point |
677 | Availability | 99.9999% |
678 | Precise timing required | Yes |
679 | Recovery time on node | less than 50 ms - hitless |
680 | failure | |
681 | Performance management | Yes, mandatory |
682 | Redundancy | Yes |
683 | Packet loss | 0.1% |
684 +-----------------------------+-------------------------------------+
686 Table 1: Teleprotection network requirements
688 3.1.1.1.5. Inter-Trip Protection Scheme
690 "Inter-tripping" is the signal-controlled tripping of a circuit
691 breaker to complete the isolation of a circuit or piece of apparatus
692 in concert with the tripping of other circuit breakers.
694 +--------------------------------+----------------------------------+
695 | Inter-Trip protection | Attribute |
696 | Requirement | |
697 +--------------------------------+----------------------------------+
698 | One way maximum delay | 5 ms |
699 | Asymmetric delay required | No |
700 | Maximum jitter | Not critical |
701 | Topology | Point to point, point to multi- |
702 | | point |
703 | Bandwidth | 64 Kbps |
704 | Availability | 99.9999% |
705 | Precise timing required | Yes |
706 | Recovery time on node failure | less than 50 ms - hitless |
707 | Performance management | Yes, mandatory |
708 | Redundancy | Yes |
709 | Packet loss | 0.1% |
710 +--------------------------------+----------------------------------+
712 Table 2: Inter-Trip protection network requirements
714 3.1.1.1.6. Current Differential Protection Scheme
716 Current differential protection is commonly used for line protection,
717 and is typical for protecting parallel circuits. At both ends of the
718 line the current is measured by differential relays, and both
719 relays will trip the circuit breaker if the current going into the
720 line does not equal the current going out of the line. This type of
721 protection scheme assumes some form of communication is present
722 between the relays at both ends of the line, to allow both relays to
723 compare measured current values. Line differential protection
724 schemes assume a very low telecommunications delay between both
725 relays, often as low as 5 ms. Moreover, as those systems are often
726 not time-synchronized, they also assume symmetric telecommunications
727 paths with constant delay, which allows comparing current
728 measurements taken at the exact same time.
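The comparison performed by the relays can be sketched as a simple differential check; the threshold is illustrative, and real relays use percentage-restraint characteristics rather than a fixed value:

```python
# Minimal sketch of the differential trip decision described above.
# Both relays must sample at the same instant, which is why the
# scheme assumes symmetric, constant-delay communication paths.
def differential_trip(i_in_amps: float, i_out_amps: float,
                      threshold_amps: float = 50.0) -> bool:
    return abs(i_in_amps - i_out_amps) > threshold_amps

print(differential_trip(400.0, 398.0))  # False: normal load current
print(differential_trip(400.0, 120.0))  # True: current leaving the line
```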
730 +----------------------------------+--------------------------------+
731 | Current Differential protection | Attribute |
732 | Requirement | |
733 +----------------------------------+--------------------------------+
734 | One way maximum delay | 5 ms |
735 | Asymmetric delay required | Yes |
736 | Maximum jitter | less than 250 us (750 us for |
737 | | legacy IED) |
738 | Topology | Point to point, point to |
739 | | multi-point |
740 | Bandwidth | 64 Kbps |
741 | Availability | 99.9999% |
742 | Precise timing required | Yes |
743 | Recovery time on node failure | less than 50 ms - hitless |
744 | Performance management | Yes, mandatory |
745 | Redundancy | Yes |
746 | Packet loss | 0.1% |
747 +----------------------------------+--------------------------------+
749 Table 3: Current Differential Protection metrics
751 3.1.1.1.7. Distance Protection Scheme
753 The distance (impedance relay) protection scheme is based on voltage
754 and current measurements. Its network metrics are similar to (but
755 not identical to) those of current differential protection.
757 +-------------------------------+-----------------------------------+
758 | Distance protection | Attribute |
759 | Requirement | |
760 +-------------------------------+-----------------------------------+
761 | One way maximum delay | 5 ms |
762 | Asymmetric delay required | No |
763 | Maximum jitter | Not critical |
764 | Topology | Point to point, point to multi- |
765 | | point |
766 | Bandwidth | 64 Kbps |
767 | Availability | 99.9999% |
768 | Precise timing required | Yes |
769 | Recovery time on node failure | less than 50 ms - hitless |
770 | Performance management | Yes, mandatory |
771 | Redundancy | Yes |
772 | Packet loss | 0.1% |
773 +-------------------------------+-----------------------------------+
775 Table 4: Distance Protection requirements
777 3.1.1.1.8. Inter-Substation Protection Signaling
779 This use case describes the exchange of Sampled Value and/or GOOSE
780 (Generic Object Oriented Substation Events) messages between
781 Intelligent Electronic Devices (IED) in two substations for
782 protection and tripping coordination. The two IEDs operate in a
783 master-slave mode.
785 The Current Transformer or Voltage Transformer (CT/VT) in one
786 substation sends the sampled analog voltage or current value to the
787 Merging Unit (MU) over hard wire. The MU sends the time-synchronized
788 61850-9-2 sampled values to the slave IED. The slave IED forwards
789 the information to the Master IED in the other substation. The
790 master IED makes the determination (for example based on sampled
791 value differentials) to send a trip command to the originating IED.
792 Once the slave IED/Relay receives the GOOSE trip for breaker
793 tripping, it opens the breaker. It then sends a confirmation message
794 back to the master. All data exchanges between IEDs are carried in
795 Sampled Value and/or GOOSE messages.
797 +----------------------------------+--------------------------------+
798 | Inter-Substation protection | Attribute |
799 | Requirement | |
800 +----------------------------------+--------------------------------+
801 | One way maximum delay | 5 ms |
802 | Asymmetric delay required | No |
803 | Maximum jitter | Not critical |
804 | Topology | Point to point, point to |
805 | | multi-point |
806 | Bandwidth | 64 Kbps |
807 | Availability | 99.9999% |
808 | Precise timing required | Yes |
809 | Recovery time on node failure | less than 50 ms - hitless |
810 | Performance management | Yes, mandatory |
811 | Redundancy | Yes |
812 | Packet loss | 1% |
813 +----------------------------------+--------------------------------+
815 Table 5: Inter-Substation Protection requirements
817 3.1.1.2. Intra-Substation Process Bus Communications
819 This use case describes the data flow from the CT/VT to the IEDs in
820 the substation via the MU. The CT/VTs in the substation send the
821 analog voltage or current values to the MU over hard wire. The MU
822 converts the analog values into digital format (typically time-
823 synchronized Sampled Values as specified by IEC 61850-9-2) and sends
824 them to the IEDs in the substation. The GPS master clock can send
825 1PPS or IRIG-B format to the MU through a serial port, or IEEE 1588
826 protocol via a network. Process bus communication using IEC 61850
827 simplifies connectivity within the substation, removing the
828 requirement for multiple serial connections and the slow serial bus
829 architectures that are typically used. It also increases
830 flexibility and speed through the use of multicast messaging
831 between multiple devices.
833 +----------------------------------+--------------------------------+
834 | Intra-Substation protection | Attribute |
835 | Requirement | |
836 +----------------------------------+--------------------------------+
837 | One way maximum delay | 5 ms |
838 | Asymmetric delay required | No |
839 | Maximum jitter | Not critical |
840 | Topology | Point to point, point to |
841 | | multi-point |
842 | Bandwidth | 64 Kbps |
843 | Availability | 99.9999% |
844 | Precise timing required | Yes |
845 | Recovery time on node failure | less than 50 ms - hitless |
846 | Performance management | Yes, mandatory |
847 | Redundancy | Yes - No |
848 | Packet loss | 0.1% |
849 +----------------------------------+--------------------------------+
851 Table 6: Intra-Substation Protection requirements
853 3.1.1.3. Wide Area Monitoring and Control Systems
855 The application of synchrophasor measurement data from Phasor
856 Measurement Units (PMU) to Wide Area Monitoring and Control Systems
857 promises to provide important new capabilities for improving system
858 stability. Access to PMU data enables more timely situational
859 awareness over larger portions of the grid than what has been
860 possible historically with normal SCADA (Supervisory Control and Data
861 Acquisition) data. Handling the volume and real-time nature of
862 synchrophasor data presents unique challenges for existing
863 application architectures. A Wide Area Management System (WAMS) makes
864 it possible for the condition of the bulk power system to be observed
865 and understood in real-time so that protective, preventative, or
866 corrective action can be taken. Because of the very high sampling
867 rate of measurements and the strict requirement for time
868 synchronization of the samples, WAMS has stringent telecommunications
869 requirements in an IP network that are captured in the following
870 table:
872 +----------------------+--------------------------------------------+
873 | WAMS Requirement | Attribute |
874 +----------------------+--------------------------------------------+
875 | One way maximum | 50 ms |
876 | delay | |
877 | Asymmetric delay | No |
878 | required | |
879 | Maximum jitter | Not critical |
880 | Topology | Point to point, point to multi-point, |
881 | | multi-point to multi-point |
882 | Bandwidth | 100 Kbps |
883 | Availability | 99.9999% |
884 | Precise timing | Yes |
885 | required | |
886 | Recovery time on | less than 50 ms - hitless |
887 | node failure | |
888 | Performance | Yes, mandatory |
889 | management | |
890 | Redundancy | Yes |
891 | Packet loss | 1% |
892 | Consecutive packet | At least 1 packet per application cycle |
893 | loss | must be received. |
894 +----------------------+--------------------------------------------+
896 Table 7: WAMS Special Communication Requirements
898 3.1.1.4. IEC 61850 WAN Engineering Guidelines Requirement
899 Classification
901 The IEC (International Electrotechnical Commission) has recently
902 published a Technical Report which offers guidelines on how to define
903 and deploy Wide Area Networks for the interconnection of electric
904 substations, generation plants and SCADA operation centers. IEC
905 61850-90-12 provides a classification of WAN communication
906 requirements into four classes. Table 8 summarizes these requirements:
908 +----------------+------------+------------+------------+-----------+
909 | WAN | Class WA | Class WB | Class WC | Class WD |
910 | Requirement | | | | |
911 +----------------+------------+------------+------------+-----------+
912 | Application | EHV (Extra | HV (High | MV (Medium | General |
913 | field | High | Voltage) | Voltage) | purpose |
914 | | Voltage) | | | |
915 | Latency | 5 ms | 10 ms | 100 ms | > 100 ms |
916 | Jitter | 10 us | 100 us | 1 ms | 10 ms |
917 | Latency | 100 us | 1 ms | 10 ms | 100 ms |
918 | asymmetry | | | | |
919 | Time accuracy | 1 us | 10 us | 100 us | 10 to 100 |
920 | | | | | ms |
921 | Bit error rate | 10^-7 to | 10^-5 to | 10^-3 | |
922 | | 10^-6 | 10^-4 | | |
923 | Unavailability | 10^-7 to | 10^-5 to | 10^-3 | |
924 | | 10^-6 | 10^-4 | | |
925 | Recovery delay | Zero | 50 ms | 5 s | 50 s |
926 | Cyber security | extremely | High | Medium | Medium |
927 | | high | | | |
928 +----------------+------------+------------+------------+-----------+
930 Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC
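A minimal helper using only the latency bounds from Table 8 might classify an application's WAN requirement as follows; the boundary handling is an assumption of this sketch, and the Technical Report remains the normative definition:

```python
# Pick the 61850-90-12 WAN class whose latency bound covers a given
# application requirement (latency bounds taken from Table 8).
def wan_class(required_latency_ms: float) -> str:
    if required_latency_ms <= 5:
        return "WA"   # EHV applications
    if required_latency_ms <= 10:
        return "WB"   # HV applications
    if required_latency_ms <= 100:
        return "WC"   # MV applications
    return "WD"       # general purpose

print(wan_class(4))    # WA: protection-grade latency
print(wan_class(80))   # WC: medium-voltage applications
```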
932 3.1.2. Generation Use Case
934 Energy generation systems are complex infrastructures that require
935 control of both the generated power and the generation
936 infrastructure.
938 3.1.2.1. Control of the Generated Power
940 The electrical power generation frequency must be maintained within a
941 very narrow band. Deviations from the acceptable frequency range are
942 detected and the required signals are sent to the power plants for
943 frequency regulation.
945 Automatic Generation Control (AGC) is a system for adjusting the
946 power output of generators at different power plants, in response to
947 changes in the load.
949 +---------------------------------------------------+---------------+
950 | FCAG (Frequency Control Automatic Generation) | Attribute |
951 | Requirement | |
952 +---------------------------------------------------+---------------+
953 | One way maximum delay | 500 ms |
954 | Asymmetric delay required | No |
955 | Maximum jitter | Not critical |
956 | Topology | Point to |
957 | | point |
958 | Bandwidth | 20 Kbps |
959 | Availability | 99.999% |
960 | Precise timing required | Yes |
961 | Recovery time on node failure | N/A |
962 | Performance management | Yes, |
963 | | mandatory |
964 | Redundancy | Yes |
965 | Packet loss | 1% |
966 +---------------------------------------------------+---------------+
968 Table 9: FCAG Communication Requirements
970 3.1.2.2. Control of the Generation Infrastructure
972 The control of the generation infrastructure combines requirements
973 from industrial automation systems and energy generation systems. In
974 this section we present the use case of the control of the generation
975 infrastructure of a wind turbine.
977 |
978 |
979 | +-----------------+
980 | | +----+ |
981 | | |WTRM| WGEN |
982 WROT x==|===| | |
983 | | +----+ WCNV|
984 | |WNAC |
985 | +---+---WYAW---+--+
986 | | |
987 | | | +----+
988 |WTRF | |WMET|
989 | | | |
990 Wind Turbine | +--+-+
991 Controller | |
992 WTUR | | |
993 WREP | | |
994 WSLG | | |
995 WALG | WTOW | |
997 Figure 1: Wind Turbine Control Network
999 Figure 1 presents the subsystems that operate a wind turbine. These
1000 subsystems include
1002 o WROT (Rotor Control)
1004 o WNAC (Nacelle Control) (nacelle: housing containing the generator)
1006 o WTRM (Transmission Control)
1008 o WGEN (Generator)
1010 o WYAW (Yaw Controller) (of the tower head)
1012 o WCNV (In-Turbine Power Converter)
o WTRF (Transformer)
o WTOW (Tower)
1014 o WMET (External Meteorological Station providing real time
1015 information to the controllers of the tower)
1017 Traffic characteristics relevant for the network planning and
1018 dimensioning process in a wind turbine scenario are listed below.
1019 The values in this section are based mainly on the relevant
1020 references [Ahm14] and [Spe09]. Each logical node (Figure 1) is a
1021 part of the metering network and produces analog measurements and
1022 status information which must comply with their respective data rate
1023 constraints.
1025 +-----------+--------+--------+-------------+---------+-------------+
1026 | Subsystem | Sensor | Analog | Data Rate | Status | Data rate |
1027 | | Count | Sample | (bytes/sec) | Sample | (bytes/sec) |
1028 | | | Count | | Count | |
1029 +-----------+--------+--------+-------------+---------+-------------+
1030 | WROT | 14 | 9 | 642 | 5 | 10 |
1031 | WTRM | 18 | 10 | 2828 | 8 | 16 |
1032 | WGEN | 14 | 12 | 73764 | 2 | 4 |
1033 | WCNV | 14 | 12 | 74060 | 2 | 4 |
1034 | WTRF | 12 | 5 | 73740 | 2 | 4 |
1035 | WNAC | 12 | 9 | 112 | 3 | 6 |
1036 | WYAW | 7 | 8 | 220 | 4 | 8 |
1037 | WTOW | 4 | 1 | 8 | 3 | 6 |
1038 | WMET | 7 | 7 | 228 | - | - |
1039 +-----------+--------+--------+-------------+---------+-------------+
1041 Table 10: Wind Turbine Data Rate Constraints
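Summing the Table 10 values gives the aggregate metering load that a turbine's intra-domain network must carry:

```python
# Aggregate the per-subsystem data rates from Table 10 (values in
# bytes/sec, copied directly from the table; WMET reports no status
# samples).
analog_bps = {"WROT": 642, "WTRM": 2828, "WGEN": 73764, "WCNV": 74060,
              "WTRF": 73740, "WNAC": 112, "WYAW": 220, "WTOW": 8,
              "WMET": 228}
status_bps = {"WROT": 10, "WTRM": 16, "WGEN": 4, "WCNV": 4, "WTRF": 4,
              "WNAC": 6, "WYAW": 8, "WTOW": 6}

total_bytes = sum(analog_bps.values()) + sum(status_bps.values())
print(total_bytes)            # 225660 bytes/sec
print(total_bytes * 8 / 1e6)  # ~1.8 Mbit/s aggregate metering load
```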
1043 Quality of Service (QoS) constraints for different services are
1044 presented in Table 11. These constraints are defined by the IEEE
1045 1646 [IEEE1646] and IEC 61400 [IEC61400] standards.
1047 +---------------------+---------+-------------+---------------------+
1048 | Service | Latency | Reliability | Packet Loss Rate |
1049 +---------------------+---------+-------------+---------------------+
1050 | Analogue measure | 16 ms | 99.99% | < 10^-6 |
1051 | Status information | 16 ms | 99.99% | < 10^-6 |
1052 | Protection traffic | 4 ms | 100.00% | < 10^-9 |
1053 | Reporting and | 1 s | 99.99% | < 10^-6 |
1054 | logging | | | |
1055 | Video surveillance | 1 s | 99.00% | No specific |
1056 | | | | requirement |
1057 | Internet connection | 60 min | 99.00% | No specific |
1058 | | | | requirement |
1059 | Control traffic | 16 ms | 100.00% | < 10^-9 |
1060 | Data polling | 16 ms | 99.99% | < 10^-6 |
1061 +---------------------+---------+-------------+---------------------+
1063 Table 11: Wind Turbine Reliability and Latency Constraints
1065 3.1.2.2.1. Intra-Domain Network Considerations
1067 A wind turbine is composed of a large set of subsystems including
1068 sensors and actuators which require time-critical operation. The
1069 reliability and latency constraints of these different subsystems are
1070 shown in Table 11. These subsystems are connected to an intra-domain
1071 network which is used to monitor and control the operation of the
1072 turbine and connect it to the SCADA subsystems. The different
1073 components are interconnected using fiber optics, industrial buses,
1074 industrial Ethernet, EtherCAT, or a combination of them. Industrial
1075 signaling and control protocols such as Modbus, Profibus, Profinet
1076 and EtherCAT are used directly on top of the Layer 2 transport or
1077 encapsulated over TCP/IP.
1079 The data collected from the sensors and condition monitoring systems
1080 is multiplexed onto fiber cables for transmission to the base of the
1081 tower, and to remote control centers. The turbine controller
1082 continuously monitors the condition of the wind turbine and collects
1083 statistics on its operation. This controller also manages a large
1084 number of switches, hydraulic pumps, valves, and motors within the
1085 wind turbine.
1087 There is usually a controller both at the bottom of the tower and in
1088 the nacelle. The communication between these two controllers usually
1089 takes place using fiber optics instead of copper links. Sometimes, a
1090 third controller is installed in the hub of the rotor and manages the
1091 pitch of the blades. That unit usually communicates with the nacelle
1092 unit using serial communications.
1094 3.1.2.2.2. Inter-Domain Network Considerations
1096 A remote control center belonging to a grid operator regulates the
1097 power output, enables remote actuation, and monitors the health of
1098 one or more wind parks in tandem. It connects to the local control
1099 center in a wind park over the Internet (Figure 2) via firewalls at
1100 both ends. The AS path between the remote control center and the
1101 wind park typically involves several ISPs at different tiers. For
1102 example, a remote control center in Denmark can regulate a wind park
1103 in Greece over the normal public AS path between the two locations.
1105 The remote control center is part of the SCADA system, setting the
1106 desired power output to the wind park and reading back the result
1107 once the new power output level has been set. Traffic between the
1108 remote control center and the wind park typically consists of
1109 protocols like IEC 60870-5-104 [IEC-60870-5-104], OPC XML-DA
1110 [OPCXML], Modbus [MODBUS], and SNMP [RFC3411]. Currently, traffic
1111 flows between the wind farm and the remote control center are best
1112 effort. QoS requirements are not strict, so no SLAs or service
1113 provisioning mechanisms (e.g., VPN) are employed. In case of events
1114 like equipment failure, tolerance for alarm delay is on the order of
1115 minutes, due to redundant systems already in place.
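A Modbus read request of the kind polled over these best-effort flows can be encoded with nothing more than the fixed MBAP header layout; the register address, count, and unit identifier below are illustrative:

```python
# Illustrative encoding of a Modbus TCP "read holding registers"
# request (function code 3). The MBAP length field counts the unit
# identifier plus the PDU bytes that follow it.
import struct

def modbus_read_request(transaction_id: int, unit_id: int,
                        start_register: int, count: int) -> bytes:
    pdu = struct.pack(">BHH", 0x03, start_register, count)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_request(transaction_id=1, unit_id=17,
                            start_register=0x006B, count=3)
print(frame.hex())  # 0001000000061103006b0003
```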
1117 +--------------+
1118 | |
1119 | |
1120 | Wind Park #1 +----+
1121 | | | XXXXXX
1122 | | | X XXXXXXXX +----------------+
1123 +--------------+ | XXXX X XXXXX | |
1124 +---+ XXX | Remote Control |
1125 XXX Internet +----+ Center |
1126 +----+X XXX | |
1127 +--------------+ | XXXXXXX XX | |
1128 | | | XX XXXXXXX +----------------+
1129 | | | XXXXX
1130 | Wind Park #2 +----+
1131 | |
1132 | |
1133 +--------------+
1135 Figure 2: Wind Turbine Control via Internet
1137 We expect future use cases which require bounded latency, bounded
1138 jitter and extraordinarily low packet loss for inter-domain traffic
1139 flows due to the softwarization and virtualization of core wind farm
1140 equipment (e.g. switches, firewalls and SCADA server components).
1141 These factors will create opportunities for service providers to
1142 install new services and dynamically manage them from remote
1143 locations. For example, to enable fail-over of a local SCADA server,
1144 a SCADA server in another wind farm site (under the administrative
1145 control of the same operator) could be utilized temporarily
1146 (Figure 3). In that case local traffic would be forwarded to the
1147 remote SCADA server and existing intra-domain QoS and timing
1148 parameters would have to be met for inter-domain traffic flows.
1150 +--------------+
1151 | |
1152 | |
1153 | Wind Park #1 +----+
1154 | | | XXXXXX
1155 | | | X XXXXXXXX +----------------+
1156 +--------------+ | XXXX XXXXX | |
1157 +---+ Operator XXX | Remote Control |
1158 XXX Administered +----+ Center |
1159 +----+X WAN XXX | |
1160 +--------------+ | XXXXXXX XX | |
1161 | | | XX XXXXXXX +----------------+
1162 | | | XXXXX
1163 | Wind Park #2 +----+
1164 | |
1165 | |
1166 +--------------+
1168 Figure 3: Wind Turbine Control via Operator Administered WAN
1170 3.1.3. Distribution Use Case
1172 3.1.3.1. Fault Location Isolation and Service Restoration (FLISR)
1174 Fault Location, Isolation, and Service Restoration (FLISR) refers to
1175 the ability to automatically locate the fault, isolate the fault, and
1176 restore service in the distribution network. This will likely be the
1177 first widespread application of distributed intelligence in the grid.
1179 Static power switch status (open/closed) in the network dictates the
1180 power flow to secondary substations. Reconfiguring the network in
1181 the event of a fault is typically done manually on site to energize/
1182 de-energize alternate paths. Automating the operation of substation
1183 switchgear allows the flow of power to be altered automatically under
1184 fault conditions.
1186 FLISR can be managed centrally from a Distribution Management System
1187 (DMS) or executed locally through distributed control via intelligent
1188 switches and fault sensors.
1190 +----------------------+--------------------------------------------+
1191 | FLISR Requirement | Attribute |
1192 +----------------------+--------------------------------------------+
1193 | One way maximum | 80 ms |
1194 | delay | |
1195 | Asymmetric delay | No |
1196 | required | |
1197 | Maximum jitter | 40 ms |
1198 | Topology | Point to point, point to multi-point, |
1199 | | multi-point to multi-point |
1200 | Bandwidth | 64 Kbps |
1201 | Availability | 99.9999% |
1202 | Precise timing | Yes |
1203 | required | |
1204 | Recovery time on | Depends on customer impact |
1205 | node failure | |
1206 | Performance | Yes, mandatory |
1207 | management | |
1208 | Redundancy | Yes |
1209 | Packet loss | 0.1% |
1210 +----------------------+--------------------------------------------+
1212 Table 12: FLISR Communication Requirements
1214 3.2. Electrical Utilities Today
1216 Many utilities still rely on complex environments formed of multiple
1217 application-specific proprietary networks, including TDM networks.
1219 In this kind of environment there is no mixing of OT and IT
1220 applications on the same network, and information is siloed between
1221 operational areas.
1223 Specific calibration of the full chain is required, which is costly.
1225 This kind of environment prevents utility operations from realizing
1226 operational efficiency benefits and from gaining visibility and
1227 functional integration of operational information across grid
1228 applications and data networks.
1230 In addition, there are many security-related issues as discussed in
1231 the following section.
1233 3.2.1. Security Current Practices and Limitations
1235 Grid monitoring and control devices are already targets for cyber
1236 attacks, and legacy telecommunications protocols have many intrinsic
1237 network-related vulnerabilities. For example, DNP3, Modbus,
1238 PROFIBUS/PROFINET, and other protocols are designed around a common
1239 paradigm of request and respond. Each protocol is designed for a
1240 master device such as an HMI (Human Machine Interface) system to send
1241 commands to subordinate slave devices to retrieve data (reading
1242 inputs) or control (writing to outputs). Because many of these
1243 protocols lack authentication, encryption, or other basic security
1244 measures, they are prone to network-based attacks, allowing a
1245 malicious actor or attacker to utilize the request-and-respond system
1246 as a mechanism for command-and-control like functionality. Specific
1247 security concerns common to most industrial control protocols,
1248 including utility telecommunication protocols, include the following:
1250 o Network or transport errors (e.g. malformed packets or excessive
1251 latency) can cause protocol failure.
1253 o Protocol commands may be available that are capable of forcing
1254 slave devices into inoperable states, including powering-off
1255 devices, forcing them into a listen-only state, or disabling
1256 alarming.
1258 o Protocol commands may be available that are capable of restarting
1259 communications and otherwise interrupting processes.
1261 o Protocol commands may be available that are capable of clearing,
1262 erasing, or resetting diagnostic information such as counters and
1263 diagnostic registers.
1265 o Protocol commands may be available that are capable of requesting
1266 sensitive information about the controllers, their configurations,
1267 or other need-to-know information.
1269 o Most protocols are application layer protocols transported over
1270 TCP; therefore it is easy to transport commands over non-standard
1271 ports or inject commands into authorized traffic flows.
1273 o Protocol commands may be available that are capable of
1274 broadcasting messages to many devices at once (i.e. a potential
1275 DoS).
1277 o Protocol commands may be available to query the device network to
1278 obtain defined points and their values (i.e. a configuration
1279 scan).
1281 o Protocol commands may be available that will list all available
1282 function codes (i.e. a function scan).
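The request-and-respond paradigm described above can be made concrete with a minimal sketch of a Modbus TCP "read holding registers" request. The frame layout follows the public Modbus specification; the point to note is that no byte of the frame carries authentication or integrity information, which underlies several of the concerns listed.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a minimal Modbus TCP request (function code 0x03).

    The 7-byte MBAP header carries only a transaction id, a protocol id,
    a length, and a unit id; neither the header nor the PDU contains any
    authentication or integrity field.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)   # function, address, quantity
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(1, 1, 0, 10)
# 7-byte MBAP header + 5-byte PDU = 12 bytes, none of them security-related
assert len(frame) == 12
```

Any host that can reach the TCP port can therefore construct such a frame, which is why altering or injecting protocol traffic is straightforward on these networks.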
1284 These inherent vulnerabilities, along with increasing connectivity
1285 between IT and OT networks, make network-based attacks very feasible.
1287 Simple injection of malicious protocol commands provides control over
1288 the target process. Altering legitimate protocol traffic can also
1289 alter information about a process and disrupt the legitimate controls
1290 that are in place over that process. A man-in-the-middle attack
1291 could provide both control over a process and misrepresentation of
1292 data back to operator consoles.
1294 3.3. Electrical Utilities Future
1296 The business and technology trends that are sweeping the utility
1297 industry will drastically transform the utility business from the way
1298 it has been for many decades. At the core of many of these changes
1299 is a drive to modernize the electrical grid with an integrated
1300 telecommunications infrastructure. However, interoperability
1301 concerns, legacy networks, disparate tools, and stringent security
1302 requirements all add complexity to the grid transformation. Given
1303 the range and diversity of the requirements that should be addressed
1304 by the next generation telecommunications infrastructure, utilities
1305 need to adopt a holistic architectural approach to integrate the
1306 electrical grid with digital telecommunications across the entire
1307 power delivery chain.
1309 The key to modernizing grid telecommunications is to provide a
1310 common, adaptable, multi-service network infrastructure for the
1311 entire utility organization. Such a network serves as the platform
1312 for current capabilities while enabling future expansion of the
1313 network to accommodate new applications and services.
1315 To meet this diverse set of requirements, both today and in the
1316 future, the next generation utility telecommunications network will
1317 be based on an open-standards-based IP architecture. An end-to-end IP
1318 architecture takes advantage of nearly three decades of IP technology
1319 development, facilitating interoperability and device management
1320 across disparate networks and devices, as has already been
1321 demonstrated in many mission-critical and highly secure networks.
1323 IPv6 is seen as a future telecommunications technology for the Smart
1324 Grid; the IEC (International Electrotechnical Commission) and
1325 different National Committees have mandated a specific ad hoc group
1326 (AHG8) to define the migration strategy to IPv6 for all the IEC TC57
1327 power automation standards. The AHG8 has finalized the work on the
1328 migration strategy and issued the following Technical Report:
1329 IEC TR 62357-200:2015, "Guidelines for migration from Internet
1330 Protocol version 4 (IPv4) to Internet Protocol version 6 (IPv6)".
1332 We expect cloud-based SCADA systems to control and monitor the
1333 critical and non-critical subsystems of generation systems, for
1334 example wind farms.
1336 3.3.1. Migration to Packet-Switched Network
1338 Throughout the world, utilities are increasingly planning for a
1339 future based on smart grid applications requiring advanced
1340 telecommunications systems. Many of these applications utilize
1341 packet connectivity for communicating information and control signals
1342 across the utility's Wide Area Network (WAN), made possible by
1343 technologies such as multiprotocol label switching (MPLS). The data
1344 that traverses the utility WAN includes:
1346 o Grid monitoring, control, and protection data
1348 o Non-control grid data (e.g. asset data for condition-based
1349 monitoring)
1351 o Physical safety and security data (e.g. voice and video)
1353 o Remote worker access to corporate applications (voice, maps,
1354 schematics, etc.)
1356 o Field area network backhaul for smart metering, and distribution
1357 grid management
1359 o Enterprise traffic (email, collaboration tools, business
1360 applications)
1362 WANs support this wide variety of traffic to and from substations,
1363 the transmission and distribution grid, generation sites, between
1364 control centers, and between work locations and data centers. To
1365 maintain this rapidly expanding set of applications, many utilities
1366 are taking steps to evolve present time-division multiplexing (TDM)
1367 based and frame relay infrastructures to packet systems. Packet-
1368 based networks are designed to provide greater functionalities and
1369 higher levels of service for applications, while continuing to
1370 deliver reliability and deterministic (real-time) traffic support.
1372 3.3.2. Telecommunications Trends
1374 The following general telecommunications topics are in addition to
1375 the use cases that have been addressed so far. They include both
1376 current and future telecommunications-related topics that should be
1377 factored into the network architecture and design.
1379 3.3.2.1. General Telecommunications Requirements
1381 o IP Connectivity everywhere
1383 o Monitoring services everywhere and from different remote centers
1384 o Move services to a virtual data center
1386 o Unify access to applications / information from the corporate
1387 network
1389 o Unify services
1391 o Unified Communications Solutions
1393 o Mix of fiber and microwave technologies - obsolescence of SONET/
1394 SDH or TDM
1396 o Standardize grid telecommunications protocols on open standards
1397 to ensure interoperability
1399 o Reliable Telecommunications for Transmission and Distribution
1400 Substations
1402 o IEEE 1588 time synchronization Client / Server Capabilities
1404 o Integration of Multicast Design
1406 o QoS Requirements Mapping
1408 o Enable Future Network Expansion
1410 o Substation Network Resilience
1412 o Fast Convergence Design
1414 o Scalable Headend Design
1416 o Define Service Level Agreements (SLA) and Enable SLA Monitoring
1418 o Integration of 3G/4G Technologies and future technologies
1420 o Ethernet Connectivity for Station Bus Architecture
1422 o Ethernet Connectivity for Process Bus Architecture
1424 o Protection, teleprotection and PMU (Phasor Measurement Unit) on IP
1426 3.3.2.2. Specific Network topologies of Smart Grid Applications
1428 Utilities often have very large private telecommunications networks
1429 covering an entire territory or country. The main purpose of the
1430 network, until now, has been to support transmission network
1431 monitoring, control, and automation, remote control of generation
1432 sites, and providing FCAPS (Fault, Configuration, Accounting,
1433 Performance, Security) services from centralized network operation
1434 centers.
1436 Going forward, one network will support operation and maintenance of
1437 electrical networks (generation, transmission, and distribution),
1438 voice and data services for tens of thousands of employees and for
1439 exchange with neighboring interconnections, and administrative
1440 services. To meet those requirements, a utility may deploy several
1441 physical networks leveraging different technologies across the
1442 country: an optical network and a microwave network for instance.
1443 Each protection and automation system between two points has two
1444 telecommunications circuits, one on each network. Path diversity
1445 between two substations is key. Regardless of the event type
1446 (hurricane, ice storm, etc.), one path shall stay available so the
1447 system can still operate.
1449 In the optical network, signals are transmitted over tens of
1450 thousands of circuits using fiber optic links, microwave and
1451 telephone cables. This network is the nervous system of the
1452 utility's power transmission operations. The optical network
1453 represents tens of thousands of km of cable deployed along the power
1454 lines, with individual runs as long as 280 km.
1456 3.3.2.3. Precision Time Protocol
1458 Some utilities do not use GPS clocks in generation substations. One
1459 of the main reasons is that some of the generation plants are 30 to
1460 50 meters underground, where the GPS signal can be weak and
1461 unreliable. Instead, atomic clocks are used and are synchronized
1462 with each other; Rubidium clocks provide the clock signal and
1463 1ms timestamps for IRIG-B.
1465 Some companies plan to transition to the Precision Time Protocol
1466 (PTP, [IEEE1588]), distributing the synchronization signal over the
1467 IP/MPLS network. PTP provides a mechanism for synchronizing the
1468 clocks of participating nodes to a high degree of accuracy and
1469 precision.
1471 PTP operates based on the following assumptions:
1473 It is assumed that the network eliminates cyclic forwarding of PTP
1474 messages within each communication path (e.g. by using a spanning
1475 tree protocol).
1477 PTP is tolerant of an occasional missed message, duplicated
1478 message, or message that arrived out of order. However, PTP
1479 assumes that such impairments are relatively rare.
1481 PTP was designed assuming a multicast communication model; however,
1482 PTP also supports a unicast communication model as long as the
1483 behavior of the protocol is preserved.
1485 Like all message-based time transfer protocols, PTP time accuracy
1486 is degraded by delay asymmetry in the paths taken by event
1487 messages. Asymmetry is not detectable by PTP; however, if such
1488 delays are known a priori, PTP can correct for asymmetry.
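The a priori asymmetry correction mentioned above can be illustrated with the standard PTP two-way timestamp exchange. This is an illustrative calculation, not an implementation of [IEEE1588]; t1..t4 are the usual Sync / Delay_Req timestamps, and delay_asymmetry (the master-to-slave delay minus the mean path delay) is assumed to be known from prior calibration.

```python
def ptp_offset_and_delay(t1, t2, t3, t4, delay_asymmetry=0.0):
    """Two-way PTP offset/delay computation.

    t1: master sends Sync       t2: slave receives Sync
    t3: slave sends Delay_Req   t4: master receives Delay_Req

    Without the correction term, any difference between the forward and
    reverse path delays appears directly as a clock offset error.
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    offset = ((t2 - t1) - (t4 - t3)) / 2.0 - delay_asymmetry
    return offset, mean_path_delay

# Example: true offset 5, forward delay 12, reverse delay 8
# (so the known delay_asymmetry is 12 - 10 = 2)
offset, delay = ptp_offset_and_delay(0.0, 17.0, 20.0, 23.0, delay_asymmetry=2.0)
assert offset == 5.0 and delay == 10.0
```

Without the `delay_asymmetry` argument the same exchange would report an offset of 7.0, showing how uncorrected path asymmetry degrades accuracy.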
1490 IEC 61850 defines the use of IEC/IEEE 61850-9-3:2016. The title is:
1491 Precision time protocol profile for power utility automation. It is
1492 based on Annex B/IEC 62439 which offers the support of redundant
1493 attachment of clocks to Parallel Redundancy Protocol (PRP) and High-
1494 availability Seamless Redundancy (HSR) networks.
1496 3.3.3. Security Trends in Utility Networks
1498 Although advanced telecommunications networks can assist in
1499 transforming the energy industry by playing a critical role in
1500 maintaining high levels of reliability, performance, and
1501 manageability, they also introduce the need for an integrated
1502 security infrastructure. Many of the technologies being deployed to
1503 support smart grid projects such as smart meters and sensors can
1504 increase the vulnerability of the grid to attack. Top security
1505 concerns for utilities migrating to an intelligent smart grid
1506 telecommunications platform center on the following trends:
1508 o Integration of distributed energy resources
1510 o Proliferation of digital devices to enable management, automation,
1511 protection, and control
1513 o Regulatory mandates to comply with standards for critical
1514 infrastructure protection
1516 o Migration to new systems for outage management, distribution
1517 automation, condition-based maintenance, load forecasting, and
1518 smart metering
1520 o Demand for new levels of customer service and energy management
1522 This development of a diverse set of networks to support the
1523 integration of microgrids, open-access energy competition, and the
1524 use of network-controlled devices is driving the need for a converged
1525 security infrastructure for all participants in the smart grid,
1526 including utilities, energy service providers, large commercial and
1527 industrial, as well as residential customers. Securing the assets of
1528 electric power delivery systems (from the control center to the
1529 substation, to the feeders and down to customer meters) requires an
1530 end-to-end security infrastructure that protects the myriad of
1531 telecommunications assets used to operate, monitor, and control power
1532 flow and measurement.
1534 "Cyber security" refers to all the security issues in automation and
1535 telecommunications that affect any functions related to the operation
1536 of the electric power systems. Specifically, it involves the
1537 concepts of:
1539 o Integrity: data cannot be altered undetectably
1541 o Authenticity: the telecommunications parties involved must be
1542 validated as genuine
1544 o Authorization: only requests and commands from the authorized
1545 users can be accepted by the system
1547 o Confidentiality: data must not be accessible to any
1548 unauthenticated users
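The first two of these concepts, integrity and authenticity, can be illustrated with a keyed message authentication code. This is a generic sketch using Python's standard library, not a mechanism specified by any of the protocols discussed here; the key and command strings are invented for illustration.

```python
import hmac
import hashlib

def tag_command(shared_key: bytes, command: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can verify integrity
    (no undetected alteration) and data origin (a genuine party)."""
    return command + hmac.new(shared_key, command, hashlib.sha256).digest()

def verify_command(shared_key: bytes, message: bytes) -> bytes:
    """Recompute the tag and reject the message on any mismatch."""
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(shared_key, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity/authenticity check failed")
    return command

key = b"example-shared-key"        # hypothetical pre-shared key
msg = tag_command(key, b"OPEN BREAKER 12")
assert verify_command(key, msg) == b"OPEN BREAKER 12"
```

A single altered byte anywhere in the tagged message causes verification to fail, which is exactly the property the legacy request-and-respond protocols of Section 3.2.1 lack.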
1550 When designing and deploying new smart grid devices and
1551 telecommunications systems, it is imperative to understand the
1552 various impacts of these new components under a variety of attack
1553 situations on the power grid. Consequences of a cyber attack on the
1554 grid telecommunications network can be catastrophic. This is why
1555 security for the smart grid is not just an ad hoc feature or product;
1556 it is a complete framework integrating both physical and cyber
1557 security requirements and covering the entire smart grid network
1558 from generation to distribution. Security has therefore become one
1559 of the main foundations of the utility telecom network architecture
1560 and must be considered at every layer with a defense-in-depth
1561 approach. Migrating to IP-based protocols is key to addressing these
1562 challenges for two reasons:
1564 o IP enables a rich set of features and capabilities to enhance the
1565 security posture
1567 o IP is based on open standards, which allows interoperability
1568 between different vendors and products, driving down the costs
1569 associated with implementing security solutions in OT networks.
1571 Securing OT (Operational Technology) telecommunications over packet-
1572 switched IP networks follows the same principles that are foundational
1573 for securing the IT infrastructure, i.e., consideration must be given
1574 to enforcing electronic access control for both person-to-machine and
1575 machine-to-machine communications, and providing the appropriate
1576 levels of data privacy, device and platform integrity, and threat
1577 detection and mitigation.
1579 3.4. Electrical Utilities Asks
1581 o Mixed L2 and L3 topologies
1583 o Deterministic behavior
1585 o Bounded latency and jitter
1587 o Tight feedback intervals
1589 o High availability, low recovery time
1591 o Redundancy, low packet loss
1593 o Precise timing
1595 o Centralized computing of deterministic paths
1597 o Distributed configuration may also be useful
1599 4. Building Automation Systems
1601 4.1. Use Case Description
1603 A Building Automation System (BAS) manages equipment and sensors in a
1604 building for improving residents' comfort, reducing energy
1605 consumption, and responding to failures and emergencies. For
1606 example, the BAS measures the temperature of a room using sensors and
1607 then controls the HVAC (heating, ventilating, and air conditioning)
1608 to maintain a set temperature and minimize energy consumption.
1610 A BAS primarily performs the following functions:
1612 o Periodically measures states of devices, for example humidity and
1613 illuminance of rooms, open/close state of doors, fan speed, etc.
1615 o Stores the measured data.
1617 o Provides the measured data to BAS systems and operators.
1619 o Generates alarms for abnormal state of devices.
1621 o Controls devices (e.g. turn off room lights at 10:00 PM).
1623 4.2. Building Automation Systems Today
1625 4.2.1. BAS Architecture
1627 A typical BAS architecture of today is shown in Figure 4.
1629 +----------------------------+
1630 | |
1631 | BMS HMI |
1632 | | | |
1633 | +----------------------+ |
1634 | | Management Network | |
1635 | +----------------------+ |
1636 | | | |
1637 | LC LC |
1638 | | | |
1639 | +----------------------+ |
1640 | | Field Network | |
1641 | +----------------------+ |
1642 | | | | | |
1643 | Dev Dev Dev Dev |
1644 | |
1645 +----------------------------+
1647 BMS := Building Management Server
1648 HMI := Human Machine Interface
1649 LC := Local Controller
1651 Figure 4: BAS architecture
1653 There are typically two layers of network in a BAS. The upper one is
1654 called the Management Network and the lower one is called the Field
1655 Network. In management networks an IP-based communication protocol
1656 is used, while in field networks non-IP based communication protocols
1657 ("field protocols") are mainly used. Field networks have specific
1658 timing requirements, whereas management networks can be best-effort.
1660 A Human Machine Interface (HMI) is typically a desktop PC used by
1661 operators to monitor and display device states, send device control
1662 commands to Local Controllers (LCs), and configure building schedules
1663 (for example "turn off all room lights in the building at 10:00 PM").
1665 A Building Management Server (BMS) performs the following operations:
1667 o Collect and store device states from LCs at regular intervals.
1669 o Send control values to LCs according to a building schedule.
1671 o Send an alarm signal to operators if it detects abnormal devices
1672 states.
1674 The BMS and HMI communicate with LCs via IP-based "management
1675 protocols" (see standards [bacnetip], [knx]).
1677 An LC is typically a Programmable Logic Controller (PLC) which is
1678 connected to several tens or hundreds of devices using "field
1679 protocols". An LC performs the following kinds of operations:
1681 o Measure device states and provide the information to BMS or HMI.
1683 o Send control values to devices, unilaterally or as part of a
1684 feedback control loop.
1686 There are many field protocols used today; some are standards-based
1687 and others are proprietary (see standards [lontalk], [modbus],
1688 [profibus] and [flnet]). The result is that BASs have multiple MAC/
1689 PHY modules and interfaces. This makes BASs more expensive and
1690 slower to develop, and can result in "vendor lock-in" with multiple types of
1691 management applications.
1693 4.2.2. BAS Deployment Model
1695 An example BAS for medium or large buildings is shown in Figure 5.
1696 The physical layout spans multiple floors, and there is a monitoring
1697 room where the BAS management entities are located. Each floor will
1698 have one or more LCs depending upon the number of devices connected
1699 to the field network.
1701 +--------------------------------------------------+
1702 | Floor 3 |
1703 | +----LC~~~~+~~~~~+~~~~~+ |
1704 | | | | | |
1705 | | Dev Dev Dev |
1706 | | |
1707 |--- | ------------------------------------------|
1708 | | Floor 2 |
1709 | +----LC~~~~+~~~~~+~~~~~+ Field Network |
1710 | | | | | |
1711 | | Dev Dev Dev |
1712 | | |
1713 |--- | ------------------------------------------|
1714 | | Floor 1 |
1715 | +----LC~~~~+~~~~~+~~~~~+ +-----------------|
1716 | | | | | | Monitoring Room |
1717 | | Dev Dev Dev | |
1718 | | | BMS HMI |
1719 | | Management Network | | | |
1720 | +--------------------------------+-----+ |
1721 | | |
1722 +--------------------------------------------------+
1724 Figure 5: BAS Deployment model for Medium/Large Buildings
1726 Each LC is connected to the monitoring room via the Management
1727 network, and the management functions are performed within the
1728 building. In most cases, fast Ethernet (e.g. 100BASE-T) is used for
1729 the management network. Since the management network is non-
1730 realtime, use of Ethernet without quality of service is sufficient
1731 for today's deployment.
1733 In the field network a variety of physical interfaces such as RS232C
1734 and RS485 are used, which have specific timing requirements. Thus if
1735 a field network is to be replaced with an Ethernet or wireless
1736 network, such networks must support time-critical deterministic
1737 flows.
1739 In Figure 6, another deployment model is presented in which the
1740 management system is hosted remotely. This is becoming popular for
1741 small office and residential buildings in which a standalone
1742 monitoring system is not cost-effective.
1744 +---------------+
1745 | Remote Center |
1746 | |
1747 | BMS HMI |
1748 +------------------------------------+ | | | |
1749 | Floor 2 | | +---+---+ |
1750 | +----LC~~~~+~~~~~+ Field Network| | | |
1751 | | | | | | Router |
1752 | | Dev Dev | +-------|-------+
1753 | | | |
1754 |--- | ------------------------------| |
1755 | | Floor 1 | |
1756 | +----LC~~~~+~~~~~+ | |
1757 | | | | | |
1758 | | Dev Dev | |
1759 | | | |
1760 | | Management Network | WAN |
1761 | +------------------------Router-------------+
1762 | |
1763 +------------------------------------+
1765 Figure 6: Deployment model for Small Buildings
1767 Some interoperability is possible today in the Management Network,
1768 but not in today's field networks due to their non-IP-based design.
1770 4.2.3. Use Cases for Field Networks
1772 Below are use cases for Environmental Monitoring, Fire Detection, and
1773 Feedback Control, and their implications for field network
1774 performance.
1776 4.2.3.1. Environmental Monitoring
1778 The BMS polls each LC at a maximum measurement interval of 100ms (for
1779 example to draw a historical chart of 1 second granularity with a 10x
1780 sampling interval) and then performs the operations as specified by
1781 the operator. Each LC needs to measure each of its several hundred
1782 sensors once per measurement interval. Latency is not critical in
1783 this scenario as long as all sensor values are collected within the
1784 measurement interval. Availability is expected to be 99.999%.
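The figures above can be sanity-checked with simple arithmetic. The sensor count of 300 below is a hypothetical value within the "several hundred" range stated, not a number from this document.

```python
def allowed_downtime_minutes_per_year(availability_pct):
    """Translate an availability percentage into yearly downtime (minutes)."""
    seconds_per_year = 365 * 24 * 3600
    return seconds_per_year * (1 - availability_pct / 100.0) / 60.0

def per_sensor_budget_ms(interval_ms, sensor_count):
    """Time available per sensor if reads within one interval are serialized."""
    return interval_ms / sensor_count

# 99.999% availability permits only about 5.3 minutes of downtime per year
downtime = allowed_downtime_minutes_per_year(99.999)
# A 100 ms measurement interval across 300 sensors leaves ~0.33 ms per
# serialized read, so an LC cannot afford per-sensor retries or blocking I/O
budget = per_sensor_budget_ms(100, 300)
```

This is why latency per se is less critical here than the guarantee that the whole sweep completes inside the interval.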
1786 4.2.3.2. Fire Detection
1788 On detection of a fire, the BMS must stop the HVAC, close the fire
1789 shutters, turn on the fire sprinklers, send an alarm, etc. There are
1790 typically ~10s of sensors per LC that the BMS needs to manage. In
1791 this scenario the measurement interval is 10-50ms, the communication
1792 delay is 10ms, and the availability must be 99.9999%.
1794 4.2.3.3. Feedback Control
1796 BAS systems utilize feedback control in various ways; the most time-
1797 critical is control of DC motors, which require a short feedback
1798 interval (1-5ms) with low communication delay (10ms) and jitter
1799 (1ms). The feedback interval depends on the characteristics of the
1800 device and a target quality of control value. There are typically
1801 ~10s of such devices per LC.
1803 Communication delay is expected to be less than 10ms and jitter less
1804 than 1ms, while the availability must be 99.9999%.
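A minimal check of a measured flow against the delay and jitter bounds above could look like the following. Peak-to-peak variation is used here as a simple jitter measure; that choice is an assumption for illustration, not a definition from this document.

```python
def check_flow(delays_ms, max_delay_ms=10.0, max_jitter_ms=1.0):
    """Return True if every per-packet one-way delay is within the
    delay bound and the peak-to-peak variation is within the jitter
    bound (the feedback-control requirements stated above)."""
    jitter = max(delays_ms) - min(delays_ms)
    return max(delays_ms) <= max_delay_ms and jitter <= max_jitter_ms

assert check_flow([4.2, 4.5, 4.9, 4.4])    # within both bounds
assert not check_flow([4.2, 5.9, 4.4])     # jitter 1.7 ms exceeds 1 ms
```

Note that the second flow fails even though every individual delay is well under 10ms: for motor control it is the variation, not the absolute delay, that breaks the control loop first.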
1806 4.2.4. Security Considerations
1808 When BAS field networks were developed it was assumed that the field
1809 networks would always be physically isolated from external networks
1810 and therefore security was not a concern. In today's world many BASs
1811 are managed remotely and are thus connected to shared IP networks and
1812 so security is definitely a concern, yet security features are not
1813 available in the majority of BAS field network deployments.
1815 The management network, being an IP-based network, has the protocols
1816 available to enable network security, but in practice many BAS
1817 systems do not implement even the available security features such as
1818 device authentication or encryption for data in transit.
1820 4.3. BAS Future
1822 In the future we expect more fine-grained environmental monitoring
1823 and lower energy consumption, which will require more sensors and
1824 devices, thus requiring larger and more complex building networks.
1826 We expect building networks to be connected to or converged with
1827 other networks (Enterprise network, Home network, and Internet).
1829 Therefore better facilities for network management, control,
1830 reliability and security are critical in order to improve resident
1831 and operator convenience and comfort. For example the ability to
1832 monitor and control building devices via the internet would enable
1833 (for example) control of room lights or HVAC from a resident's
1834 desktop PC or phone application.
1836 4.4. BAS Asks
1838 The community would like to see an interoperable protocol
1839 specification that can satisfy the timing, security, availability and
1840 QoS constraints described above, such that the resulting converged
1841 network can replace the disparate field networks. Ideally this
1842 connectivity could extend to the open Internet.
1844 This would imply an architecture that can guarantee
1846 o Low communication delays (from <10ms to 100ms in a network of
1847 several hundred devices)
1849 o Low jitter (< 1 ms)
1851 o Tight feedback intervals (1ms - 10ms)
1853 o High network availability (up to 99.9999% )
1855 o Availability of network data in disaster scenario
1857 o Authentication between management and field devices (both local
1858 and remote)
1860 o Integrity and data origin authentication of communication data
1861 between field and management devices
1863 o Confidentiality of data when communicated to a remote device
1865 5. Wireless for Industrial
1867 5.1. Use Case Description
1869 Wireless networks are useful for industrial applications, for example
1870 when portable, fast-moving or rotating objects are involved, and for
1871 the resource-constrained devices found in the Internet of Things
1872 (IoT).
1874 Such network-connected sensors, actuators, control loops, etc.,
1875 typically require that the underlying network support real-time
1876 quality of service (QoS), as well as specific classes of other
1877 network properties such as reliability, redundancy, and security.
1879 These networks may also contain very large numbers of devices, for
1880 example for factories, "big data" acquisition, and the IoT. Given
1881 the large numbers of devices installed, and the potential
1882 pervasiveness of the IoT, this is a huge and very cost-sensitive
1883 market. For example, a 1% cost reduction in some areas could save
1884 $100B.
1886 5.1.1. Network Convergence using 6TiSCH
1888 Some wireless network technologies support real-time QoS, and are
1889 thus useful for these kinds of networks, but others do not. For
1890 example WiFi is pervasive but does not provide guaranteed timing or
1891 delivery of packets, and thus is not useful in this context.
1893 In this use case we focus on one specific wireless network technology
1894 which does provide the required deterministic QoS, which is "IPv6
1895 over the TSCH mode of IEEE 802.15.4e" (6TiSCH, where TSCH stands for
1896 "Time-Slotted Channel Hopping", see [I-D.ietf-6tisch-architecture],
1897 [IEEE802154], [IEEE802154e], and [RFC7554]).
1899 There are other deterministic wireless buses and networks available
1900 today; however, they are incompatible with each other, and
1901 incompatible with IP traffic (for example [ISA100], [WirelessHART]).
1903 Thus the primary goal of this use case is to apply 6TiSCH as a
1904 converged IP- and standards-based wireless network for industrial
1905 applications, i.e. to replace multiple proprietary and/or
1906 incompatible wireless networking and wireless network management
1907 standards.
1909 5.1.2. Common Protocol Development for 6TiSCH
1911 Today there are a number of protocols required by 6TiSCH which are
1912 still in development, and a second intent of this use case is to
1913 highlight the ways in which these "missing" protocols share goals in
1914 common with DetNet. Thus it is possible that some of the protocol
1915 technology developed for DetNet will also be applicable to 6TiSCH.
1917 These protocol goals are identified here, along with their
1918 relationship to DetNet. It is likely that ultimately the resulting
1919 protocols will not be identical, but will share design principles
1920 which contribute to the efficiency of enabling both DetNet and 6TiSCH.
1922 One such commonality is that, although at different time scales, in
1923 both TSN [IEEE802.1TSNTG] and TSCH a packet crossing the network from
1924 node to node follows a precise schedule, like a train that leaves
1925 intermediate stations at precise times along its path. This kind of
1926 operation reduces collisions, saves energy, and enables engineering
1927 the network for deterministic properties.
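The train-timetable analogy can be sketched as a TSCH slotframe: a matrix of cells indexed by (timeslot, channel offset), each dedicated to one directed link. The topology and slot assignments below are invented for illustration; real schedules are computed by a PCE.

```python
# Hypothetical 4-slot, 2-channel-offset slotframe for a 3-hop path
# A -> B -> C -> D.  Each cell is a (timeslot, channel_offset) pair
# dedicated to one transmitter/receiver pair, so every packet "leaves
# each station" at a precise, pre-computed time.
slotframe = {
    (0, 0): ("A", "B"),   # slot 0: A transmits to B on channel offset 0
    (1, 1): ("B", "C"),   # slot 1: B forwards to C on channel offset 1
    (2, 0): ("C", "D"),   # slot 2: C forwards to D on channel offset 0
    # slot 3 is left unscheduled: idle nodes can sleep, saving energy
}

def hops_in_order(schedule):
    """Return the transmit order implied by the timeslot numbers."""
    return [link for (slot, _), link in sorted(schedule.items())]

assert hops_in_order(slotframe) == [("A", "B"), ("B", "C"), ("C", "D")]
```

Because no two scheduled cells share a (timeslot, channel offset) pair, transmissions cannot collide, which is the property that makes the network engineerable for deterministic behavior.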
1929 Another commonality is remote monitoring and scheduling management of
1930 a TSCH network by a Path Computation Element (PCE) and Network
1931 Management Entity (NME). The PCE/NME manage timeslots and device
1932 resources in a manner that minimizes the interaction with and the
1933 load placed on resource-constrained devices. For example, a tiny IoT
1934 device may have just enough buffers to store one or a few IPv6
1935 packets, and will have limited bandwidth between peers such that it
1936 can maintain only a small amount of peer information, and will not be
1937 able to store many packets waiting to be forwarded. It is
1938 advantageous then for it to only be required to carry out the
1939 specific behavior assigned to it by the PCE/NME (as opposed to
1940 maintaining its own IP stack, for example).
1942 Note: Current WG discussion indicates that some peer-to-peer
1943 communication must be assumed, i.e. the PCE may communicate only
1944 indirectly with any given device, enabling hierarchical configuration
1945 of the system.
1947 6TiSCH depends on [PCE] and [I-D.ietf-detnet-architecture].
1949 6TiSCH also depends on the fact that DetNet will maintain consistency
1950 with [IEEE802.1TSNTG].
1952 5.2. Wireless Industrial Today
1954 Today industrial wireless is accomplished using multiple
1955 deterministic wireless networks which are incompatible with each
1956 other and with IP traffic.
1958 6TiSCH is not yet fully specified, so it cannot be used in today's
1959 applications.
1961 5.3. Wireless Industrial Future
1963 5.3.1. Unified Wireless Network and Management
1965 We expect DetNet and 6TiSCH together to enable converged transport of
1966 deterministic and best-effort traffic flows between real-time
1967 industrial devices and wide area networks via IP routing. A high
1968 level view of a basic such network is shown in Figure 7.
1970 ---+-------- ............ ------------
1971 | External Network |
1972 | +-----+
1973 +-----+ | NME |
1974 | | LLN Border | |
1975 | | router +-----+
1976 +-----+
1977 o o o
1978 o o o o
1979 o o LLN o o o
1980 o o o o
1981 o
1983 Figure 7: Basic 6TiSCH Network
1985 Figure 8 shows a backbone router federating multiple synchronized
1986 6TiSCH subnets into a single subnet connected to the external
1987 network.
1989 ---+-------- ............ ------------
1990 | External Network |
1991 | +-----+
1992 | +-----+ | NME |
1993 +-----+ | +-----+ | |
1994 | | Router | | PCE | +-----+
1995 | | +--| |
1996 +-----+ +-----+
1997 | |
1998 | Subnet Backbone |
1999 +--------------------+------------------+
2000 | | |
2001 +-----+ +-----+ +-----+
2002 | | Backbone | | Backbone | | Backbone
2003 o | | router | | router | | router
2004 +-----+ +-----+ +-----+
2005 o o o o o
2006 o o o o o o o o o o o
2007 o o o LLN o o o o
2008 o o o o o o o o o o o o
2010 Figure 8: Extended 6TiSCH Network
2012 The backbone router must ensure end-to-end deterministic behavior
2013 between the LLN and the backbone. We would like to see this
2014 accomplished in conformance with the work done in
2015 [I-D.ietf-detnet-architecture] with respect to Layer-3 aspects of
2016 deterministic networks that span multiple Layer-2 domains.
2018 The PCE must compute a deterministic path end-to-end across the TSCH
2019 network and IEEE802.1 TSN Ethernet backbone, and DetNet protocols are
2020 expected to enable end-to-end deterministic forwarding.
2022 +-----+
2023 | IoT |
2024 | G/W |
2025 +-----+
2026 ^ <---- Elimination
2027 | |
2028 Track branch | |
2029 +-------+ +--------+ Subnet Backbone
2030 | |
2031 +--|--+ +--|--+
2032 | | | Backbone | | | Backbone
2033 o | | | router | | | router
2034 +--/--+ +--|--+
2035 o / o o---o----/ o
2036 o o---o--/ o o o o o
2037 o \ / o o LLN o
2038 o v <---- Replication
2039 o
2041 Figure 9: 6TiSCH Network with PRE
2043 5.3.1.1. PCE and 6TiSCH ARQ Retries
2045 Note: The use of ARQ techniques in DetNet is currently considered
2046 a possible design alternative.
2048 6TiSCH uses the IEEE802.15.4 Automatic Repeat-reQuest (ARQ) mechanism
2049 to provide higher reliability of packet delivery. ARQ is related to
2050 packet replication and elimination because there are two independent
2051 paths for packets to arrive at the destination, and if an expected
2052 packet does not arrive on one path then the destination checks for
2053 the packet on the second path.
2055 Although to date this mechanism is only used by wireless networks,
2056 this may be a technique that would be appropriate for DetNet and so
2057 aspects of the enabling protocol could be co-developed.
2059 For example, in Figure 9, a Track is laid out from a field device in
2060 a 6TiSCH network to an IoT gateway that is located on an IEEE802.1
2061 TSN backbone.
2063 In ARQ the Replication function in the field device sends a copy of
2064 each packet over two different branches, and the PCE schedules each
2065 hop of both branches so that the two copies arrive in due time at the
2066 gateway. In case of a loss on one branch, hopefully the other copy
2067 of the packet still arrives within the allocated time. If two copies
2068 make it to the IoT gateway, the Elimination function in the gateway
2069 ignores the extra packet and presents only one copy to upper layers.
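The Replication/Elimination behavior described above can be sketched as
follows. This is an illustrative model only: the use of sequence numbers
as the duplicate-identification mechanism is an assumption for the
sketch, not something this draft or 6TiSCH mandates.

```python
# A minimal sketch of the Elimination function performed by the IoT
# gateway in Figure 9. Sequence numbers are assumed here as the
# packet identification mechanism; this is illustrative only.

class EliminationFunction:
    def __init__(self):
        self.delivered = set()  # sequence numbers already passed up

    def receive(self, seq_num, payload):
        """Present the first copy of each packet to upper layers and
        ignore the duplicate arriving over the other Track branch."""
        if seq_num in self.delivered:
            return None                 # extra copy: eliminated
        self.delivered.add(seq_num)
        return payload                  # first copy: delivered

gw = EliminationFunction()
assert gw.receive(7, "sample") == "sample"   # copy from branch 1
assert gw.receive(7, "sample") is None       # replica from branch 2
```

The Replication function at the field device is the mirror image:
it sends a copy of each packet, carrying the same identifier, over
both scheduled branches.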
2071 At each 6TiSCH hop along the Track, the PCE may schedule more than
2072 one timeSlot for a packet, so as to support Layer-2 retries (ARQ).
2074 In current deployments, a TSCH Track does not necessarily support PRE
2075 but is systematically multi-path. This means that a Track is
2076 scheduled so as to ensure that each hop has at least two forwarding
2077 solutions, and the forwarding decision is to try the preferred one
2078 and use the other in case of Layer-2 transmission failure as detected
2079 by ARQ.
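The per-hop forwarding decision on such a multi-path Track can be
sketched as below. send_l2() is a hypothetical link-layer primitive,
assumed here to return True on an ARQ-acknowledged transmission; it is
not a defined 6TiSCH interface.

```python
# Sketch of the per-hop forwarding decision on a multi-path TSCH
# Track: try the preferred forwarding solution first, and use the
# alternate only if Layer-2 transmission fails as detected by ARQ.
# send_l2() is a hypothetical link-layer primitive returning True on
# an ARQ-acknowledged transmission.

def forward(packet, preferred_hop, alternate_hop, send_l2):
    if send_l2(packet, preferred_hop):
        return preferred_hop    # acknowledged on the preferred path
    if send_l2(packet, alternate_hop):
        return alternate_hop    # Layer-2 retry over the other solution
    return None                 # both forwarding solutions failed

# Simulated link layer in which only the link to "C" is up:
assert forward("pkt", "B", "C", lambda p, hop: hop == "C") == "C"
```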
2081 5.3.2. Schedule Management by a PCE
2083 A common feature of 6TiSCH and DetNet is the action of a PCE to
2084 configure paths through the network. Specifically, what is needed is
2085 a protocol and data model that the PCE will use to get/set the
2086 relevant configuration from/to the devices, as well as perform
2087 operations on the devices. We expect that this protocol will be
2088 developed by DetNet with consideration for its reuse by 6TiSCH. The
2089 remainder of this section provides a bit more context from the 6TiSCH
2090 side.
2092 5.3.2.1. PCE Commands and 6TiSCH CoAP Requests
2094 The 6TiSCH device does not expect to place the request for bandwidth
2095 between itself and another device in the network. Rather, an
2096 operation control system invoked through a human interface specifies
2097 the required traffic specification and the end nodes (in terms of
2098 latency and reliability). Based on this information, the PCE must
2099 compute a path between the end nodes and provision the network with
2100 per-flow state that describes the per-hop operation for a given
2101 packet, the corresponding timeslots, and the flow identification that
2102 enables recognizing that a certain packet belongs to a certain path,
2103 etc.
2105 For a static configuration that serves a certain purpose for a long
2106 period of time, it is expected that a node will be provisioned in one
2107 shot with a full schedule, which incorporates the aggregation of its
2108 behavior for multiple paths. 6TiSCH expects that the programming of
2109 the schedule will be done over CoAP as discussed in
2110 [I-D.ietf-6tisch-coap].
2112 6TiSCH expects that the PCE commands will be mapped back and forth
2113 into CoAP by a gateway function at the edge of the 6TiSCH network.
2114 For instance, it is possible that a mapping entity on the backbone
2115 transforms a non-CoAP protocol such as PCEP into the RESTful
2116 interfaces that the 6TiSCH devices support. This architecture will
2117 be refined to comply with DetNet [I-D.ietf-detnet-architecture] when
2118 the work is formalized. Related information about 6TiSCH can be
2119 found at [I-D.ietf-6tisch-6top-interface] and RPL [RFC6550].
2121 A protocol may be used to update the state in the devices during
2122 runtime, for example if it appears that a path through the network
2123 has ceased to perform as expected, but in 6TiSCH that flow was not
2124 designed and no protocol was selected. We would like to see DetNet
2125 define the appropriate end-to-end protocols to be used in that case.
2126 The implication is that these state updates take place once the
2127 system is configured and running, i.e. they are not limited to the
2128 initial communication of the configuration of the system.
2130 A "slotFrame" is the base object that a PCE would manipulate to
2131 program a schedule into an LLN node ([I-D.ietf-6tisch-architecture]).
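As an illustration of the kind of object involved, a slotFrame can be
modeled as a repeating matrix of cells indexed by timeslot offset and
channel offset. All field names below are assumptions made for this
sketch; they are not the normative 6TiSCH data model.

```python
# Illustrative (non-normative) model of a 6TiSCH slotFrame: a
# repeating matrix of cells indexed by (timeslot offset, channel
# offset) that a PCE could program into an LLN node. Field names
# are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class Cell:
    slot_offset: int      # position within the slotFrame
    channel_offset: int   # channel-hopping offset
    neighbor: str         # link-layer peer for this cell
    tx: bool              # True = transmit cell, False = receive cell

@dataclass
class SlotFrame:
    length: int                          # timeslots per iteration
    cells: list = field(default_factory=list)

    def schedule(self, cell):
        if not 0 <= cell.slot_offset < self.length:
            raise ValueError("slot offset outside the slotFrame")
        self.cells.append(cell)

sf = SlotFrame(length=101)
sf.schedule(Cell(slot_offset=5, channel_offset=2,
                 neighbor="node-A", tx=True))
assert len(sf.cells) == 1
```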
2133 We would like to see the PCE read energy data from devices, and
2134 compute paths that will implement policies on how energy in devices
2135 is consumed, for instance to ensure that the spent energy does not
2136 exceed the available energy over a period of time. Note: this
2137 statement implies that an extensible protocol for communicating
2138 device info to the PCE and enabling the PCE to act on it will be part
2139 of the DetNet architecture, however for subnets with specific
2140 protocols (e.g. CoAP) a gateway may be required.
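Under an assumed model in which each scheduled cell costs a fixed
amount of energy, the policy described above could be checked as
follows. All names and figures here are illustrative assumptions, not
part of any defined protocol or data model.

```python
# Hedged sketch of the energy policy described above: reject a path
# whose scheduled cells would make a device spend more energy than
# it has available over a period. The fixed per-cell energy model
# and all names are illustrative assumptions.

def path_within_energy_budget(cells_per_device, energy_per_cell_uj,
                              available_uj_per_period):
    """cells_per_device: {device: cells this path adds per period}."""
    for device, cells in cells_per_device.items():
        spent = cells * energy_per_cell_uj
        if spent > available_uj_per_period.get(device, 0):
            return False   # spent energy would exceed available energy
    return True

assert path_within_energy_budget({"n1": 10}, 50, {"n1": 1000}) is True
assert path_within_energy_budget({"n1": 30}, 50, {"n1": 1000}) is False
```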
2142 6TiSCH devices can discover their neighbors over the radio using a
2143 mechanism such as beacons, but even though the neighbor information
2144 is available in the 6TiSCH interface data model, 6TiSCH does not
2145 describe a protocol to proactively push the neighborhood information
2146 to a PCE. We would like to see DetNet define such a protocol; one
2147 possible design alternative is that it could operate over CoAP;
2148 alternatively it could be converted to/from CoAP by a gateway. We
2149 would like to see such a protocol carry multiple metrics, for example
2150 similar to those used for RPL operations [RFC6551].
2152 5.3.2.2. 6TiSCH IP Interface
2154 "6top" ([I-D.wang-6tisch-6top-sublayer]) is a logical link control
2155 sitting between the IP layer and the TSCH MAC layer which provides
2156 the link abstraction that is required for IP operations. The 6top
2157 data model and management interfaces are further discussed in
2158 [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].
2160 An IP packet that is sent along a 6TiSCH path uses the Differentiated
2161 Services Per-Hop-Behavior Group called Deterministic Forwarding, as
2162 described in [I-D.svshah-tsvwg-deterministic-forwarding].
2164 5.3.3. 6TiSCH Security Considerations
2166 On top of the classical requirements for protection of control
2167 signaling, it must be noted that 6TiSCH networks operate on limited
2168 resources that can be depleted rapidly in a DoS attack on the system,
2169 for instance by placing a rogue device in the network, or by
2170 obtaining management control and setting up unexpected additional
2171 paths.
2173 5.4. Wireless Industrial Asks
2175 6TiSCH depends on DetNet to define:
2177 o Configuration (state) and operations for deterministic paths
2179 o End-to-end protocols for deterministic forwarding (tagging, IP)
2181 o Protocol for packet replication and elimination
2183 6. Cellular Radio
2185 6.1. Use Case Description
2187 This use case describes the application of deterministic networking
2188 in the context of cellular telecom transport networks. Important
2189 elements include time synchronization, clock distribution, and ways
2190 of establishing time-sensitive streams for both Layer-2 and Layer-3
2191 user plane traffic.
2193 6.1.1. Network Architecture
2195 Figure 10 illustrates a typical 3GPP-defined cellular network
2196 architecture, which includes "Fronthaul", "Midhaul" and "Backhaul"
2197 network segments. The "Fronthaul" is the network connecting base
2198 stations (baseband processing units) to the remote radio heads
2199 (antennas). The "Midhaul" is the network inter-connecting base
2200 stations (or small cell sites). The "Backhaul" is the network or
2201 links connecting the radio base station sites to the network
2202 controller/gateway sites (i.e. the core of the 3GPP cellular
2203 network).
2205 In Figure 10 "eNB" ("E-UTRAN Node B") is the hardware that is
2206 connected to the mobile phone network which communicates directly
2207 with mobile handsets ([TS36300]).
2209 Y (remote radio heads (antennas))
2210 \
2211 Y__ \.--. .--. +------+
2212 \_( `. +---+ _(Back`. | 3GPP |
2213 Y------( Front )----|eNB|----( Haul )----| core |
2214 ( ` .Haul ) +---+ ( ` . ) ) | netw |
2215 /`--(___.-' \ `--(___.-' +------+
2216 Y_/ / \.--. \
2217 Y_/ _( Mid`. \
2218 ( Haul ) \
2219 ( ` . ) ) \
2220 `--(___.-'\_____+---+ (small cell sites)
2221 \ |SCe|__Y
2222 +---+ +---+
2223 Y__|eNB|__Y
2224 +---+
2225 Y_/ \_Y ("local" radios)
2227 Figure 10: Generic 3GPP-based Cellular Network Architecture
2229 6.1.2. Delay Constraints
2231 The available processing time for Fronthaul networking overhead is
2232 limited to the available time after the baseband processing of the
2233 radio frame has completed. For example in Long Term Evolution (LTE)
2234 radio, processing of a radio frame is allocated 3ms but typically the
2235 processing uses most of it, allowing only a small fraction to be used
2236 by the Fronthaul network (e.g. up to 250us one-way delay, though the
2237 existing spec ([NGMN-fronth]) supports delay only up to 100us). This
2238 ultimately determines the distance the remote radio heads can be
2239 located from the base stations (e.g., 100us equals roughly 20 km of
2240 optical fiber-based transport). Allocation options of the available
2241 time budget between processing and transport are under heavy
2242 discussions in the mobile industry.
2244 For packet-based transport the allocated transport time (e.g. CPRI
2245 would allow for 100us delay [CPRI]) is consumed by all nodes and
2246 buffering between the remote radio head and the baseband processing
2247 unit, plus the distance-incurred delay.
2249 The baseband processing time and the available "delay budget" for the
2250 fronthaul is likely to change in the forthcoming "5G" due to reduced
2251 radio round trip times and other architectural and service
2252 requirements [NGMN].
2254 The transport time budget, as noted above, places limitations on the
2255 distance that remote radio heads can be located from base stations
2256 (i.e. the link length). In the above analysis, the entire transport
2257 time budget is assumed to be available for link propagation delay.
2258 However the transport time budget can be broken down into three
2259 components: scheduling /queueing delay, transmission delay, and link
2260 propagation delay. Using today's Fronthaul networking technology,
2261 the queuing, scheduling and transmission components might become the
2262 dominant factors in the total transport time rather than the link
2263 propagation delay. This is especially true in cases where the
2264 Fronthaul link is relatively short and it is shared among multiple
2265 Fronthaul flows, for example in indoor and small cell networks,
2266 massive MIMO antenna networks, and split Fronthaul architectures.
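The budget decomposition above can be illustrated numerically. The
~5 us/km fiber propagation delay used here is an assumption, consistent
with the earlier figure that 100us equals roughly 20 km of optical
fiber-based transport.

```python
# Illustrative fronthaul delay-budget arithmetic. The ~5 us/km fiber
# propagation delay is an assumption consistent with the text's
# figure that 100us equals roughly 20 km of optical fiber.

FIBER_DELAY_US_PER_KM = 5.0

def max_link_length_km(budget_us, queuing_us, transmission_us):
    """Whatever remains of the one-way transport budget after
    queuing/scheduling and transmission delay bounds the link length."""
    propagation_us = budget_us - queuing_us - transmission_us
    if propagation_us < 0:
        raise ValueError("queuing + transmission exceed the budget")
    return propagation_us / FIBER_DELAY_US_PER_KM

# Full 100us budget available for propagation: ~20 km, as in the text.
assert max_link_length_km(100, 0, 0) == 20.0
# If queuing/scheduling and transmission consume 50us, only 10 km remain.
assert max_link_length_km(100, 40, 10) == 10.0
```

This is the sense in which DetNet's reduction of queuing, scheduling
and transmission delay leaves more of the budget for propagation.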
2268 DetNet technology can improve this application by controlling and
2269 reducing the time required for the queuing, scheduling and
2270 transmission operations by properly assigning the network resources,
2271 thus leaving more of the transport time budget available for link
2272 propagation, and thus enabling longer link lengths. However, link
2273 length is usually a given parameter rather than a controllable
2274 network parameter, since RRH and BBU sites are usually located in
2275 predetermined locations. On the other hand, the number of antennas
2276 at an RRH site might increase, for example to improve the MIMO
2277 capability of the network or to support massive MIMO, which
2278 increases the number of fronthaul flows sharing the same fronthaul
2279 link. DetNet can then control the bandwidth assignment of the
2280 fronthaul link and the scheduling of fronthaul packets over this
2281 link, and provide adequate buffer provisioning for each flow to
2282 reduce the packet loss rate.
2284 Another way in which DetNet technology can aid Fronthaul networks is
2285 by providing effective isolation from best-effort (and other classes
2286 of) traffic, which can arise as a result of network slicing in 5G
2287 networks where Fronthaul traffic generated in different network
2288 slices might have differing performance requirements. DetNet
2289 technology can also dynamically control the bandwidth assignment,
2290 scheduling and packet forwarding decisions and the buffer
2291 provisioning of the Fronthaul flows to guarantee the end-to-end delay
2292 of the Fronthaul packets and minimize the packet loss rate.
2294 [METIS] documents the fundamental challenges as well as overall
2295 technical goals of the future 5G mobile and wireless system as the
2296 starting point. These future systems should support much higher data
2297 volumes and rates and significantly lower end-to-end latency for 100x
2298 more connected devices (at similar cost and energy consumption levels
2299 as today's system).
2301 For Midhaul connections, delay constraints are driven by Inter-Site
2302 radio functions like Coordinated Multipoint Processing (CoMP, see
2303 [CoMP]). CoMP reception and transmission is a framework in which
2304 multiple geographically distributed antenna nodes cooperate to
2305 improve the performance of the users served in the common cooperation
2306 area. The design principle of CoMP is to extend the current single-
2307 cell to multi-UE (User Equipment) transmission to a multi-cell-to-
2308 multi-UE transmission by base station cooperation.
2310 CoMP has delay-sensitive performance parameters, which are "midhaul
2311 latency" and "CSI (Channel State Information) reporting and
2312 accuracy". The essential feature of CoMP is signaling between eNBs,
2313 so Midhaul latency is the dominating limitation of CoMP performance.
2314 Generally, CoMP can benefit from coordinated scheduling (either
2315 distributed or centralized) of different cells if the signaling delay
2316 between eNBs is within 1-10ms. This delay requirement is both rigid
2317 and absolute because any uncertainty in delay will degrade the
2318 performance significantly.
2320 Inter-site CoMP is one of the key requirements for 5G and is also a
2321 near-term goal for the current 4.5G network architecture.
2323 6.1.3. Time Synchronization Constraints
2325 Fronthaul time synchronization requirements are given by [TS25104],
2326 [TS36104], [TS36211], and [TS36133]. These can be summarized for the
2327 current 3GPP LTE-based networks as:
2329 Delay Accuracy:
2330 +-8ns (i.e. +-1/32 Tc, where Tc is the UMTS Chip time of 1/3.84
2331 MHz) resulting in a round trip accuracy of +-16ns. The value is
2332 this low to meet the 3GPP Timing Alignment Error (TAE) measurement
2333 requirements. Note: performance guarantees of low nanosecond
2334 values such as these are considered to be below the DetNet layer -
2335 it is assumed that the underlying implementation, e.g. the
2336 hardware, will provide sufficient support (e.g. buffering) to
2337 enable this level of accuracy. These values are maintained in the
2338 use case to give an indication of the overall application.
2340 Timing Alignment Error:
2341 Timing Alignment Error (TAE) is problematic to Fronthaul networks
2342 and must be minimized. If the transport network cannot guarantee
2343 low enough TAE then additional buffering has to be introduced at
2344 the edges of the network to buffer out the jitter. Buffering is
2345 not desirable as it reduces the total available delay budget.
2346 Packet Delay Variation (PDV) requirements can be derived from TAE
2347 for packet based Fronthaul networks.
2349 * For multiple input multiple output (MIMO) or TX diversity
2350 transmissions, at each carrier frequency, TAE shall not exceed
2351 65 ns (i.e. 1/4 Tc).
2353 * For intra-band contiguous carrier aggregation, with or without
2354 MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2
2355 Tc).
2357 * For intra-band non-contiguous carrier aggregation, with or
2358 without MIMO or TX diversity, TAE shall not exceed 260 ns (i.e.
2359 one Tc).
2361 * For inter-band carrier aggregation, with or without MIMO or TX
2362 diversity, TAE shall not exceed 260 ns.
2364 Transport link contribution to radio frequency error:
2365 +-2 PPB. This value is considered to be "available" for the
2366 Fronthaul link out of the total 50 PPB budget reserved for the
2367 radio interface. Note: the reason that the transport link
2368 contributes to radio frequency error is as follows. The current
2369 way of doing Fronthaul is from the radio unit to remote radio head
2370 directly. The remote radio head is essentially a passive device
2371 (without buffering etc.). The transport drives the antenna
2372 directly by feeding it with samples and everything the transport
2373 adds will be introduced to the radio as-is. So if the transport
2374 causes additional frequency error, it shows immediately on the
2375 radio as well. Note: performance guarantees of low nanosecond
2376 values such as these are considered to be below the DetNet layer -
2377 it is assumed that the underlying implementation, e.g. the
2378 hardware, will provide sufficient support to enable this level of
2379 performance. These values are maintained in the use case to give
2380 an indication of the overall application.
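The Tc-derived limits listed above can be checked numerically; Tc is
the UMTS chip time of 1/3.84 MHz, i.e. roughly 260.4 ns. The following
sketch only verifies the arithmetic of the listed values; it is not a
DetNet mechanism.

```python
# Numerical check of the Tc-derived timing limits listed above.
# Tc is the UMTS chip time: 1 / 3.84 MHz ~= 260.4 ns.
TC_NS = 1e9 / 3.84e6

limits_ns = {
    "delay accuracy (+-1/32 Tc)":          TC_NS / 32,  # ~8 ns
    "MIMO / TX diversity (1/4 Tc)":        TC_NS / 4,   # ~65 ns
    "intra-band contiguous CA (1/2 Tc)":   TC_NS / 2,   # ~130 ns
    "intra-band non-contiguous CA (1 Tc)": TC_NS,       # ~260 ns
}

assert round(TC_NS / 32) == 8
assert round(TC_NS / 4) == 65
assert round(TC_NS / 2) == 130
assert round(TC_NS) == 260
```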
2382 The above listed time synchronization requirements are difficult to
2383 meet with point-to-point connected networks, and more difficult when
2384 the network includes multiple hops. It is expected that networks
2385 must include buffering at the ends of the connections as imposed by
2386 the jitter requirements, since trying to meet the jitter requirements
2387 in every intermediate node is likely to be too costly. However,
2388 every measure to reduce jitter and delay on the path makes it easier
2389 to meet the end-to-end requirements.
2391 In order to meet the timing requirements both senders and receivers
2392 must remain time synchronized, demanding very accurate clock
2393 distribution, for example support for IEEE 1588 transparent clocks or
2394 boundary clocks in every intermediate node.
2396 In cellular networks from the LTE radio era onward, phase
2397 synchronization is needed in addition to frequency synchronization
2398 ([TS36300], [TS23401]). Time constraints are also important due to
2399 their impact on packet loss. If a packet is delivered too late, then
2400 the packet may be dropped by the host.
2402 6.1.4. Transport Loss Constraints
2404 Fronthaul and Midhaul networks assume almost error-free transport.
2405 Errors can result in a reset of the radio interfaces, which can cause
2406 reduced throughput or broken radio connectivity for mobile customers.
2408 For packetized Fronthaul and Midhaul connections packet loss may be
2409 caused by BER, congestion, or network failure scenarios. Different
2410 fronthaul functional splits are being considered by 3GPP, requiring
2411 strict frame loss ratio (FLR) guarantees. As one example (referring
2412 to the legacy CPRI split, which is option 8 in 3GPP) lower-layer
2413 splits may imply an FLR of less than 10E-7 for data traffic and less
2414 than 10E-6 for control and management traffic. Current tools for
2415 eliminating packet loss for Fronthaul and Midhaul networks have
2416 serious challenges, for example retransmitting lost packets and/or
2417 using forward error correction (FEC) to circumvent bit errors is
2418 practically impossible due to the additional delay incurred. Using
2419 redundant streams for better guarantees for delivery is also
2420 practically impossible in many cases due to high bandwidth
2421 requirements of Fronthaul and Midhaul networks. Protection switching
2422 is also a candidate but current technologies for the path switch are
2423 too slow to avoid reset of mobile interfaces.
2425 Fronthaul links are assumed to be symmetric, and all Fronthaul
2426 streams (i.e. those carrying radio data) have equal priority and
2427 cannot delay or pre-empt each other. This implies that the network
2428 must guarantee that each time-sensitive flow meets its schedule.
2430 6.1.5. Security Considerations
2432 Establishing time-sensitive streams in the network entails reserving
2433 networking resources for long periods of time. It is important that
2434 these reservation requests be authenticated to prevent malicious
2435 reservation attempts from hostile nodes (or accidental
2436 misconfiguration). This is particularly important in the case where
2437 the reservation requests span administrative domains. Furthermore,
2438 the reservation information itself should be digitally signed to
2439 reduce the risk of a legitimate node pushing a stale or hostile
2440 configuration into another networking node.
2442 Note: This is considered important for the security policy of the
2443 network, but does not affect the core DetNet architecture and design.
2445 6.2. Cellular Radio Networks Today
2447 6.2.1. Fronthaul
2449 Today's Fronthaul networks typically consist of:
2451 o Dedicated point-to-point fiber connections
2453 o Proprietary protocols and framings
2455 o Custom equipment and no real networking
2457 Current solutions for Fronthaul are direct optical cables or
2458 Wavelength-Division Multiplexing (WDM) connections.
2460 6.2.2. Midhaul and Backhaul
2462 Today's Midhaul and Backhaul networks typically consist of:
2464 o Mostly normal IP networks, MPLS-TP, etc.
2466 o Clock distribution and sync using 1588 and SyncE
2468 Telecommunication networks in the Mid- and Backhaul are already
2469 heading towards transport networks where precise time synchronization
2470 support is one of the basic building blocks. While the transport
2471 networks themselves have practically transitioned to all-IP packet-
2472 based networks to meet the bandwidth and cost requirements, highly
2473 accurate clock distribution has become a challenge.
2475 In the past, Mid- and Backhaul connections were typically based on
2476 Time Division Multiplexing (TDM-based) and provided frequency
2477 synchronization capabilities as a part of the transport media.
2478 Alternatively other technologies such as Global Positioning System
2479 (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].
2481 Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985]
2482 for legacy transport support) have become popular tools to build and
2483 manage new all-IP Radio Access Networks (RANs)
2484 [I-D.kh-spring-ip-ran-use-case]. Although various timing and
2485 synchronization optimizations have already been proposed and
2486 implemented including 1588 PTP enhancements
2487 [I-D.ietf-tictoc-1588overmpls] and [I-D.ietf-mpls-residence-time],
2488 these solutions are not necessarily sufficient for the forthcoming
2489 RAN architectures, nor do they guarantee the more stringent time-
2490 synchronization requirements such as [CPRI].
2492 There are also existing solutions for TDM over IP such as [RFC4553],
2493 [RFC5086], and [RFC5087], as well as TDM over Ethernet transports
2494 such as [MEF8].
2496 6.3. Cellular Radio Networks Future
2498 Future Cellular Radio Networks will be based on a mix of different
2499 xHaul networks (xHaul = front-, mid- and backhaul), and future
2500 transport networks should be able to support all of them
2501 simultaneously. It is already envisioned today that:
2503 o Not all "cellular radio network" traffic will be IP, for example
2504 some will remain at Layer 2 (e.g. Ethernet based). DetNet
2505 solutions must address all traffic types (Layer 2, Layer 3) with
2506 the same tools and allow their transport simultaneously.
2508 o All forms of xHaul networks will need some form of DetNet
2509 solutions. For example with the advent of 5G some Backhaul
2510 traffic will also have DetNet requirements, for example traffic
2511 belonging to time-critical 5G applications.
2513 o Different splits of the functionality run on the base stations and
2514 the on-site units could co-exist on the same Fronthaul and
2515 Backhaul network.
2517 We would like to see the following in future Cellular Radio networks:
2519 o Unified standards-based transport protocols and standard
2520 networking equipment that can make use of underlying deterministic
2521 link-layer services
2523 o Unified and standards-based network management systems and
2524 protocols in all parts of the network (including Fronthaul)
2526 New radio access network deployment models and architectures may
2527 require time-sensitive networking services with strict requirements
2528 on other parts of the network that previously were not considered to
2529 be packetized at all. Time and synchronization support are already
2530 topical for Backhaul and Midhaul packet networks [MEF22.1.1] and are
2531 becoming a real issue for Fronthaul networks also. Specifically in
2532 Fronthaul networks the timing and synchronization requirements can be
2533 extreme for packet based technologies, for example, on the order of
2534 sub +-20 ns packet delay variation (PDV) and frequency accuracy of
2535 +0.002 PPM [Fronthaul].
2537 The actual transport protocols and/or solutions to establish required
2538 transport "circuits" (pinned-down paths) for Fronthaul traffic are
2539 still undefined. Those are likely to include (but are not limited
2540 to) solutions directly over Ethernet, over IP, and using MPLS/
2541 PseudoWire transport.
2543 Even the current time-sensitive networking features may not be
2544 sufficient for Fronthaul traffic. Therefore, having specific
2545 profiles that take the requirements of Fronthaul into account is
2546 desirable [IEEE8021CM].
2548 Interesting and important work for time-sensitive networking has been
2549 done for Ethernet [TSNTG], which specifies the use of the IEEE 1588
2550 Precision Time Protocol (PTP) [IEEE1588] in the context of IEEE 802.1D and
2551 IEEE 802.1Q. [IEEE8021AS] specifies a Layer 2 time synchronizing
2552 service, and other specifications such as IEEE 1722 [IEEE1722]
2553 specify Ethernet-based Layer-2 transport for time-sensitive streams.
2555 New promising work seeks to enable the transport of time-sensitive
2556 fronthaul streams in Ethernet bridged networks [IEEE8021CM].
2557 Analogous to IEEE 1722 there is an ongoing standardization effort to
2558 define the Layer-2 transport encapsulation format for transporting
2559 radio over Ethernet (RoE) in the IEEE 1904.3 Task Force [IEEE19143].
2561 As mentioned in Section 6.1.2, 5G communications will provide one of
2562 the most challenging cases for delay sensitive networking. In order
2563 to meet the challenges of ultra-low latency and ultra-high
2564 throughput, 3GPP has studied various "functional splits" for 5G,
2565 i.e., physical decomposition of the gNodeB base station and
2566 deployment of its functional blocks in different locations [TR38801].
2568 These splits are numbered from split option 1 (Dual Connectivity, a
2569 split in which the radio resource control is centralized and other
2570 radio stack layers are in distributed units) to split option 8 (a
2571 PHY-RF split in which RF functionality is in a distributed unit and
2572 the rest of the radio stack is in the centralized unit), with each
2573 intermediate split having its own data rate and delay requirements.
2574 Packetized versions of different splits have recently been proposed
2575 including eCPRI [eCPRI] and RoE (as previously noted). Both provide
2576 Ethernet encapsulations, and eCPRI is also capable of IP
2577 encapsulation.
2579 All-IP RANs and xHaul networks would benefit from time
2580 synchronization and time-sensitive transport services. Although
2581 Ethernet appears to be the unifying technology for the transport,
2582 there is still a disconnect in providing Layer 3 services. The
2583 protocol stack typically has a number of layers below the Ethernet
2584 Layer 2 that is presented to the Layer 3 IP transport. It is not
2585 uncommon that on top of the lowest-layer (optical) transport there
2586 is a first layer of Ethernet, followed by one or more layers of
2587 MPLS, PseudoWires and/or other tunneling protocols, finally
2588 carrying the Ethernet layer visible to the user plane IP traffic.
2590 While there are existing technologies to establish circuits through
2591 the routed and switched networks (especially in MPLS/PWE space),
2592 there is still no way to signal the time synchronization and time-
2593 sensitive stream requirements/reservations for Layer-3 flows in a way
2594 that addresses the entire transport stack, including the Ethernet
2595 layers that need to be configured.
2597 Furthermore, not all "user plane" traffic will be IP. Therefore, the
2598 same solution also must address the use cases where the user plane
2599 traffic is a different layer, for example Ethernet frames.
2601 There is existing work describing the problem statement
2602 [I-D.ietf-detnet-problem-statement] and the architecture
2603 [I-D.ietf-detnet-architecture] for deterministic networking (DetNet)
2604 that targets solutions for time-sensitive (IP/transport) streams with
2605 deterministic properties over Ethernet-based switched networks.
2607 6.4. Cellular Radio Networks Asks
2609 A standard for data plane transport specification which is:
2611 o Unified among all xHauls (meaning that different flows with
2612 diverse DetNet requirements can coexist in the same network and
2613 traverse the same nodes without interfering with each other)
2615 o Deployed in a highly deterministic network environment
2617 o Capable of supporting multiple functional splits simultaneously,
2618 including existing Backhaul and CPRI Fronthaul and potentially new
2619 modes as defined for example in 3GPP; these goals can be supported
2620 by the existing DetNet Use Case Common Themes, notably "Mix of
2621 Deterministic and Best-Effort Traffic", "Bounded Latency", "Low
2622 Latency", "Symmetrical Path Delays", and "Deterministic Flows".
2624 o Capable of supporting Network Slicing and Multi-tenancy; these
2625 goals can be supported by the same DetNet themes noted above.
2627 o Capable of transporting both in-band and out-band control traffic
2628 (OAM info, ...).
2630 o Deployable over multiple data link technologies (e.g., IEEE 802.3,
2631 mmWave, etc.).
2633 A standard for data flow information models that are:
2635 o Aware of the time sensitivity and constraints of the target
2636 networking environment
2638 o Aware of underlying deterministic networking services (e.g., on
2639 the Ethernet layer)
2641 7. Industrial M2M
2643 7.1. Use Case Description
2645 Industrial Automation in general refers to automation of
2646 manufacturing, quality control and material processing. In this
2647 "machine to machine" (M2M) use case we consider machine units in a
2648 plant floor which periodically exchange data with upstream or
2649 downstream machine modules and/or a supervisory controller within a
2650 local area network.
2652    The actors of M2M communication are Programmable Logic Controllers
2653    (PLCs).  Communication between PLCs, and between PLCs and the
2654    supervisory PLC (S-PLC), is achieved via critical control/data
2655    streams, as shown in Figure 11.
2657 S (Sensor)
2658 \ +-----+
2659 PLC__ \.--. .--. ---| MES |
2660 \_( `. _( `./ +-----+
2661 A------( Local )-------------( L2 )
2662 ( Net ) ( Net ) +-------+
2663 /`--(___.-' `--(___.-' ----| S-PLC |
2664 S_/ / PLC .--. / +-------+
2665 A_/ \_( `.
2666 (Actuator) ( Local )
2667 ( Net )
2668 /`--(___.-'\
2669 / \ A
2670 S A
2672 Figure 11: Current Generic Industrial M2M Network Architecture
2674    This use case focuses on PLC-related communications; communication to
2675    Manufacturing-Execution-Systems (MESs) is not addressed.
2677 This use case covers only critical control/data streams; non-critical
2678 traffic between industrial automation applications (such as
2679 communication of state, configuration, set-up, and database
2680    communication) is adequately served by currently available
2681 prioritizing techniques. Such traffic can use up to 80% of the total
2682 bandwidth required. There is also a subset of non-time-critical
2683 traffic that must be reliable even though it is not time sensitive.
2685 In this use case the primary need for deterministic networking is to
2686 provide end-to-end delivery of M2M messages within specific timing
2687 constraints, for example in closed loop automation control. Today
2688 this level of determinism is provided by proprietary networking
2689 technologies. In addition, standard networking technologies are used
2690 to connect the local network to remote industrial automation sites,
2691 e.g. over an enterprise or metro network which also carries other
2692 types of traffic. Therefore, flows that should be forwarded with
2693 deterministic guarantees need to be sustained regardless of the
2694 amount of other flows in those networks.
2696 7.2. Industrial M2M Communication Today
2698 Today, proprietary networks fulfill the needed timing and
2699 availability for M2M networks.
2701 The network topologies used today by industrial automation are
2702 similar to those used by telecom networks: Daisy Chain, Ring, Hub and
2703 Spoke, and Comb (a subset of Daisy Chain).
2705 PLC-related control/data streams are transmitted periodically and
2706 carry either a pre-configured payload or a payload configured during
2707 runtime.
2709 Some industrial applications require time synchronization at the end
2710 nodes. For such time-coordinated PLCs, accuracy of 1 microsecond is
2711 required. Even in the case of "non-time-coordinated" PLCs time sync
2712 may be needed e.g. for timestamping of sensor data.
2714 Industrial network scenarios require advanced security solutions.
2715 Many of the current industrial production networks are physically
2716    separated.  Preventing critical flows from being leaked outside a
2717    domain is handled today by filtering policies that are typically
2718    enforced in firewalls.
2720 7.2.1. Transport Parameters
2722 The Cycle Time defines the frequency of message(s) between industrial
2723 actors. The Cycle Time is application dependent, in the range of 1ms
2724 - 100ms for critical control/data streams.
2726    Because industrial applications assume deterministic transport for
2727    critical control/data streams (rather than defining separate latency
2728    and delay variation parameters), it is sufficient to fulfill the
2729    upper bound of latency (maximum latency).  The underlying networking
2730 infrastructure must ensure a maximum end-to-end delivery time of
2731 messages in the range of 100 microseconds to 50 milliseconds
2732 depending on the control loop application.
2734 The bandwidth requirements of control/data streams are usually
2735 calculated directly from the bytes-per-cycle parameter of the control
2736 loop. For PLC-to-PLC communication one can expect 2 - 32 streams
2737 with packet size in the range of 100 - 700 bytes. For S-PLC to PLCs
2738 the number of streams is higher - up to 256 streams. Usually no more
2739 than 20% of available bandwidth is used for critical control/data
2740 streams. In today's networks 1Gbps links are commonly used.
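The bandwidth calculation described above can be illustrated with a short worked example (the values are illustrative picks from the ranges given above, not from any particular deployment):

```python
# Rough bandwidth estimate for critical control/data streams, using
# illustrative values within the ranges above (not measured data).

def stream_bandwidth_bps(packet_bytes, cycle_time_s):
    """One packet per cycle: bandwidth = packet size / cycle time."""
    return packet_bytes * 8 / cycle_time_s

# e.g. 32 PLC-to-PLC streams of 700-byte packets at a 1 ms cycle time:
per_stream = stream_bandwidth_bps(700, 0.001)   # 5.6 Mbit/s per stream
total = 32 * per_stream                          # 179.2 Mbit/s in total
link = 1e9                                       # 1 Gbps link
assert total / link < 0.20  # within the ~20% critical-traffic budget
```

Even this worst-case combination of packet size, cycle time and stream count stays under the ~20% share of a 1 Gbps link noted above.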
2742 Most PLC control loops are rather tolerant of packet loss, however
2743 critical control/data streams accept no more than 1 packet loss per
2744 consecutive communication cycle (i.e. if a packet gets lost in cycle
2745 "n", then the next cycle ("n+1") must be lossless). After two or
2746 more consecutive packet losses the network may be considered to be
2747 "down" by the Application.
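The loss-tolerance rule above (isolated losses are acceptable, two consecutive lost cycles are not) can be sketched as a simple per-cycle check; this monitor is a hypothetical illustration, not part of any standard:

```python
# Sketch of the "no two consecutive cycle losses" rule: the application
# may treat the network as "down" after two consecutive lost cycles.

def network_down(cycle_received):
    """cycle_received: list of booleans, one per communication cycle."""
    consecutive_losses = 0
    for received in cycle_received:
        consecutive_losses = 0 if received else consecutive_losses + 1
        if consecutive_losses >= 2:
            return True  # two or more consecutive losses: network "down"
    return False

assert not network_down([True, False, True, False, True])  # isolated losses
assert network_down([True, False, False, True])            # consecutive
```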
2749 As network downtime may impact the whole production system the
2750    required network availability is rather high (99.999%).
2752 Based on the above parameters we expect that some form of redundancy
2753 will be required for M2M communications, however any individual
2754 solution depends on several parameters including cycle time, delivery
2755 time, etc.
2757 7.2.2. Stream Creation and Destruction
2759 In an industrial environment, critical control/data streams are
2760 created rather infrequently, on the order of ~10 times per day / week
2761 / month. Most of these critical control/data streams get created at
2762 machine startup, however flexibility is also needed during runtime,
2763 for example when adding or removing a machine. Going forward as
2764 production systems become more flexible, we expect a significant
2765 increase in the rate at which streams are created, changed and
2766 destroyed.
2768 7.3. Industrial M2M Future
2770 We would like to see a converged IP-standards-based network with
2771 deterministic properties that can satisfy the timing, security and
2772 reliability constraints described above. Today's proprietary
2773 networks could then be interfaced to such a network via gateways or,
2774 in the case of new installations, devices could be connected directly
2775 to the converged network.
2777 For this use case we expect time synchronization accuracy on the
2778 order of 1us.
2780 7.4. Industrial M2M Asks
2782 o Converged IP-based network
2784    o  Deterministic behavior (bounded latency and jitter)
2786    o  High availability (presumably through redundancy) (99.999%)
2788    o  Low message delivery time (100us - 50ms)
2790    o  Low packet loss (burstless, 0.1-1%)
2792 o Security (e.g. prevent critical flows from being leaked between
2793 physically separated networks)
2795 8. Mining Industry
2797 8.1. Use Case Description
2799 The mining industry is highly dependent on networks to monitor and
2800 control their systems both in open-pit and underground extraction,
2801 transport and refining processes. In order to reduce risks and
2802 increase operational efficiency in mining operations, a number of
2803 processes have migrated the operators from the extraction site to
2804 remote control and monitoring.
2806    In the case of open pit mining, autonomous trucks are used to
2807    transport the raw materials from the open pit to the refining factory
2808    where the final product (e.g. copper) is obtained.  Although the
2809    operation is autonomous, the trucks are remotely monitored from a
2810    central facility.
2812 In pit mines, the monitoring of the tailings or mine dumps is
2813 critical in order to avoid any environmental pollution. In the past,
2814 monitoring has been conducted through manual inspection of pre-
2815    installed dataloggers.  Cabling is usually avoided in such scenarios
2816    due to the cost and complex deployment requirements.
2817 Currently, wireless technologies are being employed to monitor these
2818 cases permanently. Slopes are also monitored in order to anticipate
2819 possible mine collapse. Due to the unstable terrain, cable
2820 maintenance is costly and complex and hence wireless technologies are
2821 employed.
2823 In the underground monitoring case, autonomous vehicles with
2824 extraction tools travel autonomously through the tunnels, but their
2825 operational tasks (such as excavation, stone breaking and transport)
2826 are controlled remotely from a central facility. This generates
2827 video and feedback upstream traffic plus downstream actuator control
2828 traffic.
2830 8.2. Mining Industry Today
2832    Currently the mining industry uses a packet-switched architecture
2833    supported by high-speed Ethernet.  However, in order to achieve the
2834    delay and packet loss requirements, the network bandwidth is
2835    overprovisioned, thus providing very low efficiency in terms of
2836    resource usage.
2838 QoS is implemented at the Routers to separate video, management,
2839 monitoring and process control traffic for each stream.
2841 Since mobility is involved in this process, the connection between
2842 the backbone and the mobile devices (e.g. trucks, trains and
2843 excavators) is solved using a wireless link. These links are based
2844 on 802.11 for open-pit mining and leaky feeder for underground
2845 mining.
2847 Lately in pit mines the use of LPWAN technologies has been extended:
2848 Tailings, slopes and mine dumps are monitored by battery-powered
2849 dataloggers that make use of robust long range radio technologies.
2850 Reliability is usually ensured through retransmissions at L2.
2851 Gateways or concentrators act as bridges forwarding the data to the
2852    backbone Ethernet network.  Deterministic requirements are biased
2853 towards reliability rather than latency as events are slowly
2854 triggered or can be anticipated in advance.
2856 At the mineral processing stage, conveyor belts and refining
2857 processes are controlled by a SCADA system, which provides the in-
2858 factory delay-constrained networking requirements.
2860 Voice communications are currently served by a redundant trunking
2861 infrastructure, independent from current data networks.
2863 8.3. Mining Industry Future
2865 Mining operations and management are currently converging towards a
2866 combination of autonomous operation and teleoperation of transport
2867 and extraction machines. This means that video, audio, monitoring
2868 and process control traffic will increase dramatically. Ideally, all
2869 activities on the mine will rely on network infrastructure.
2871 Wireless for open-pit mining is already a reality with LPWAN
2872 technologies and it is expected to evolve to more advanced LPWAN
2873 technologies such as those based on LTE to increase last hop
2874 reliability or novel LPWAN flavours with deterministic access.
2876 One area in which DetNet can improve this use case is in the wired
2877 networks that make up the "backbone network" of the system, which
2878 connect together many wireless access points (APs). The mobile
2879 machines (which are connected to the network via wireless) transition
2880 from one AP to the next as they move about. A deterministic,
2881 reliable, low latency backbone can enable these transitions to be
2882 more reliable.
2884 Connections which extend all the way from the base stations to the
2885 machinery via a mix of wired and wireless hops would also be
2886 beneficial, for example to improve remote control responsiveness of
2887 digging machines. However to guarantee deterministic performance of
2888 a DetNet, the end-to-end underlying network must be deterministic.
2889 Thus for this use case if a deterministic wireless transport is
2890 integrated with a wire-based DetNet network, it could create the
2891 desired wired plus wireless end-to-end deterministic network.
2893 8.4. Mining Industry Asks
2895 o Improved bandwidth efficiency
2897 o Very low delay to enable machine teleoperation
2899 o Dedicated bandwidth usage for high resolution video streams
2901 o Predictable delay to enable realtime monitoring
2903 o Potential to construct a unified DetNet network over a combination
2904 of wired and deterministic wireless links
2906 9. Private Blockchain
2908 9.1. Use Case Description
2910    Blockchain was created with Bitcoin as a 'public' blockchain on the
2911    open Internet; however, blockchain has since spread far beyond its
2912    original host into various industries such as smart manufacturing,
2913    logistics, security, legal rights and others.  In these industries
2914    blockchain runs in a designated and carefully managed network, in
2915    which deterministic networking requirements could be addressed by
2916    DetNet.  Such implementations are referred to as 'private' blockchain.
2918 The sole distinction between public and private blockchain is related
2919 to who is allowed to participate in the network, execute the
2920 consensus protocol and maintain the shared ledger.
2922 Today's networks treat the traffic from blockchain on a best-effort
2923 basis, but blockchain operation could be made much more efficient if
2924 deterministic networking service were available to minimize latency
2925 and packet loss in the network.
2927 9.1.1. Blockchain Operation
2929    A 'block' serves as a container for a batch of primary items such as
2930    transactions, property records, etc.  The blocks are chained in such
2931    a way that the hash of the previous block works as the pointer header
2932    of the new block, and confirmation of each block requires a
2933    consensus mechanism.  When an item arrives at a blockchain node, the
2934    node broadcasts this item to the rest of the nodes, which receive and
2935    verify it and put it in the ongoing block.  The block confirmation
2936    process begins when the number of items reaches the predefined block
2937    capacity, at which point the node broadcasts its proved block to the
2938    rest of the nodes to be verified and chained.
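The chaining described above, in which each block carries the hash of its predecessor as a pointer header, can be sketched as follows (a minimal illustration with hypothetical names and encoding, not any particular blockchain's format):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's canonical (sorted-key) JSON encoding."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def new_block(items, prev_block=None):
    """Create a block whose header points at the previous block's hash."""
    prev = block_hash(prev_block) if prev_block else "0" * 64
    return {"prev_hash": prev, "items": items}

def verify_chain(chain):
    """Valid iff every block's prev_hash matches its predecessor's hash."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = new_block(["tx0"])
blk1 = new_block(["tx1", "tx2"], genesis)
assert verify_chain([genesis, blk1])
```

Tampering with any earlier block changes its hash and breaks the pointer in its successor, which is what makes the chain tamper-evident.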
2940 9.1.2. Blockchain Network Architecture
2942 Blockchain node communication and coordination is achieved mainly
2943 through frequent point to multi-point communication, however
2944 persistent point-to-point connections are used to transport both the
2945 items and the blocks to the other nodes.
2947    When a node initiates, it first requests the other nodes' addresses
2948    from a specific entity such as DNS, then it creates a persistent
2949    connection with each of the other nodes.  If node A confirms an item,
2950    it sends the item to the other nodes via the persistent connections.
2952    When a new block in a node completes and gets proved among the nodes,
2953    the node starts propagating this block towards its neighbor nodes.
2954    Assume node A receives a block: after verification it sends an
2955    inventory message to its neighbor B; B checks whether it already has
2956    the designated block, and if not, responds with a get message to A,
2957    upon which A sends the complete block to B.  B then repeats the
2958    process as A did, starting the next round of block propagation.
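The announce/fetch exchange described above can be sketched as follows (the class, message names, and flow are illustrative assumptions, not a real blockchain wire protocol):

```python
# Sketch of the inventory/get block-propagation exchange: a node
# announces a new block to its neighbors, and a neighbor fetches the
# full block only if it does not already have it.

class Node:
    def __init__(self, name):
        self.name = name
        self.blocks = {}        # block_id -> block payload
        self.neighbors = []

    def receive_block(self, block_id, payload):
        """On receiving (and verifying) a block, announce it onward."""
        if block_id in self.blocks:
            return  # already have it; stop propagation here
        self.blocks[block_id] = payload
        for nb in self.neighbors:
            nb.on_inventory(block_id, sender=self)

    def on_inventory(self, block_id, sender):
        """A neighbor announced a block; fetch it only if we lack it."""
        if block_id not in self.blocks:
            payload = sender.serve_get(block_id)
            self.receive_block(block_id, payload)  # repeat as sender did

    def serve_get(self, block_id):
        return self.blocks[block_id]

a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors, b.neighbors = [b], [c]
a.receive_block("blk1", b"...block bytes...")
assert "blk1" in c.blocks  # propagated A -> B -> C
```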
2960    The challenge of blockchain network operation is not the overall data
2961    rate, since the volume from both blocks and items stays between
2962    hundreds of bytes and a couple of megabytes per second, but rather
2963    transporting the blocks with minimum latency to maximize efficiency
2964    of the blockchain consensus process.
2966 9.1.3. Security Considerations
2968    Security is crucial to blockchain applications, and today blockchain
2969    addresses its security issues mainly at the application level, where
2970    cryptography as well as hash-based consensus play a leading role in
2971    preventing both double-spending and malicious service attacks.
2972 However, there is concern that in the proposed use case of a private
2973 blockchain network which is dependent on deterministic properties,
2974 the network could be vulnerable to delays and other specific attacks
2975 against determinism which could interrupt service.
2977 9.2. Private Blockchain Today
2979    Today private blockchains run over L2 or L3 VPNs, in general without
2980 guaranteed determinism. The industry players are starting to realize
2981 that improving determinism in their blockchain networks could improve
2982 the performance of their service, but as of today these goals are not
2983 being met.
2985 9.3. Private Blockchain Future
2987 Blockchain system performance can be greatly improved through
2988 deterministic networking service primarily because it would
2989 accelerate the consensus process. It would be valuable to be able to
2990 design a private blockchain network with the following properties:
2992 o Transport of point to multi-point traffic in a coordinated network
2993 architecture rather than at the application layer (which typically
2994 uses point-to-point connections)
2996 o Guaranteed transport latency
2998 o Reduced packet loss (to the point where packet retransmission-
2999 incurred delay would be negligible.)
3001 9.4. Private Blockchain Asks
3003 o Layer 2 and Layer 3 multicast of blockchain traffic
3005 o Item and block delivery with bounded, low latency and negligible
3006 packet loss
3008 o Coexistence in a single network of blockchain and IT traffic.
3010 o Ability to scale the network by distributing the centralized
3011 control of the network across multiple control entities.
3013 10. Network Slicing
3015 10.1. Use Case Description
3017 Network Slicing divides one physical network infrastructure into
3018 multiple logical networks. Each slice, corresponding to a logical
3019 network, uses resources and network functions independently from each
3020 other. Network Slicing provides flexibility of resource allocation
3021 and service quality customization.
3023 Future services will demand network performance with a wide variety
3024 of characteristics such as high data rate, low latency, low loss
3025 rate, security and many other parameters. Ideally every service
3026 would have its own physical network satisfying its particular
3027 performance requirements, however that would be prohibitively
3028 expensive. Network Slicing can provide a customized slice for a
3029 single service, and multiple slices can share the same physical
3030 network. This method can optimize the performance for the service at
3031    lower cost, and the flexibility of setting up and releasing slices
3032    also allows the user to allocate network resources dynamically.
3034 Unlike the other use cases presented here, Network Slicing is not a
3035 specific application that depends on specific deterministic
3036 properties; rather it is introduced as an area of networking to which
3037 DetNet might be applicable.
3039 10.2. DetNet Applied to Network Slicing
3041 10.2.1. Resource Isolation Across Slices
3043 One of the requirements discussed for Network Slicing is the "hard"
3044 separation of various users' deterministic performance. That is, it
3045 should be impossible for activity, lack of activity, or changes in
3046 activity of one or more users to have any appreciable effect on the
3047 deterministic performance parameters of any other slices. Typical
3048 techniques used today, which share a physical network among users, do
3049 not offer this level of isolation. DetNet can supply point-to-point
3050 or point-to-multipoint paths that offer bandwidth and latency
3051 guarantees to a user that cannot be affected by other users' data
3052 traffic. Thus DetNet is a powerful tool when latency and reliability
3053 are required in Network Slicing.
3055 10.2.2. Deterministic Services Within Slices
3057 Slices may need to provide services with DetNet-type performance
3058 guarantees, however we note that a system can be implemented to
3059 provide such services in more than one way. For example the slice
3060 itself might be implemented using DetNet, and thus the slice can
3061 provide service guarantees and isolation to its users without any
3062 particular DetNet awareness on the part of the users' applications.
3063 Alternatively, a "non-DetNet-aware" slice may host an application
3064 that itself implements DetNet services and thus can enjoy similar
3065 service guarantees.
3067 10.3. A Network Slicing Use Case Example - 5G Bearer Network
3069 Network Slicing is a core feature of 5G defined in 3GPP, which is
3070 currently under development. A network slice in a mobile network is
3071 a complete logical network including Radio Access Network (RAN) and
3072 Core Network (CN). It provides telecommunication services and
3073 network capabilities, which may vary from slice to slice. A 5G
3074 bearer network is a typical use case of Network Slicing; for example
3075    consider three 5G service scenarios: eMBB, URLLC, and mMTC.
3077 o eMBB (Enhanced Mobile Broadband) focuses on services characterized
3078 by high data rates, such as high definition videos, virtual
3079 reality, augmented reality, and fixed mobile convergence.
3081 o URLLC (Ultra-Reliable and Low Latency Communications) focuses on
3082 latency-sensitive services, such as self-driving vehicles, remote
3083 surgery, or drone control.
3085 o mMTC (massive Machine Type Communications) focuses on services
3086 that have high requirements for connection density, such as those
3087 typical for smart city and smart agriculture use cases.
3089 A 5G bearer network could use DetNet to provide hard resource
3090 isolation across slices and within the slice. For example consider
3091 Slice-A and Slice-B, with DetNet used to transit services URLLC-A and
3092    URLLC-B over them.  Without DetNet, URLLC-A and URLLC-B would compete
3093    for bandwidth resources, and latency and reliability would not be
3094 guaranteed. With DetNet, URLLC-A and URLLC-B have separate bandwidth
3095 reservation and there is no resource conflict between them, as though
3096 they were in different logical networks.
3098 10.4. Non-5G Applications of Network Slicing
3100 Although operation of services not related to 5G is not part of the
3101 5G Network Slicing definition and scope, Network Slicing is likely to
3102 become a preferred approach to providing various services across a
3103 shared physical infrastructure. Examples include providing
3104 electrical utilities services and pro audio services via slices. Use
3105 cases like these could become more common once the work for the 5G
3106 core network evolves to include wired as well as wireless access.
3108 10.5. Limitations of DetNet in Network Slicing
3110 DetNet cannot cover every Network Slicing use case. One issue is
3111 that DetNet is a point-to-point or point-to-multipoint technology,
3112 however Network Slicing ultimately needs multi-point to multi-point
3113 guarantees. Another issue is that the number of flows that can be
3114 carried by DetNet is limited by DetNet scalability; flow aggregation
3115 and queuing management modification may help address this.
3116 Additional work and discussion are needed to address these topics.
3118 10.6. Network Slicing Today and Future
3120 Network Slicing has the promise to satisfy many requirements of
3121 future network deployment scenarios, but it is still a collection of
3122 ideas and analysis, without a specific technical solution. DetNet is
3123 one of various technologies that have potential to be used in Network
3124 Slicing, along with for example Flex-E and Segment Routing. For more
3125 information please see the IETF99 Network Slicing BOF session agenda
3126 and materials.
3128 10.7. Network Slicing Asks
3130 o Isolation from other flows through Queuing Management
3132 o Service Quality Customization and Guarantee
3134 o Security
3136 11. Use Case Common Themes
3138 This section summarizes the expected properties of a DetNet network,
3139 based on the use cases as described in this draft.
3141 11.1. Unified, standards-based network
3143 11.1.1. Extensions to Ethernet
3145    A DetNet network is not "a new kind of network" - it is based on
3146 extensions to existing Ethernet standards, including elements of IEEE
3147 802.1 AVB/TSN and related standards. Presumably it will be possible
3148 to run DetNet over other underlying transports besides Ethernet, but
3149 Ethernet is explicitly supported.
3151 11.1.2. Centrally Administered
3153 In general a DetNet network is not expected to be "plug and play" -
3154 it is expected that there is some centralized network configuration
3155 and control system. Such a system may be in a single central
3156    location, or it may be distributed across multiple control entities
3157 that function together as a unified control system for the network.
3158 However, the ability to "hot swap" components (e.g. due to
3159 malfunction) is similar enough to "plug and play" that this kind of
3160 behavior may be expected in DetNet networks, depending on the
3161 implementation.
3163 11.1.3. Standardized Data Flow Information Models
3165 Data Flow Information Models to be used with DetNet networks are to
3166 be specified by DetNet.
3168 11.1.4. L2 and L3 Integration
3170 A DetNet network is intended to integrate between Layer 2 (bridged)
3171 network(s) (e.g. AVB/TSN LAN) and Layer 3 (routed) network(s) (e.g.
3172 using IP-based protocols). One example of this is "making AVB/TSN-
3173 type deterministic performance available from Layer 3 applications,
3174 e.g. using RTP". Another example is "connecting two AVB/TSN LANs
3175 ("islands") together through a standard router".
3177 11.1.5. Consideration for IPv4
3179 This Use Cases draft explicitly does not specify any particular
3180 implementation or protocol, however it has been observed that various
3181 of the use cases described (and their associated industries) are
3182 explicitly based on IPv4 (as opposed to IPv6) and it is not
3183 considered practical to expect them to migrate to IPv6 in order to
3184 use DetNet. Thus the expectation is that even if not every feature
3185 of DetNet is available in an IPv4 context, at least some of the
3186 significant benefits (such as guaranteed end-to-end delivery and low
3187 latency) are expected to be available.
3189 11.1.6. Guaranteed End-to-End Delivery
3191 Packets sent over DetNet are guaranteed not to be dropped by the
3192 network due to congestion. However, the network may drop packets for
3193 intended reasons, e.g. per security measures. Also note that this
3194 guarantee applies to the actions of DetNet protocol software, and
3195 does not provide any guarantee against lower level errors such as
3196 media errors or checksum errors.
3198 11.1.7. Replacement for Multiple Proprietary Deterministic Networks
3200 There are many proprietary non-interoperable deterministic Ethernet-
3201 based networks currently available; DetNet is intended to provide an
3202 open-standards-based alternative to such networks.
3204 11.1.8. Mix of Deterministic and Best-Effort Traffic
3206    DetNet is intended to support coexistence of time-sensitive
3207    operational technology (OT) traffic and information technology (IT)
3208    traffic on the same ("unified") network.
3210 11.1.9. Unused Reserved BW to be Available to Best Effort Traffic
3212 If bandwidth reservations are made for a stream but the associated
3213 bandwidth is not used at any point in time, that bandwidth is made
3214 available on the network for best-effort traffic. If the owner of
3215 the reserved stream then starts transmitting again, the bandwidth is
3216 no longer available for best-effort traffic, on a moment-to-moment
3217 basis. Note that such "temporarily available" bandwidth is not
3218 available for time-sensitive traffic, which must have its own
3219 reservation.
3221 11.1.10. Lower Cost, Multi-Vendor Solutions
3223 The DetNet network specifications are intended to enable an ecosystem
3224 in which multiple vendors can create interoperable products, thus
3225 promoting device diversity and potentially higher numbers of each
3226 device manufactured, promoting cost reduction and cost competition
3227 among vendors. The intent is that DetNet networks should be able to
3228 be created at lower cost and with greater diversity of available
3229 devices than existing proprietary networks.
3231 11.2. Scalable Size
3233 DetNet networks range in size from very small, e.g. inside a single
3234 industrial machine, to very large, for example a Utility Grid network
3235 spanning a whole country, and involving many "hops" over various
3236    kinds of links, for example radio repeaters, microwave links, fiber
3237    optic links, etc.  However, recall that the scope of DetNet is
3238 confined to networks that are centrally administered, and explicitly
3239 excludes unbounded decentralized networks such as the Internet.
3241 11.3. Scalable Timing Parameters and Accuracy
3243 11.3.1. Bounded Latency
3245 The DetNet Data Flow Information Model is expected to provide means
3246 to configure the network that include parameters for querying network
3247 path latency, requesting bounded latency for a given stream,
3248 requesting worst case maximum and/or minimum latency for a given path
3249 or stream, and so on. It is an expected case that the network may
3250    not be able to provide a given requested service level, and if so the
3251    network control system should reply that the requested service is
3252 not available (as opposed to accepting the parameter but then not
3253 delivering the desired behavior).
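The explicit accept-or-reject behavior described above can be sketched as a simple admission check (the parameter names and data model here are hypothetical illustrations; the actual DetNet information model is defined in the DetNet working group documents):

```python
# Minimal admission-control sketch: a stream reservation request is
# either granted within the path's capabilities or explicitly rejected,
# never silently accepted and then unmet. Names are illustrative only.

def admit(request, path):
    """Return (True, reservation) on success or (False, reason)."""
    if request["max_latency_us"] < path["min_achievable_latency_us"]:
        return False, "requested latency bound not achievable on this path"
    if request["bandwidth_mbps"] > path["available_bandwidth_mbps"]:
        return False, "insufficient reservable bandwidth"
    path["available_bandwidth_mbps"] -= request["bandwidth_mbps"]
    return True, {"latency_bound_us": request["max_latency_us"],
                  "bandwidth_mbps": request["bandwidth_mbps"]}

path = {"min_achievable_latency_us": 200, "available_bandwidth_mbps": 100}
ok, result = admit({"max_latency_us": 500, "bandwidth_mbps": 40}, path)
assert ok
ok, reason = admit({"max_latency_us": 100, "bandwidth_mbps": 10}, path)
assert not ok  # explicit rejection rather than silent degradation
```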
3255 11.3.2. Low Latency
3257 Applications may require "extremely low latency" however depending on
3258 the application these may mean very different latency values; for
3259 example "low latency" across a Utility grid network is on a different
3260 time scale than "low latency" in a motor control loop in a small
3261 machine. The intent is that the mechanisms for specifying desired
3262 latency include wide ranges, and that architecturally there is
3263    nothing to prevent arbitrarily low latencies from being implemented
3264 in a given network.
3266 11.3.3. Symmetrical Path Delays
3268 Some applications would like to specify that the transit delay time
3269 values be equal for both the transmit and return paths.
3271 11.4. High Reliability and Availability
3273    Reliability is of critical importance to many DetNet applications, in
3274 which consequences of failure can be extraordinarily high in terms of
3275 cost and even human life. DetNet based systems are expected to be
3276 implemented with essentially arbitrarily high availability (for
3277 example 99.9999% up time, or even 12 nines). The intent is that the
3278 DetNet designs should not make any assumptions about the level of
3279 reliability and availability that may be required of a given system,
3280 and should define parameters for communicating these kinds of metrics
3281 within the network.
3283 A strategy used by DetNet for providing such extraordinarily high
3284 levels of reliability is to provide redundant paths that can be
3285 seamlessly switched between, while maintaining the required
3286 performance of that system.
3288 11.5. Security
3290 Security is of critical importance to many DetNet applications. A
3291    DetNet network must be able to be made secure against device
3292    failures, attackers, misbehaving devices, and so on.  In a DetNet
3293    network the data traffic is expected to be time-sensitive, thus in
3294 addition to arriving with the data content as intended, the data must
3295 also arrive at the expected time. This may present "new" security
3296 challenges to implementers, and must be addressed accordingly. There
3297 are other security implications, including (but not limited to) the
3298 change in attack surface presented by packet replication and
3299 elimination.
3301 11.6. Deterministic Flows
3303 Reserved bandwidth data flows must be isolated from each other and
3304 from best-effort traffic, so that even if the network is saturated
3305 with best-effort (and/or reserved bandwidth) traffic, the configured
3306 flows are not adversely affected.
3308 12. Use Cases Explicitly Out of Scope for DetNet
3310 This section contains use case text that has been determined to be
3311 outside of the scope of the present DetNet work.
3313 12.1. DetNet Scope Limitations
3315 The scope of DetNet is deliberately limited to specific use cases
3316 that are consistent with the WG charter, subject to the
3317 interpretation of the WG. At the time the DetNet Use Cases were
3318 solicited and provided by the authors the scope of DetNet was not
3319 clearly defined, and as that clarity has emerged, certain of the use
3320 cases have been determined to be outside the scope of the present
3321 DetNet work. Such text has been moved into this section to clarify
3322 that these use cases will not be supported by the DetNet work.
3324 The text in this section was moved here based on the following
3325 "exclusion" principles; as an alternative to moving such text to
3326 this section, some draft text has instead been modified in situ to
3327 reflect these same principles.
3329 The following principles have been established to clarify the scope
3330 of the present DetNet work.
3332 o The scope of the networks addressed by DetNet is limited to
3333 those that can be centrally controlled, i.e. an "enterprise" aka
3334 "corporate" network. This explicitly excludes "the open
3335 Internet".
3337 o Maintaining synchronized time across a DetNet network is crucial
3338 to its operation; however, DetNet assumes that time is to be
3339 maintained using other means, for example (but not limited to)
3340 Precision Time Protocol ([IEEE1588]). A use case may state the
3341 accuracy and reliability that it expects from the DetNet network
3342 as part of a whole system, however it is understood that such
3343 timing properties are not guaranteed by DetNet itself. It is
3344 currently an open question as to whether DetNet protocols will
3345 include a way for an application to communicate such timing
3346 expectations to the network, and if so whether they would be
3347 expected to materially affect the performance they would receive
3348 from the network as a result.
3350 12.2. Internet-based Applications
3352 There are many applications that communicate over the open Internet
3353 that could benefit from guaranteed delivery and bounded latency.
3354 However as noted above, all such applications when run over the open
3355 Internet are out of scope for DetNet. These same applications may be
3356 in-scope when run in constrained environments, i.e. within a
3357 centrally controlled DetNet network. The following are some examples
3358 of such applications.
3360 12.2.1. Use Case Description
3362 12.2.1.1. Media Content Delivery
3364 Media content delivery continues to be an important use of the
3365 Internet, yet users often experience poor quality audio and video due
3366 to the delay and jitter inherent in today's Internet.
3368 12.2.1.2. Online Gaming
3370 Online gaming is a significant part of the gaming market, however
3371 latency can degrade the end user experience. For example "First
3372 Person Shooter" games are highly delay-sensitive.
3374 12.2.1.3. Virtual Reality
3376 Virtual reality has many commercial applications including real
3377 estate presentations, remote medical procedures, and so on. Low
3378 latency is critical to interacting with the virtual world because
3379 perceptual delays can cause motion sickness.
3381 12.2.2. Internet-Based Applications Today
3383 Internet service today is by definition "best effort", with no
3384 guarantees on delivery or bandwidth.
3386 12.2.3. Internet-Based Applications Future
3388 We imagine an Internet over which we will be able to play video
3389 without glitches and play games without lag.
3391 For online gaming, the maximum tolerable round-trip delay is on the
3392 order of 100ms, and stricter (10-50ms) for FPS gaming. Transport
3393 delay is the dominant component, with a budget of 5-20ms.
3395 For VR, a maximum delay of 1-10ms is needed, with a total network
3396 budget of 1-5ms for remote VR.
3398 Flow identification can be used for gaming and VR, i.e. it can
3399 recognize a critical flow and provide appropriate latency bounds.
3401 12.2.4. Internet-Based Applications Asks
3403 o Unified control and management protocols to handle time-critical
3404 data flow
3406 o Application-aware flow filtering mechanism to recognize
3407 timing-critical flows without doing 5-tuple matching
3409 o Unified control plane to provide low latency service on Layer-3
3410 without changing the data plane
3412 o OAM system and protocols which can help to provide E2E-delay
3413 sensitive service provisioning
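The "application-aware flow filtering" ask above can be illustrated by classifying on a single header field rather than a full 5-tuple; the sketch below uses the DSCP codepoint, which is only one hypothetical choice of field:

```python
# Sketch of flow recognition by a single header field (here the DSCP
# codepoint, a hypothetical choice) rather than a full 5-tuple
# (src, dst, protocol, sport, dport) match.

EF = 46  # Expedited Forwarding DSCP, commonly used for latency-critical traffic

def is_time_critical(packet):
    """Classify on one field instead of matching the full 5-tuple."""
    return packet.get("dscp") == EF

game_pkt = {"dscp": 46, "payload": b"fps-update"}
bulk_pkt = {"dscp": 0, "payload": b"file-chunk"}
assert is_time_critical(game_pkt)
assert not is_time_critical(bulk_pkt)
```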
3415 12.3. Pro Audio and Video - Digital Rights Management (DRM)
3417 This section was moved here because this is considered a Link layer
3418 topic, not direct responsibility of DetNet.
3420 Digital Rights Management (DRM) is very important to the audio and
3421 video industries. Any time protected content is introduced into a
3422 network there are DRM concerns that must be maintained (see
3423 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of
3424 network technology, however there are cases when a secure link
3425 supporting authentication and encryption is required by content
3426 owners to carry their audio or video content when it is outside their
3427 own secure environment (for example see [DCI]).
3429 As an example, two techniques are Digital Transmission Content
3430 Protection (DTCP) and High-Bandwidth Digital Content Protection
3431 (HDCP). HDCP content is not approved for retransmission within any
3432 other type of DRM, while DTCP may be retransmitted under HDCP.
3433 Therefore if the source of a stream is outside of the network and it
3434 uses HDCP protection it is only allowed to be placed on the network
3435 with that same HDCP protection.
3437 12.4. Pro Audio and Video - Link Aggregation
3439 Note: The term "Link Aggregation" is used here as defined by the text
3440 in the following paragraph, i.e. not following a more common Network
3441 Industry definition. Current WG consensus is that this item won't be
3442 directly supported by the DetNet architecture, for example because it
3443 implies a guarantee of in-order delivery of packets, which conflicts
3444 with the core goal of achieving the lowest possible latency.
3446 For transmitting streams that require more bandwidth than a single
3447 link in the target network can support, link aggregation is a
3448 technique for combining (aggregating) the bandwidth available on
3449 multiple physical links to create a single logical link of the
3450 required bandwidth. However, if aggregation is to be used, the
3451 network controller (or equivalent) must be able to determine the
3452 maximum latency of any path through the aggregate link.
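The controller-side check described above can be sketched as follows (assumed data model; the member names and per-link latencies are hypothetical): since any given packet may traverse any member link, the aggregate's worst-case latency is the maximum over its members, while its bandwidth is at best their sum.

```python
# Sketch of the controller-side aggregate-link check (assumed data model,
# values hypothetical): worst-case latency is the maximum over member
# links, bandwidth is at best the sum of the members'.

def aggregate_latency_us(member_latencies_us):
    """Maximum latency of any path through the aggregate link."""
    return max(member_latencies_us)

def aggregate_bandwidth_mbps(member_bandwidths_mbps):
    """Aggregate bandwidth is (at best) the sum of the members'."""
    return sum(member_bandwidths_mbps)

members = {"eth0": 120, "eth1": 95, "eth2": 240}   # per-link latency in us
assert aggregate_latency_us(members.values()) == 240
assert aggregate_bandwidth_mbps([1000, 1000, 1000]) == 3000
```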
3454 13. Contributors
3456 RFC7322 limits the number of authors listed on the front page of a
3457 draft to a maximum of 5, far fewer than the 20 individuals below who
3458 made important contributions to this draft. The editor wishes to
3459 thank and acknowledge each of the following authors for contributing
3460 text to this draft. See also Section 14.
3462 Craig Gunther (Harman International)
3463 10653 South River Front Parkway, South Jordan,UT 84095
3464 phone +1 801 568-7675, email craig.gunther@harman.com
3466 Pascal Thubert (Cisco Systems, Inc)
3467 Building D, 45 Allee des Ormes - BP1200, MOUGINS
3468 Sophia Antipolis 06254 FRANCE
3469 phone +33 497 23 26 34, email pthubert@cisco.com
3471 Patrick Wetterwald (Cisco Systems)
3472 45 Allees des Ormes, Mougins, 06250 FRANCE
3473 phone +33 4 97 23 26 36, email pwetterw@cisco.com
3475 Jean Raymond (Hydro-Quebec)
3476 1500 University, Montreal, H3A3S7, Canada
3477 phone +1 514 840 3000, email raymond.jean@hydro.qc.ca
3479 Jouni Korhonen (Broadcom Corporation)
3480 3151 Zanker Road, San Jose, 95134, CA, USA
3481 email jouni.nospam@gmail.com
3483 Yu Kaneko (Toshiba)
3484 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi, Kanagawa, Japan
3485 email yu1.kaneko@toshiba.co.jp
3487 Subir Das (Vencore Labs)
3488 150 Mount Airy Road, Basking Ridge, New Jersey, 07920, USA
3489 email sdas@appcomsci.com
3491 Balazs Varga (Ericsson)
3492 Konyves Kalman krt. 11/B, Budapest, Hungary, 1097
3493 email balazs.a.varga@ericsson.com
3494 Janos Farkas (Ericsson)
3495 Konyves Kalman krt. 11/B, Budapest, Hungary, 1097
3496 email janos.farkas@ericsson.com
3498 Franz-Josef Goetz (Siemens)
3499 Gleiwitzerstr. 555, Nurnberg, Germany, 90475
3500 email franz-josef.goetz@siemens.com
3502 Juergen Schmitt (Siemens)
3503 Gleiwitzerstr. 555, Nurnberg, Germany, 90475
3504 email juergen.jues.schmitt@siemens.com
3506 Xavier Vilajosana (Worldsensing)
3507 483 Arago, Barcelona, Catalonia, 08013, Spain
3508 email xvilajosana@worldsensing.com
3510 Toktam Mahmoodi (King's College London)
3511 Strand, London WC2R 2LS, United Kingdom
3512 email toktam.mahmoodi@kcl.ac.uk
3514 Spiros Spirou (Intracom Telecom)
3515 19.7 km Markopoulou Ave., Peania, Attiki, 19002, Greece
3516 email spiros.spirou@gmail.com
3518 Petra Vizarreta (Technical University of Munich)
3519 Maxvorstadt, Arcisstraße 21, Munich, 80333, Germany
3520 email petra.stojsavljevic@tum.de
3522 Daniel Huang (ZTE Corporation, Inc.)
3523 No. 50 Software Avenue, Nanjing, Jiangsu, 210012, P.R. China
3524 email huang.guangping@zte.com.cn
3526 Xuesong Geng (Huawei Technologies)
3527 email gengxuesong@huawei.com
3529 Diego Dujovne (Universidad Diego Portales)
3530 email diego.dujovne@mail.udp.cl
3532 Maik Seewald (Cisco Systems)
3533 email maseewal@cisco.com
3535 14. Acknowledgments
3537 14.1. Pro Audio
3539 This section was derived from draft-gunther-detnet-proaudio-req-01.
3541 The editors would like to acknowledge the help of the following
3542 individuals and the companies they represent:
3544 Jeff Koftinoff, Meyer Sound
3546 Jouni Korhonen, Associate Technical Director, Broadcom
3548 Pascal Thubert, CTAO, Cisco
3550 Kieran Tyrrell, Sienda New Media Technologies GmbH
3552 14.2. Utility Telecom
3554 This section was derived from draft-wetterwald-detnet-utilities-reqs-
3555 02.
3557 Faramarz Maghsoodlou, Ph.D., IoT Connected Industries and Energy
3558 Practice, Cisco
3560 Pascal Thubert, CTAO, Cisco
3562 14.3. Building Automation Systems
3564 This section was derived from draft-bas-usecase-detnet-00.
3566 14.4. Wireless for Industrial
3568 This section was derived from draft-thubert-6tisch-4detnet-01.
3570 This specification derives from the 6TiSCH architecture, which is the
3571 result of multiple interactions, in particular during the 6TiSCH
3572 (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at
3573 the IETF.
3575 The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier
3576 Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael
3577 Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon,
3578 Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey,
3579 Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria
3580 Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation
3581 and various contributions.
3583 14.5. Cellular Radio
3585 This section was derived from draft-korhonen-detnet-telreq-00.
3587 14.6. Industrial M2M
3589 The authors would like to thank Feng Chen and Marcel Kiessling for
3590 their comments and suggestions.
3592 14.7. Internet Applications and CoMP
3594 This section was derived from draft-zha-detnet-use-case-00 by Yiyong
3595 Zha.
3597 This document has benefited from reviews, suggestions, comments and
3598 proposed text provided by the following members, listed in
3599 alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oliver
3600 Huang.
3602 14.8. Electrical Utilities
3604 The wind power generation use case has been extracted from the study
3605 of Wind Farms conducted within the 5GPPP Virtuwind Project. The
3606 project is funded by the European Union's Horizon 2020 research and
3607 innovation programme under grant agreement No 671648 (VirtuWind).
3609 14.9. Network Slicing
3611 This section was written by Xuesong Geng, who would like to
3612 acknowledge Norm Finn and Mach Chen for their useful comments.
3614 14.10. Mining
3616 This section was written by Diego Dujovne in conjunction with Xavier
3617 Vilajosana.
3619 14.11. Private Blockchain
3621 This section was written by Daniel Huang.
3623 15. Informative References
3625 [ACE] IETF, "Authentication and Authorization for Constrained
3626 Environments",
3627 .
3629 [Ahm14] Ahmed, M. and R. Kim, "Communication network architectures
3630 for smart-wind power farms.", Energies, p. 3900-3921. ,
3631 June 2014.
3633 [bacnetip]
3634 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP",
3635 January 1999.
3637 [CCAMP] IETF, "Common Control and Measurement Plane",
3638 .
3640 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND
3641 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_
3642 and_Enhancement_v2.0, March 2015,
3643 .
3646 [CONTENT_PROTECTION]
3647 Olsen, D., "1722a Content Protection", 2012,
3648 .
3651 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI);
3652 Interface Specification", CPRI Specification V6.1, July
3653 2014, .
3656 [CPRI-transp]
3657 CPRI TWG, "CPRI requirements for Ethernet Fronthaul",
3658 November 2015,
3659 .
3662 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification,
3663 Version 1.2", 2012, .
3665 [DICE] IETF, "DTLS In Constrained Environments",
3666 .
3668 [EA12] Evans, P. and M. Annunziata, "Industrial Internet: Pushing
3669 the Boundaries of Minds and Machines", November 2012.
3671 [eCPRI] IEEE Standards Association, "Common Public Radio
3672 Interface: eCPRI Interface Specification V1.0", 2017, .
3675 [ESPN_DC2]
3676 Daley, D., "ESPN's DC2 Scales AVB Large", 2014,
3677 .
3680 [flnet] Japan Electrical Manufacturers Association, "JEMA 1479 -
3681 English Edition", September 2012.
3683 [Fronthaul]
3684 Chen, D. and T. Mustala, "Ethernet Fronthaul
3685 Considerations", IEEE 1904.3, February 2015,
3686 .
3689 [HART] www.hartcomm.org, "Highway Addressable Remote Transducer,
3690 a group of specifications for industrial process and
3691 control devices administered by the HART Foundation".
3693 [I-D.ietf-6tisch-6top-interface]
3694 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
3695 (6top) Interface", draft-ietf-6tisch-6top-interface-04
3696 (work in progress), July 2015.
3698 [I-D.ietf-6tisch-architecture]
3699 Thubert, P., "An Architecture for IPv6 over the TSCH mode
3700 of IEEE 802.15.4", draft-ietf-6tisch-architecture-14 (work
3701 in progress), April 2018.
3703 [I-D.ietf-6tisch-coap]
3704 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and
3705 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work
3706 in progress), March 2015.
3708 [I-D.ietf-6tisch-terminology]
3709 Palattella, M., Thubert, P., Watteyne, T., and Q. Wang,
3710 "Terms Used in IPv6 over the TSCH mode of IEEE 802.15.4e",
3711 draft-ietf-6tisch-terminology-10 (work in progress), March
3712 2018.
3714 [I-D.ietf-detnet-architecture]
3715 Finn, N., Thubert, P., Varga, B., and J. Farkas,
3716 "Deterministic Networking Architecture", draft-ietf-
3717 detnet-architecture-05 (work in progress), May 2018.
3719 [I-D.ietf-detnet-problem-statement]
3720 Finn, N. and P. Thubert, "Deterministic Networking Problem
3721 Statement", draft-ietf-detnet-problem-statement-05 (work
3722 in progress), June 2018.
3724 [I-D.ietf-ipv6-multilink-subnets]
3725 Thaler, D. and C. Huitema, "Multi-link Subnet Support in
3726 IPv6", draft-ietf-ipv6-multilink-subnets-00 (work in
3727 progress), July 2002.
3729 [I-D.ietf-mpls-residence-time]
3730 Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S.,
3731 and S. Vainshtein, "Residence Time Measurement in MPLS
3732 network", draft-ietf-mpls-residence-time-15 (work in
3733 progress), March 2017.
3735 [I-D.ietf-roll-rpl-industrial-applicability]
3736 Phinney, T., Thubert, P., and R. Assimiti, "RPL
3737 applicability in industrial networks", draft-ietf-roll-
3738 rpl-industrial-applicability-02 (work in progress),
3739 October 2013.
3741 [I-D.ietf-tictoc-1588overmpls]
3742 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L.
3743 Montini, "Transporting Timing messages over MPLS
3744 Networks", draft-ietf-tictoc-1588overmpls-07 (work in
3745 progress), October 2015.
3747 [I-D.kh-spring-ip-ran-use-case]
3748 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing
3749 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02
3750 (work in progress), November 2014.
3752 [I-D.svshah-tsvwg-deterministic-forwarding]
3753 Shah, S. and P. Thubert, "Deterministic Forwarding PHB",
3754 draft-svshah-tsvwg-deterministic-forwarding-04 (work in
3755 progress), August 2015.
3757 [I-D.thubert-6lowpan-backbone-router]
3758 Thubert, P., "6LoWPAN Backbone Router", draft-thubert-
3759 6lowpan-backbone-router-03 (work in progress), February
3760 2013.
3762 [I-D.wang-6tisch-6top-sublayer]
3763 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
3764 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in
3765 progress), November 2015.
3767 [IEC-60870-5-104]
3768 International Electrotechnical Commission, "International
3769 Standard IEC 60870-5-104: Network access for IEC
3770 60870-5-101 using standard transport profiles", June 2006.
3772 [IEC61400]
3773 "International standard 61400-25: Communications for
3774 monitoring and control of wind power plants", June 2013.
3776 [IEC61850-90-12]
3777 TC57 WG10, IEC., "IEC 61850-90-12 TR: Communication
3778 networks and systems for power utility automation - Part
3779 90-12: Wide area network engineering guidelines", 2015.
3781 [IEC62439-3:2012]
3782 TC65, IEC., "IEC 62439-3: Industrial communication
3783 networks - High availability automation networks - Part 3:
3784 Parallel Redundancy Protocol (PRP) and High-availability
3785 Seamless Redundancy (HSR)", 2012.
3787 [IEEE1588]
3788 IEEE, "IEEE Standard for a Precision Clock Synchronization
3789 Protocol for Networked Measurement and Control Systems",
3790 IEEE Std 1588-2008, 2008,
3791 .
3794 [IEEE1646]
3795 "Communication Delivery Time Performance Requirements for
3796 Electric Power Substation Automation", IEEE Standard
3797 1646-2004 , Apr 2004.
3799 [IEEE1722]
3800 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport
3801 Protocol for Time Sensitive Applications in a Bridged
3802 Local Area Network", IEEE Std 1722-2011, 2011,
3803 .
3806 [IEEE19143]
3807 IEEE Standards Association, "P1914.3/D3.1 Draft Standard
3808 for Radio over Ethernet Encapsulations and Mappings",
3809 IEEE 1914.3, 2018,
3810 .
3812 [IEEE802.1TSNTG]
3813 IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3814 Networks Task Group", March 2013,
3815 .
3817 [IEEE802154]
3818 IEEE standard for Information Technology, "IEEE std.
3819 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC)
3820 and Physical Layer (PHY) Specifications for Low-Rate
3821 Wireless Personal Area Networks".
3823 [IEEE802154e]
3824 IEEE standard for Information Technology, "IEEE standard
3825 for Information Technology, IEEE std. 802.15.4, Part.
3826 15.4: Wireless Medium Access Control (MAC) and Physical
3827 Layer (PHY) Specifications for Low-Rate Wireless Personal
3828 Area Networks, June 2011 as amended by IEEE std.
3829 802.15.4e, Part. 15.4: Low-Rate Wireless Personal Area
3830 Networks (LR-WPANs) Amendment 1: MAC sublayer", April
3831 2012.
3833 [IEEE8021AS]
3834 IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)",
3835 IEEE 802.1AS-2011, 2011,
3836 .
3839 [IEEE8021CM]
3840 Farkas, J., "Time-Sensitive Networking for Fronthaul",
3841 Unapproved PAR, PAR for a New IEEE Standard;
3842 IEEE P802.1CM, April 2015,
3843 .
3846 [IEEE8021TSN]
3847 IEEE 802.1, "The charter of the TG is to provide the
3848 specifications that will allow time-synchronized low
3849 latency streaming services through 802 networks.", 2016,
3850 .
3852 [IETFDetNet]
3853 IETF, "Charter for IETF DetNet Working Group", 2015,
3854 .
3856 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation",
3857 .
3859 [ISA100.11a]
3860 ISA/ANSI, "Wireless Systems for Industrial Automation:
3861 Process Control and Related Applications - ISA100.11a-2011
3862 - IEC 62734", 2011, .
3865 [ISO7240-16]
3866 ISO, "ISO 7240-16:2007 Fire detection and alarm systems --
3867 Part 16: Sound system control and indicating equipment",
3868 2007, .
3871 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006.
3873 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0",
3874 1994.
3876 [LTE-Latency]
3877 Johnston, S., "LTE Latency: How does it compare to other
3878 technologies", March 2014,
3879 .
3882 [MEF22.1.1]
3883 MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells",
3884 MEF 22.1.1, July 2014,
3885 .
3888 [MEF8] MEF, "Implementation Agreement for the Emulation of PDH
3889 Circuits over Metro Ethernet Networks", MEF 8, October
3890 2004,
3891 .
3894 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and
3895 wireless system", ICT-317669-METIS/D1.1, April 2013, .
3899 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL
3900 SPECIFICATION V1.1b", December 2006.
3902 [MODBUS] Modbus Organization, Inc., "MODBUS Application Protocol
3903 Specification", Apr 2012.
3905 [net5G] Ericsson, "5G Radio Access, Challenges for 2020 and
3906 Beyond", Ericsson white paper wp-5g, June 2013,
3907 .
3909 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0,
3910 February 2015, .
3913 [NGMN-fronth]
3914 NGMN Alliance, "Fronthaul Requirements for C-RAN", March
3915 2015, .
3918 [OPCXML] OPC Foundation, "OPC XML-Data Access Specification", Dec
3919 2004.
3921 [PCE] IETF, "Path Computation Element",
3922 .
3924 [profibus]
3925 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001.
3927 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
3928 Requirement Levels", BCP 14, RFC 2119,
3929 DOI 10.17487/RFC2119, March 1997,
3930 .
3932 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6
3933 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460,
3934 December 1998, .
3936 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
3937 "Definition of the Differentiated Services Field (DS
3938 Field) in the IPv4 and IPv6 Headers", RFC 2474,
3939 DOI 10.17487/RFC2474, December 1998,
3940 .
3942 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
3943 Label Switching Architecture", RFC 3031,
3944 DOI 10.17487/RFC3031, January 2001,
3945 .
3947 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
3948 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
3949 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
3950 .
3952 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
3953 Metric for IP Performance Metrics (IPPM)", RFC 3393,
3954 DOI 10.17487/RFC3393, November 2002,
3955 .
3957 [RFC3411] Harrington, D., Presuhn, R., and B. Wijnen, "An
3958 Architecture for Describing Simple Network Management
3959 Protocol (SNMP) Management Frameworks", STD 62, RFC 3411,
3960 DOI 10.17487/RFC3411, December 2002,
3961 .
3963 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between
3964 Information Models and Data Models", RFC 3444,
3965 DOI 10.17487/RFC3444, January 2003,
3966 .
3968 [RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)",
3969 RFC 3972, DOI 10.17487/RFC3972, March 2005,
3970 .
3972 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation
3973 Edge-to-Edge (PWE3) Architecture", RFC 3985,
3974 DOI 10.17487/RFC3985, March 2005,
3975 .
3977 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing
3978 Architecture", RFC 4291, DOI 10.17487/RFC4291, February
3979 2006, .
3981 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure-
3982 Agnostic Time Division Multiplexing (TDM) over Packet
3983 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006,
3984 .
3986 [RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903,
3987 DOI 10.17487/RFC4903, June 2007,
3988 .
3990 [RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6
3991 over Low-Power Wireless Personal Area Networks (6LoWPANs):
3992 Overview, Assumptions, Problem Statement, and Goals",
3993 RFC 4919, DOI 10.17487/RFC4919, August 2007,
3994 .
3996 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and
3997 P. Pate, "Structure-Aware Time Division Multiplexed (TDM)
3998 Circuit Emulation Service over Packet Switched Network
3999 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007,
4000 .
4002 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi,
4003 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087,
4004 DOI 10.17487/RFC5087, December 2007,
4005 .
4007 [RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6
4008 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282,
4009 DOI 10.17487/RFC6282, September 2011,
4010 .
4012 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J.,
4013 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur,
4014 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for
4015 Low-Power and Lossy Networks", RFC 6550,
4016 DOI 10.17487/RFC6550, March 2012,
4017 .
4019 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N.,
4020 and D. Barthel, "Routing Metrics Used for Path Calculation
4021 in Low-Power and Lossy Networks", RFC 6551,
4022 DOI 10.17487/RFC6551, March 2012,
4023 .
4025 [RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C.
4026 Bormann, "Neighbor Discovery Optimization for IPv6 over
4027 Low-Power Wireless Personal Area Networks (6LoWPANs)",
4028 RFC 6775, DOI 10.17487/RFC6775, November 2012,
4029 .
4031 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using
4032 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the
4033 Internet of Things (IoT): Problem Statement", RFC 7554,
4034 DOI 10.17487/RFC7554, May 2015,
4035 .
4037 [Spe09] Sperotto, A., Sadre, R., Vliet, F., and A. Pras, "A First
4038 Look into SCADA Network Traffic", IP Operations and
4039 Management, p. 518-521. , June 2009.
4041 [SRP_LATENCY]
4042 Gunther, C., "Specifying SRP Latency", 2014,
4043 .
4046 [STUDIO_IP]
4047 Mace, G., "IP Networked Studio Infrastructure for
4048 Synchronized & Real-Time Multimedia Transmissions", 2007,
4049 .
4052 [SyncE] ITU-T, "G.8261 : Timing and synchronization aspects in
4053 packet networks", Recommendation G.8261, August 2013,
4054 .
4056 [TEAS] IETF, "Traffic Engineering Architecture and Signaling",
4057 .
4059 [TR38801] 3GPP, "Technical Specification Group Radio Access
4060 Network; Study on new radio access technology: Radio
4061 access architecture and interfaces (Release 14)", 3GPP
4062 TR 38.801, 2017,
4063 .
4066 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements
4067 for Evolved Universal Terrestrial Radio Access Network
4068 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013.
4070 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception
4071 (FDD)", 3GPP TS 25.104 3.14.0, March 2007.
4073 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access
4074 (E-UTRA); Base Station (BS) radio transmission and
4075 reception", 3GPP TS 36.104 10.11.0, July 2013.
4077 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access
4078 (E-UTRA); Requirements for support of radio resource
4079 management", 3GPP TS 36.133 12.7.0, April 2015.
4081 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access
4082 (E-UTRA); Physical channels and modulation", 3GPP
4083 TS 36.211 10.7.0, March 2013.
4085 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA)
4086 and Evolved Universal Terrestrial Radio Access Network
4087 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300
4088 10.11.0, September 2013.
4090 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive
4091 Networks Task Group", 2013,
4092 .
4094 [UHD-video]
4095 Holub, P., "Ultra-High Definition Videos and Their
4096 Applications over the Network", The 7th International
4097 Symposium on VICTORIES Project PetrHolub_presentation,
4098 October 2014, .
4101 [WirelessHART]
4102 www.hartcomm.org, "Industrial Communication Networks -
4103 Wireless Communication Network and Communication Profiles
4104 - WirelessHART - IEC 62591", 2010.
4106 Author's Address
4108 Ethan Grossman (editor)
4109 Dolby Laboratories, Inc.
4110 1275 Market Street
4111 San Francisco, CA 94103
4112 USA
4114 Phone: +1 415 645 4726
4115 Email: ethan.grossman@dolby.com
4116 URI: http://www.dolby.com