Internet Engineering Task Force                         E. Grossman, Ed.
Internet-Draft                                                     DOLBY
Intended status: Informational                                C. Gunther
Expires: January 5, 2017                                          HARMAN
                                                              P. Thubert
                                                           P. Wetterwald
                                                                   CISCO
                                                              J. Raymond
                                                            HYDRO-QUEBEC
                                                             J. Korhonen
                                                                BROADCOM
                                                               Y. Kaneko
                                                                 Toshiba
                                                                  S. Das
                                          Applied Communication Sciences
                                                                  Y. Zha
                                                                  HUAWEI
                                                                B. Varga
                                                               J. Farkas
                                                                Ericsson
                                                                F. Goetz
                                                              J. Schmitt
                                                                 Siemens
                                                            July 4, 2016

                   Deterministic Networking Use Cases
                     draft-ietf-detnet-use-cases-10
Abstract

   This draft documents requirements in several diverse industries to
   establish multi-hop paths for characterized flows with deterministic
   properties.  In this context, deterministic implies that streams can
   be established, from either a Layer 2 or Layer 3 (IP) interface,
   which provide guaranteed bandwidth and latency and which can coexist
   on an IP network with best-effort traffic.
   Additional requirements include optional redundant paths, very high
   reliability paths, time synchronization, and clock distribution.
   Industries considered include wireless for industrial applications,
   professional audio, electrical utilities, building automation
   systems, radio/mobile access networks, automotive, and gaming.

   For each use case, this document identifies the application,
   describes representative solutions used today, and notes what new
   uses an IETF DetNet solution may enable.
Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 5, 2017.
Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.
Table of Contents

   1.  Introduction
   2.  Pro Audio and Video
     2.1.  Use Case Description
       2.1.1.  Uninterrupted Stream Playback
       2.1.2.  Synchronized Stream Playback
       2.1.3.  Sound Reinforcement
       2.1.4.  Deterministic Time to Establish Streaming
       2.1.5.  Secure Transmission
         2.1.5.1.  Safety
     2.2.  Pro Audio Today
     2.3.  Pro Audio Future
       2.3.1.  Layer 3 Interconnecting Layer 2 Islands
       2.3.2.  High Reliability Stream Paths
       2.3.3.  Integration of Reserved Streams into IT Networks
       2.3.4.  Use of Unused Reservations by Best-Effort Traffic
       2.3.5.  Traffic Segregation
         2.3.5.1.  Packet Forwarding Rules, VLANs and Subnets
         2.3.5.2.  Multicast Addressing (IPv4 and IPv6)
       2.3.6.  Latency Optimization by a Central Controller
       2.3.7.  Reduced Device Cost Due To Reduced Buffer Memory
     2.4.  Pro Audio Asks
   3.  Electrical Utilities
     3.1.  Use Case Description
       3.1.1.  Transmission Use Cases
         3.1.1.1.  Protection
         3.1.1.2.  Intra-Substation Process Bus Communications
         3.1.1.3.  Wide Area Monitoring and Control Systems
         3.1.1.4.  IEC 61850 WAN engineering guidelines requirement
                   classification
       3.1.2.  Generation Use Case
       3.1.3.  Distribution use case
         3.1.3.1.  Fault Location Isolation and Service Restoration
                   (FLISR)
     3.2.  Electrical Utilities Today
       3.2.1.  Security Current Practices and Limitations
     3.3.  Electrical Utilities Future
       3.3.1.  Migration to Packet-Switched Network
       3.3.2.  Telecommunications Trends
         3.3.2.1.  General Telecommunications Requirements
         3.3.2.2.  Specific Network topologies of Smart Grid
                   Applications
         3.3.2.3.  Precision Time Protocol
       3.3.3.  Security Trends in Utility Networks
     3.4.  Electrical Utilities Asks
   4.  Building Automation Systems
     4.1.  Use Case Description
     4.2.  Building Automation Systems Today
       4.2.1.  BAS Architecture
       4.2.2.  BAS Deployment Model
       4.2.3.  Use Cases for Field Networks
         4.2.3.1.  Environmental Monitoring
         4.2.3.2.  Fire Detection
         4.2.3.3.  Feedback Control
       4.2.4.  Security Considerations
     4.3.  BAS Future
     4.4.  BAS Asks
   5.  Wireless for Industrial
     5.1.  Use Case Description
       5.1.1.  Network Convergence using 6TiSCH
       5.1.2.  Common Protocol Development for 6TiSCH
     5.2.  Wireless Industrial Today
     5.3.  Wireless Industrial Future
       5.3.1.  Unified Wireless Network and Management
         5.3.1.1.  PCE and 6TiSCH ARQ Retries
       5.3.2.  Schedule Management by a PCE
         5.3.2.1.  PCE Commands and 6TiSCH CoAP Requests
         5.3.2.2.  6TiSCH IP Interface
       5.3.3.  6TiSCH Security Considerations
     5.4.  Wireless Industrial Asks
   6.  Cellular Radio
     6.1.  Use Case Description
       6.1.1.  Network Architecture
       6.1.2.  Delay Constraints
       6.1.3.  Time Synchronization Constraints
       6.1.4.  Transport Loss Constraints
       6.1.5.  Security Considerations
     6.2.  Cellular Radio Networks Today
       6.2.1.  Fronthaul
       6.2.2.  Midhaul and Backhaul
     6.3.  Cellular Radio Networks Future
     6.4.  Cellular Radio Networks Asks
   7.  Industrial M2M
     7.1.  Use Case Description
     7.2.  Industrial M2M Communication Today
       7.2.1.  Transport Parameters
       7.2.2.  Stream Creation and Destruction
     7.3.  Industrial M2M Future
     7.4.  Industrial M2M Asks
   8.  Use Case Common Elements
   9.  Use Cases Explicitly Out of Scope for DetNet
     9.1.  DetNet Scope Limitations
     9.2.  Internet-based Applications
       9.2.1.  Use Case Description
         9.2.1.1.  Media Content Delivery
         9.2.1.2.  Online Gaming
         9.2.1.3.  Virtual Reality
       9.2.2.  Internet-Based Applications Today
       9.2.3.  Internet-Based Applications Future
       9.2.4.  Internet-Based Applications Asks
     9.3.  Pro Audio and Video - Digital Rights Management (DRM)
     9.4.  Pro Audio and Video - Link Aggregation
   10. Acknowledgments
     10.1.  Pro Audio
     10.2.  Utility Telecom
     10.3.  Building Automation Systems
     10.4.  Wireless for Industrial
     10.5.  Cellular Radio
     10.6.  Industrial M2M
     10.7.  Internet Applications and CoMP
   11. Informative References
   Authors' Addresses
1.  Introduction

   This draft presents use cases from diverse industries which have in
   common a need for deterministic streams, but which also differ
   notably in their network topologies and specific desired behavior.
   Together, they provide broad industry context for DetNet and a
   yardstick against which proposed DetNet designs can be measured (to
   what extent does a proposed design satisfy these various use
   cases?).

   For DetNet, use cases explicitly do not define requirements; the
   DetNet WG will consider the use cases, decide which elements are in
   scope for DetNet, and incorporate the results into future drafts.
   Similarly, the DetNet use case draft explicitly does not suggest any
   specific design, architecture or protocols, which will be topics of
   future drafts.
   We present for each use case the answers to the following questions:

   o  What is the use case?

   o  How is it addressed today?

   o  How would you like it to be addressed in the future?

   o  What do you want the IETF to deliver?

   The level of detail in each use case should be sufficient to express
   the relevant elements of the use case, but not more.

   At the end we consider the use cases collectively, and examine the
   most significant goals they have in common.
2.  Pro Audio and Video

2.1.  Use Case Description

   The professional audio and video industry ("ProAV") includes:

   o  Music and film content creation

   o  Broadcast

   o  Cinema

   o  Live sound

   o  Public address, media and emergency systems at large venues
      (airports, stadiums, churches, theme parks).
   These industries have already transitioned audio and video signals
   from analog to digital.  However, the digital interconnect systems
   remain primarily point-to-point, with a single signal (or a small
   number of signals) per link, interconnected with purpose-built
   hardware.

   These industries are now transitioning to packet-based
   infrastructure to reduce cost, increase routing flexibility, and
   integrate with existing IT infrastructure.

   Today ProAV applications have no way to establish deterministic
   streams from a standards-based Layer 3 (IP) interface, which is a
   fundamental limitation to the use cases described here.  Today
   deterministic streams can be created within standards-based Layer 2
   LANs (e.g. using IEEE 802.1 AVB); however, these are not routable
   via IP and thus are not effective for distribution over wider areas
   (for example broadcast events that span wide geographical areas).

   It would be highly desirable if such streams could be routed over
   the open Internet; however, solutions with more limited scope (e.g.
   enterprise networks) would still provide a substantial improvement.

   The following sections describe specific ProAV use cases.
2.1.1.  Uninterrupted Stream Playback

   Transmitting audio and video streams for live playback is unlike
   common file transfer because uninterrupted stream playback in the
   presence of network errors cannot be achieved by re-trying the
   transmission; by the time the missing or corrupt packet has been
   identified it is too late to execute a re-try operation.  Buffering
   can be used to provide enough delay to allow time for one or more
   retries; however, this is not an effective solution in applications
   where large delays (latencies) are not acceptable (as discussed
   below).

   Streams with guaranteed bandwidth can eliminate congestion on the
   network as a cause of transmission errors that would lead to
   playback interruption.  Use of redundant paths can further mitigate
   transmission errors to provide greater stream reliability.
2.1.2.  Synchronized Stream Playback

   Latency in this context is the time between when a signal is
   initially sent over a stream and when it is received.  A common
   example in ProAV is time-synchronizing audio and video when they
   take separate paths through the playback system.  In this case the
   latency of both the audio and video streams must be bounded and
   consistent if the sound is to remain matched to the movement in the
   video.  A common tolerance for audio/video sync is one NTSC video
   frame (about 33 ms), and to maintain the audience perception of
   correct lip sync the latency needs to be consistent within some
   reasonable tolerance, for example 10%.

   A common architecture for synchronizing multiple streams that have
   different paths through the network (and thus potentially different
   latencies) is to enable measurement of the latency of each path, and
   have the data sinks (for example speakers) delay (buffer) all
   packets on all but the slowest path.  Each packet of each stream is
   assigned a presentation time which is based on the longest required
   delay.  This implies that all sinks must maintain a common time
   reference of sufficient accuracy, which can be achieved by any of
   various techniques.

   This type of architecture is commonly implemented using a central
   controller that determines path delays and arbitrates buffering
   delays.
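   The buffering scheme above can be sketched in a few lines.  This is
   an illustrative fragment only (the per-sink latency figures and the
   function name are assumptions, not part of any specification):

```python
def buffer_delays_ms(path_latency_ms):
    """Given the measured one-way latency to each sink (in ms),
    return the extra delay each sink must buffer so that all sinks
    present a packet simultaneously, i.e. at the latency of the
    slowest path."""
    slowest = max(path_latency_ms.values())
    return {sink: slowest - lat for sink, lat in path_latency_ms.items()}

# Example: three speakers on paths of differing latency.  The rear
# speaker is on the slowest path and adds no delay; the others
# buffer to match it.
delays = buffer_delays_ms({"left": 2.0, "right": 2.5, "rear": 7.0})
```

   In practice the central controller would distribute the computed
   delays (or the common presentation time they imply) to the sinks.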
2.1.3.  Sound Reinforcement

   Consider the latency (delay) from when a person speaks into a
   microphone to when their voice emerges from the speaker.  If this
   delay is longer than about 10-15 milliseconds it is noticeable and
   can make a sound reinforcement system unusable (see slide 6 of
   [SRP_LATENCY]).  (If you have ever tried to speak in the presence of
   a delayed echo of your voice, you will recognize this effect.)

   Note that the 15 ms latency bound includes all parts of the signal
   path, not just the network, so the network latency must be
   significantly less than 15 ms.

   In some cases local performers must perform in synchrony with a
   remote broadcast.  In such cases the latencies of the broadcast
   stream and the local performer must be adjusted to match each other,
   with a worst case of one video frame (33 ms for NTSC video).

   In cases where audio phase is a consideration, for example beam-
   forming using multiple speakers, latency requirements can be in the
   10 microsecond range (1 audio sample at 96 kHz).
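   The numbers above follow directly from the sample rate and frame
   rate; a quick check (illustrative only):

```python
# One audio sample period at 96 kHz, in microseconds (~10.4 us):
sample_period_us = 1.0 / 96000 * 1e6

# One NTSC video frame at ~29.97 frames per second (30000/1001),
# in milliseconds (~33.4 ms):
ntsc_frame_ms = 1000.0 / (30000 / 1001)
```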
2.1.4.  Deterministic Time to Establish Streaming

   Note: It is still under WG discussion whether this topic (stream
   startup time) is within scope of DetNet.

   Some audio systems installed in public environments (airports,
   hospitals) have unique requirements with regard to health, safety
   and fire concerns.  One such requirement is a maximum of 3 seconds
   for a system to respond to an emergency detection and begin sending
   appropriate warning signals and alarms without human intervention.
   For this requirement to be met, the system must support a bounded
   and acceptable time from a notification signal to specific stream
   establishment.  For further details see [ISO7240-16].

   Similar requirements apply when the system is restarted after a
   power cycle, cable re-connection, or system reconfiguration.

   In many cases such re-establishment of streaming state must be
   achieved by the peer devices themselves, i.e. without a central
   controller (since such a controller may only be present during
   initial network configuration).

   Video systems introduce related requirements, for example when
   transitioning from one camera feed (video stream) to another (see
   [STUDIO_IP] and [ESPN_DC2]).
2.1.5.  Secure Transmission

2.1.5.1.  Safety

   Professional audio systems can include amplifiers that are capable
   of generating hundreds or thousands of watts of audio power which,
   if used incorrectly, can cause hearing damage to those in the
   vicinity.  Apart from the usual care required by the system
   operators to prevent such incidents, the network traffic that
   controls these devices must be secured (as with any sensitive
   application traffic).
2.2.  Pro Audio Today

   Some proprietary systems have been created which enable
   deterministic streams at Layer 3; however, they are "engineered
   networks" which require careful configuration to operate, often
   require that the system be over-provisioned, and implicitly assume
   that all devices on the network voluntarily play by the rules of
   that network.  To enable these industries to successfully transition
   to an interoperable multi-vendor packet-based infrastructure
   requires effective open standards, and we believe that establishing
   relevant IETF standards is a crucial factor.
2.3.  Pro Audio Future

2.3.1.  Layer 3 Interconnecting Layer 2 Islands

   It would be valuable to enable IP to connect multiple Layer 2 LANs.

   As an example, ESPN recently constructed a state-of-the-art 194,000
   sq ft, $125 million broadcast studio called DC2.  The DC2 network is
   capable of handling 46 Tbps of throughput with 60,000 simultaneous
   signals.  Inside the facility are 1,100 miles of fiber feeding four
   audio control rooms (see [ESPN_DC2]).

   In designing DC2 they replaced as much point-to-point technology as
   they could with packet-based technology.  They constructed seven
   individual studios using Layer 2 LANs (using IEEE 802.1 AVB) that
   were entirely effective at routing audio within the LANs.  However,
   to interconnect these Layer 2 LAN islands they ended up using
   dedicated paths in a custom SDN (Software Defined Networking) router
   because no standards-based routing solution was available.
2.3.2.  High Reliability Stream Paths

   On-air and other live media streams are often backed up with
   redundant links that seamlessly act to deliver the content when the
   primary link fails for any reason.  In point-to-point systems this
   is provided by an additional point-to-point link; the analogous
   requirement in a packet-based system is to provide an alternate path
   through the network such that no individual link can bring down the
   system.
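   In packet networks, this seamless behavior is commonly realized as
   per-packet replication and elimination (the approach standardized,
   for example, in IEEE 802.1CB, cited here only as an illustration):
   the source sends each sequence-numbered packet on both paths, and
   the sink discards duplicates, so the loss of either path is
   invisible to the application.  A minimal sink-side sketch, with
   hypothetical data:

```python
def eliminate_duplicates(merged_stream):
    """Sink-side duplicate elimination: packets arrive interleaved
    from two redundant paths; the first copy of each sequence number
    is delivered and later copies are discarded.  If one path drops
    a packet, the copy from the other path still gets through."""
    seen, delivered = set(), []
    for seq, payload in merged_stream:
        if seq not in seen:
            seen.add(seq)
            delivered.append((seq, payload))
    return delivered

# Path A lost packet 2; the copy from path B is still delivered,
# and the output sequence has no gap.
merged = [(1, "a"), (1, "a"), (2, "b"), (3, "c"), (3, "c")]
out = eliminate_duplicates(merged)
```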
2.3.3.  Integration of Reserved Streams into IT Networks

   A commonly cited goal of moving to a packet-based media
   infrastructure is that costs can be reduced by using off-the-shelf,
   commodity network hardware.  In addition, economy of scale can be
   realized by combining media infrastructure with IT infrastructure.
   In keeping with these goals, stream reservation technology should be
   compatible with existing protocols, and not compromise use of the
   network for best-effort (non-time-sensitive) traffic.
2.3.4.  Use of Unused Reservations by Best-Effort Traffic

   In cases where stream bandwidth is reserved but not currently used
   (or is under-utilized) that bandwidth must be available to best-
   effort (i.e. non-time-sensitive) traffic.  For example a single
   stream may be nailed up (reserved) for specific media content that
   needs to be presented at different times of the day, ensuring timely
   delivery of that content, yet in between those times the full
   bandwidth of the network can be utilized for best-effort tasks such
   as file transfers.

   This also addresses a concern of IT network administrators who are
   considering adding reserved-bandwidth traffic to their networks,
   namely that "users will reserve large quantities of bandwidth and
   then never un-reserve it even though they are not using it, and soon
   the network will have no bandwidth left".
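   One way to realize this behavior is a work-conserving priority
   scheduler: reserved traffic is served first up to its reservation,
   and any capacity it leaves idle falls through to best-effort in the
   same cycle.  The sketch below is illustrative only (the slot-based
   queue model and names are assumptions, not a DetNet mechanism):

```python
from collections import deque

def schedule(reserved_q, best_effort_q, slots):
    """Serve up to `slots` packets per cycle.  Reserved traffic is
    served first; any slot it leaves idle is immediately usable by
    best-effort traffic (work-conserving), so an unused reservation
    does not waste bandwidth."""
    sent = []
    for _ in range(slots):
        if reserved_q:
            sent.append(reserved_q.popleft())
        elif best_effort_q:
            sent.append(best_effort_q.popleft())
    return sent

# With no reserved packets queued, best-effort gets the full link.
out = schedule(deque(), deque(["be1", "be2", "be3"]), slots=3)
```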
2.3.5.  Traffic Segregation

   Note: It is still under WG discussion whether this topic will be
   addressed by DetNet.

   Sink devices may be low-cost devices with limited processing power.
   In order to not overwhelm the CPUs in these devices it is important
   to limit the amount of traffic that these devices must process.

   As an example, consider the use of individual seat speakers in a
   cinema.  These speakers are typically required to be cost-reduced
   since the quantities in a single theater can reach hundreds of
   seats.  Discovery protocols alone in a one thousand seat theater can
   generate enough broadcast traffic to overwhelm a low-powered CPU.
   Thus an installation like this will benefit greatly from some type
   of traffic segregation that can define groups of seats to reduce
   traffic within each group.  All seats in the theater must still be
   able to communicate with a central controller.

   There are many techniques that can be used to support this
   requirement, including (but not limited to) the following examples.
2.3.5.1.  Packet Forwarding Rules, VLANs and Subnets

   Packet forwarding rules can be used to eliminate some extraneous
   streaming traffic from reaching potentially low-powered sink
   devices; however, there may be other types of broadcast traffic that
   should be eliminated by other means, for example VLANs or IP
   subnets.
2.3.5.2.  Multicast Addressing (IPv4 and IPv6)

   Multicast addressing is commonly used to keep bandwidth utilization
   of shared links to a minimum.

   Because of the MAC address forwarding nature of Layer 2 bridges it
   is important that a multicast MAC address is only associated with
   one stream.  This will prevent reservations from forwarding packets
   from one stream down a path that has no interested sinks simply
   because there is another stream on that same path that shares the
   same multicast MAC address.

   Since each multicast MAC address can represent 32 different IPv4
   multicast addresses, there must be a process put in place to make
   sure this does not occur.  Requiring the use of IPv6 addresses can
   achieve this; however, due to their continued prevalence, solutions
   that are effective for IPv4 installations are also required.
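   The 32:1 ambiguity comes from the standard IPv4 multicast-to-MAC
   mapping (RFC 1112), which copies only the low 23 bits of the group
   address into the MAC address.  The sketch below (illustrative only)
   shows two distinct groups colliding on one multicast MAC:

```python
import ipaddress

def ipv4_mcast_to_mac(group):
    """Map an IPv4 multicast group to its Ethernet MAC address:
    01:00:5e followed by the low 23 bits of the group address (per
    RFC 1112).  The 5 address bits that are discarded mean 2**5 = 32
    distinct groups share each MAC address."""
    addr = int(ipaddress.IPv4Address(group))
    low23 = addr & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0x7F, (low23 >> 8) & 0xFF, low23 & 0xFF)

# 224.1.1.1 and 225.1.1.1 differ only in bits that the mapping
# discards, so they map to the same multicast MAC address:
mac_a = ipv4_mcast_to_mac("224.1.1.1")
mac_b = ipv4_mcast_to_mac("225.1.1.1")
```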
2.3.6.  Latency Optimization by a Central Controller

   A central network controller might also perform optimizations based
   on the individual path delays; for example, sinks that are closer to
   the source can inform the controller that they can accept greater
   latency since they will be buffering packets to match presentation
   times of farther away sinks.  The controller might then move a
   stream reservation on a short path to a longer path in order to free
   up bandwidth for other critical streams on that short path.  See
   slides 3-5 of [SRP_LATENCY].

   Additional optimization can be achieved in cases where sinks have
   differing latency requirements, for example in a live outdoor
   concert the speaker sinks have stricter latency requirements than
   the recording hardware sinks.  See slide 7 of [SRP_LATENCY].
2.3.7.  Reduced Device Cost Due To Reduced Buffer Memory

   Device cost can be reduced in a system with guaranteed reservations
   with a small bounded latency due to the reduced requirements for
   buffering (i.e. memory) on sink devices.  For example, a theme park
   might broadcast a live event across the globe via a Layer 3
   protocol; in such cases the size of the buffers required is
   proportional to the latency bounds and jitter caused by delivery,
   which depends on the worst case segment of the end-to-end network
   path.  For example, on today's open Internet the latency is
   typically unacceptable for audio and video streaming without many
   seconds of buffering.  In such scenarios a single gateway device at
   the local network that receives the feed from the remote site would
   provide the expensive buffering required to mask the latency and
   jitter issues associated with long distance delivery.  Sink devices
   in the local location would have no additional buffering
   requirements, and thus no additional costs, beyond those required
   for delivery of local content.  The sink device would be receiving
   the identical packets as those sent by the source and would be
   unaware that there were any latency or jitter issues along the
   path.
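   The proportionality between buffer size and the delivery jitter
   bound can be made concrete.  A rough sizing sketch, under the
   illustrative assumptions of a constant-rate stream and a buffer
   dimensioned only for worst-case jitter:

```python
import math

def sink_buffer_bytes(stream_rate_bps, jitter_ms):
    """Rough sink buffer sizing for a constant-rate stream: the
    buffer must absorb the worst-case delivery jitter, so its size
    grows linearly with the jitter bound."""
    return math.ceil(stream_rate_bps / 8 * jitter_ms / 1000)

# A 5 Mbit/s stream with a 2 ms jitter bound needs ~1.25 KB of
# buffer per sink; the same stream over a path with 2 s of jitter
# needs ~1.25 MB -- a factor of 1000 more memory in every sink,
# which is why concentrating the buffering in one gateway is cheaper.
small = sink_buffer_bytes(5_000_000, 2)
large = sink_buffer_bytes(5_000_000, 2000)
```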
2.4.  Pro Audio Asks

   o  Layer 3 routing on top of AVB (and/or other high QoS networks)

   o  Content delivery with bounded, lowest possible latency

   o  IntServ and DiffServ integration with AVB (where practical)

   o  Single network for A/V and IT traffic

   o  Standards-based, interoperable, multi-vendor

   o  IT department friendly

   o  Enterprise-wide networks (e.g. size of San Francisco but not the
      whole Internet (yet...))
3.  Electrical Utilities

3.1.  Use Case Description

   Many systems that an electrical utility deploys today rely on high
   availability and deterministic behavior of the underlying networks.
   Here we present use cases in Transmission, Generation and
   Distribution, including key timing and reliability metrics.  We also
   discuss security issues and industry trends which affect the
   architecture of next generation utility networks.
550 3.1.1. Transmission Use Cases
552 3.1.1.1. Protection
554 Protection means not only the protection of human operators but also
555 the protection of the electrical equipment and the preservation of
556 the stability and frequency of the grid. If a fault occurs in the
557 transmission or distribution of electricity, then human operators
558 can be harmed and severe damage can occur to electrical equipment
559 and the grid itself, leading to blackouts.
561 Communication links in conjunction with protection relays are used to
562 selectively isolate faults on high voltage lines, transformers,
563 reactors and other important electrical equipment. The role of the
564 teleprotection system is to selectively disconnect a faulty part by
565 transferring command signals within the shortest possible time.
567 3.1.1.1.1. Key Criteria
569 The key criteria for measuring teleprotection performance are command
570 transmission time, dependability and security. These criteria are
571 defined by the IEC standard 60834 as follows:
573 o Transmission time (Speed): The time between the moment when the
574 state changes at the transmitter input and the moment of the
575 corresponding change at the receiver output, including propagation
576 delay. Overall operating time for a teleprotection system
577 includes the time for initiating the command at the transmitting
578 end, the propagation delay over the network (including equipment)
579 and the selection and decision time at the receiving end,
580 including any additional delay due to a noisy environment.
582 o Dependability: The ability to issue and receive valid commands in
583 the presence of interference and/or noise, by minimizing the
584 probability of missing command (PMC). Dependability targets are
585 typically set for a specific bit error rate (BER) level.
587 o Security: The ability to prevent false tripping due to a noisy
588 environment, by minimizing the probability of unwanted commands
589 (PUC). Security targets are also set for a specific bit error
590 rate (BER) level.
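As a rough illustration of why dependability (PMC) and security (PUC) targets are tied to a specific BER level, the probability that a command frame is corrupted can be estimated as follows (a sketch assuming independent bit errors; the 100-bit frame length is a hypothetical value):

```python
def p_frame_corrupted(ber: float, bits: int) -> float:
    # Probability that at least one bit of an n-bit frame is in
    # error, assuming independent bit errors at the given BER.
    # A lower BER directly lowers the probability of a missed
    # command (PMC) or an unwanted command (PUC).
    return 1.0 - (1.0 - ber) ** bits

# At BER 1e-6, a hypothetical 100-bit command frame is corrupted
# in roughly 1 in 10,000 transmissions.
p = p_frame_corrupted(1e-6, 100)
```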
592 Additional elements of the teleprotection system that impact its
593 performance include:
595 o Network bandwidth
597 o Failure recovery capacity (aka resiliency)
599 3.1.1.1.2. Fault Detection and Clearance Timing
601 Most power line equipment can tolerate short circuits or faults for
602 up to approximately five power cycles before sustaining irreversible
603 damage or affecting other segments in the network. This translates
604 to a total fault clearance time of 100 ms. As a safety precaution,
605 however, the actual operation time of protection systems is limited
606 to 70-80 percent of this period, including fault recognition time,
607 command transmission time and line breaker switching time.
609 Some system components, such as large electromechanical switches,
610 require a particularly long time to operate and take up the majority
611 the total clearance time, leaving only a 10ms window for the
612 telecommunications part of the protection scheme, independent of the
613 distance to travel. Given the sensitivity of the issue, new networks
614 impose requirements that are even more stringent: IEC standard 61850
615 limits the transfer time for protection messages to 1/4 to 1/2 cycle
616 or 4-8 ms (for 60 Hz lines) for the most critical messages.
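The timing budget above follows directly from the power cycle duration; as a worked example (the five-cycle tolerance, 70 percent figure, and IEC 61850 limits are from the text, the arithmetic itself is illustrative):

```python
# One power cycle on a 50 Hz line lasts 20 ms, so the five cycles
# that equipment can tolerate give a 100 ms clearance budget.
cycle_ms = 1000 / 50                            # 20.0 ms per cycle
clearance_budget_ms = 5 * cycle_ms              # 100.0 ms total
operation_limit_ms = 0.7 * clearance_budget_ms  # 70 ms at the 70% limit

# IEC 61850's limit of 1/4 to 1/2 cycle for the most critical
# messages on a 60 Hz line works out to roughly 4-8 ms.
cycle_60_ms = 1000 / 60
transfer_limit_ms = (cycle_60_ms / 4, cycle_60_ms / 2)  # ~(4.2, 8.3) ms
```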
618 3.1.1.1.3. Symmetric Channel Delay
620 Note: It is currently under WG discussion whether symmetric path
621 delays are to be guaranteed by DetNet.
623 Teleprotection channels which are differential must be synchronous,
624 which means that any delays on the transmit and receive paths must
625 match each other. Teleprotection systems ideally support zero
626 asymmetric delay; typical legacy relays can tolerate delay
627 discrepancies of up to 750 us.
629 Some tools available for lowering delay variation below this
630 threshold are:
632 o For legacy systems using Time Division Multiplexing (TDM), jitter
633 buffers at the multiplexers on each end of the line can be used to
634 offset delay variation by queuing sent and received packets. The
635 length of the queues must balance the need to regulate the rate of
636 transmission with the need to limit overall delay, as larger
637 buffers result in increased latency.
639 o For jitter-prone IP packet networks, traffic management tools can
640 ensure that the teleprotection signals receive the highest
641 transmission priority to minimize jitter.
643 o Standard packet-based synchronization technologies, such as
644 1588-2008 Precision Time Protocol (PTP) and Synchronous Ethernet
645 (Sync-E), can help keep networks stable by maintaining a highly
646 accurate clock source on the various network devices.
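The jitter buffer tradeoff described in the first bullet can be sketched with a toy model (the per-packet delay values are hypothetical):

```python
def playout_delay_ms(arrival_delays_ms):
    # Minimum fixed playout delay that masks all observed delay
    # variation: the receiver releases each packet at its send time
    # plus this delay.  A deeper buffer absorbs more jitter but
    # adds latency, which is the balance described above.
    return max(arrival_delays_ms)

delays = [2.1, 2.8, 2.4, 3.6, 2.2]          # one-way delays per packet (ms)
buffer_delay = playout_delay_ms(delays)     # 3.6 ms
added_latency = buffer_delay - min(delays)  # ~1.5 ms of buffering latency
```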
648 3.1.1.1.4. Teleprotection Network Requirements (IEC 61850)
650 The following table captures the main network metrics as based on the
651 IEC 61850 standard.
653 +-----------------------------+-------------------------------------+
654 | Teleprotection Requirement | Attribute |
655 +-----------------------------+-------------------------------------+
656 | One way maximum delay | 4-10 ms |
657 |  Asymmetric delay required  |                 Yes                 |
658 | Maximum jitter | less than 250 us (750 us for legacy |
659 | | IED) |
660 | Topology | Point to point, point to Multi- |
661 | | point |
662 | Availability | 99.9999 |
663 | precise timing required | Yes |
664 | Recovery time on node | less than 50ms - hitless |
665 | failure | |
666 | performance management | Yes, Mandatory |
667 | Redundancy | Yes |
668 | Packet loss | 0.1% to 1% |
669 +-----------------------------+-------------------------------------+
671 Table 1: Teleprotection network requirements
673 3.1.1.1.5. Inter-Trip Protection Scheme
675 "Inter-tripping" is the signal-controlled tripping of a circuit
676 breaker to complete the isolation of a circuit or piece of apparatus
677 in concert with the tripping of other circuit breakers.
679 +--------------------------------+----------------------------------+
680 | Inter-Trip protection | Attribute |
681 | Requirement | |
682 +--------------------------------+----------------------------------+
683 | One way maximum delay | 5 ms |
684 |   Asymmetric delay required    |                No                |
685 | Maximum jitter | Not critical |
686 | Topology | Point to point, point to Multi- |
687 | | point |
688 | Bandwidth | 64 Kbps |
689 | Availability | 99.9999 |
690 | precise timing required | Yes |
691 | Recovery time on node failure | less than 50ms - hitless |
692 | performance management | Yes, Mandatory |
693 | Redundancy | Yes |
694 | Packet loss | 0.1% |
695 +--------------------------------+----------------------------------+
697 Table 2: Inter-Trip protection network requirements
699 3.1.1.1.6. Current Differential Protection Scheme
701 Current differential protection is commonly used for line protection,
702 and is typical for protecting parallel circuits. At both ends of the
703 line the current is measured by the differential relays, and both
704 relays will trip the circuit breaker if the current going into the
705 line does not equal the current going out of the line. This type of
706 protection scheme assumes some form of communications is present
707 between the relays at both ends of the line, to allow both relays to
708 compare measured current values. Line differential protection
709 schemes assume a very low telecommunications delay between both
710 relays, often as low as 5ms. Moreover, as those systems are often
711 not time-synchronized, they also assume symmetric telecommunications
712 paths with constant delay, which allows comparing current measurement
713 values taken at the exact same time.
715 +----------------------------------+--------------------------------+
716 | Current Differential protection | Attribute |
717 | Requirement | |
718 +----------------------------------+--------------------------------+
719 | One way maximum delay | 5 ms |
720 |    Asymmetric delay Required     |              Yes               |
721 | Maximum jitter | less than 250 us (750us for |
722 | | legacy IED) |
723 | Topology | Point to point, point to |
724 | | Multi-point |
725 | Bandwidth | 64 Kbps |
726 | Availability | 99.9999 |
727 | precise timing required | Yes |
728 | Recovery time on node failure | less than 50ms - hitless |
729 | performance management | Yes, Mandatory |
730 | Redundancy | Yes |
731 | Packet loss | 0.1% |
732 +----------------------------------+--------------------------------+
734 Table 3: Current Differential Protection metrics
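The comparison performed by the differential relays can be illustrated with a minimal sketch (the threshold value is a hypothetical setting, not taken from any standard):

```python
def differential_trip(i_in_amps: float, i_out_amps: float,
                      threshold_amps: float = 50.0) -> bool:
    # Both relays trip the breaker when the current entering the
    # line differs from the current leaving it by more than a set
    # threshold, indicating a fault on the protected segment.
    return abs(i_in_amps - i_out_amps) > threshold_amps

differential_trip(1000.0, 998.0)  # False: normal load current
differential_trip(1000.0, 600.0)  # True: current leaking into a fault
```

Because the relays compare samples that must have been taken at the same instant, any asymmetry between the two telecommunications paths shows up directly as a spurious current difference, which is why the scheme requires symmetric, constant-delay paths.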
736 3.1.1.1.7. Distance Protection Scheme
738 The Distance (Impedance Relay) protection scheme is based on voltage
739 and current measurements. The network metrics are similar (but not
740 identical) to Current Differential protection.
742 +-------------------------------+-----------------------------------+
743 | Distance protection | Attribute |
744 | Requirement | |
745 +-------------------------------+-----------------------------------+
746 | One way maximum delay | 5 ms |
747 |   Asymmetric delay Required   |                No                 |
748 | Maximum jitter | Not critical |
749 | Topology | Point to point, point to Multi- |
750 | | point |
751 | Bandwidth | 64 Kbps |
752 | Availability | 99.9999 |
753 | precise timing required | Yes |
754 | Recovery time on node failure | less than 50ms - hitless |
755 | performance management | Yes, Mandatory |
756 | Redundancy | Yes |
757 | Packet loss | 0.1% |
758 +-------------------------------+-----------------------------------+
760 Table 4: Distance Protection requirements
762 3.1.1.1.8. Inter-Substation Protection Signaling
764 This use case describes the exchange of Sampled Value and/or GOOSE
765 (Generic Object Oriented Substation Events) messages between
766 Intelligent Electronic Devices (IED) in two substations for
767 protection and tripping coordination. The two IEDs operate in a
768 master-slave mode.
770 The Current Transformer or Voltage Transformer (CT/VT) in one
771 substation sends the sampled analog voltage or current value to the
772 Merging Unit (MU) over hard wire. The MU sends the time-synchronized
773 61850-9-2 sampled values to the slave IED. The slave IED forwards
774 the information to the Master IED in the other substation. The
775 master IED makes the determination (for example based on sampled
776 value differentials) to send a trip command to the originating IED.
777 Once the slave IED/Relay receives the GOOSE trip for breaker
778 tripping, it opens the breaker. It then sends a confirmation message
779 back to the master. All data exchanges between IEDs are either
780 through Sampled Value and/or GOOSE messages.
782 +----------------------------------+--------------------------------+
783 | Inter-Substation protection | Attribute |
784 | Requirement | |
785 +----------------------------------+--------------------------------+
786 | One way maximum delay | 5 ms |
787 |    Asymmetric delay Required     |               No               |
788 | Maximum jitter | Not critical |
789 | Topology | Point to point, point to |
790 | | Multi-point |
791 | Bandwidth | 64 Kbps |
792 | Availability | 99.9999 |
793 | precise timing required | Yes |
794 | Recovery time on node failure | less than 50ms - hitless |
795 | performance management | Yes, Mandatory |
796 | Redundancy | Yes |
797 | Packet loss | 1% |
798 +----------------------------------+--------------------------------+
800 Table 5: Inter-Substation Protection requirements
802 3.1.1.2. Intra-Substation Process Bus Communications
804 This use case describes the data flow from the CT/VT to the IEDs in
805 the substation via the MU. The CT/VT in the substation sends the
806 sampled value (analog voltage or current) to the MU over hard wire.
807 The MU sends the time-synchronized 61850-9-2 sampled values to the
808 IEDs in the substation in GOOSE message format. The GPS Master Clock
809 can send 1PPS or IRIG-B format to the MU through a serial port or
810 IEEE 1588 protocol via a network. Process bus communication using
811 61850 simplifies connectivity within the substation, removing the
812 requirement for multiple serial connections and the slow serial bus
813 architectures that are typically used. It also increases
814 flexibility and speed through the use of multicast messaging
815 between multiple devices.
817 +----------------------------------+--------------------------------+
818 | Intra-Substation protection | Attribute |
819 | Requirement | |
820 +----------------------------------+--------------------------------+
821 | One way maximum delay | 5 ms |
822 |    Asymmetric delay Required     |               No               |
823 | Maximum jitter | Not critical |
824 | Topology | Point to point, point to |
825 | | Multi-point |
826 | Bandwidth | 64 Kbps |
827 | Availability | 99.9999 |
828 | precise timing required | Yes |
829 | Recovery time on Node failure | less than 50ms - hitless |
830 | performance management | Yes, Mandatory |
831 | Redundancy | Yes - No |
832 | Packet loss | 0.1% |
833 +----------------------------------+--------------------------------+
835 Table 6: Intra-Substation Protection requirements
837 3.1.1.3. Wide Area Monitoring and Control Systems
839 The application of synchrophasor measurement data from Phasor
840 Measurement Units (PMU) to Wide Area Monitoring and Control Systems
841 promises to provide important new capabilities for improving system
842 stability. Access to PMU data enables more timely situational
843 awareness over larger portions of the grid than what has been
844 possible historically with normal SCADA (Supervisory Control and Data
845 Acquisition) data. Handling the volume and real-time nature of
846 synchrophasor data presents unique challenges for existing
847 application architectures. A Wide Area Management System (WAMS) makes
848 it possible for the condition of the bulk power system to be observed
849 and understood in real-time so that protective, preventative, or
850 corrective action can be taken. Because of the very high sampling
851 rate of measurements and the strict requirement for time
852 synchronization of the samples, WAMS has stringent telecommunications
853 requirements in an IP network that are captured in the following
854 table:
856 +----------------------+--------------------------------------------+
857 | WAMS Requirement | Attribute |
858 +----------------------+--------------------------------------------+
859 | One way maximum | 50 ms |
860 | delay | |
861 |   Asymmetric delay   |                     No                     |
862 |       Required       |                                            |
863 | Maximum jitter | Not critical |
864 | Topology | Point to point, point to Multi-point, |
865 | | Multi-point to Multi-point |
866 | Bandwidth | 100 Kbps |
867 | Availability | 99.9999 |
868 | precise timing | Yes |
869 | required | |
870 | Recovery time on | less than 50ms - hitless |
871 | Node failure | |
872 | performance | Yes, Mandatory |
873 | management | |
874 | Redundancy | Yes |
875 | Packet loss | 1% |
876 +----------------------+--------------------------------------------+
878 Table 7: WAMS Special Communication Requirements
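The core computation a PMU performs on its time-synchronized samples is a phasor estimate; a minimal single-bin DFT sketch follows (the rate of 32 samples per cycle and the waveform are illustrative assumptions):

```python
import cmath
import math

def phasor(samples, samples_per_cycle=32):
    # Single-bin DFT over one nominal cycle of time-synchronized
    # waveform samples, returning the RMS phasor a PMU reports.
    n = samples_per_cycle
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n)
              for k in range(n))
    return math.sqrt(2) / n * acc

# A pure 60 Hz cosine of amplitude 100 yields an RMS magnitude of
# 100 / sqrt(2) at zero phase angle.
samples = [100 * math.cos(2 * math.pi * k / 32) for k in range(32)]
magnitude = abs(phasor(samples))  # ~70.7
```

The strict time-synchronization requirement follows from this arithmetic: phasors measured at different substations are only comparable if their sample timestamps share a common, accurate time reference.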
880 3.1.1.4. IEC 61850 WAN engineering guidelines requirement
881 classification
883 The IEC (International Electrotechnical Commission) has recently
884 published a Technical Report which offers guidelines on how to define
885 and deploy Wide Area Networks for the interconnections of electric
886 substations, generation plants and SCADA operation centers. IEC
887 61850-90-12 provides a classification of WAN communication
888 requirements into four classes. Table 8 summarizes these requirements:
890 +----------------+------------+------------+------------+-----------+
891 | WAN | Class WA | Class WB | Class WC | Class WD |
892 | Requirement | | | | |
893 +----------------+------------+------------+------------+-----------+
894 | Application | EHV (Extra | HV (High | MV (Medium | General |
895 | field | High | Voltage) | Voltage) | purpose |
896 | | Voltage) | | | |
897 | Latency | 5 ms | 10 ms | 100 ms | > 100 ms |
898 | Jitter | 10 us | 100 us | 1 ms | 10 ms |
899 | Latency | 100 us | 1 ms | 10 ms | 100 ms |
900 |   Asymmetry    |            |            |            |           |
901 | Time Accuracy | 1 us | 10 us | 100 us | 10 to 100 |
902 | | | | | ms |
903 | Bit Error rate |  10^-7 to  |  10^-5 to  |    10^-3   |           |
904 |                |    10^-6   |    10^-4   |            |           |
905 | Unavailability |  10^-7 to  |  10^-5 to  |    10^-3   |           |
906 |                |    10^-6   |    10^-4   |            |           |
907 | Recovery delay | Zero | 50 ms | 5 s | 50 s |
908 | Cyber security | extremely | High | Medium | Medium |
909 | | high | | | |
910 +----------------+------------+------------+------------+-----------+
912 Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC
914 3.1.2. Generation Use Case
916 The electrical power generation frequency should be maintained within
917 a very narrow band. Deviations from the acceptable frequency range
918 are detected and the required signals are sent to the power plants
919 for frequency regulation.
921 Automatic generation control (AGC) is a system for adjusting the
922 power output of generators at different power plants, in response to
923 changes in the load.
925 +---------------------------------------------------+---------------+
926 | FCAG (Frequency Control Automatic Generation) | Attribute |
927 | Requirement | |
928 +---------------------------------------------------+---------------+
929 | One way maximum delay | 500 ms |
930 |             Asymmetric delay Required             |       No      |
931 | Maximum jitter | Not critical |
932 | Topology | Point to |
933 | | point |
934 | Bandwidth | 20 Kbps |
935 | Availability | 99.999 |
936 | precise timing required | Yes |
937 | Recovery time on Node failure | N/A |
938 | performance management | Yes, |
939 | | Mandatory |
940 | Redundancy | Yes |
941 | Packet loss | 1% |
942 +---------------------------------------------------+---------------+
944 Table 9: FCAG Communication Requirements
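The AGC adjustment can be sketched as a simple proportional control loop (the gain, nominal frequency, and measured values are hypothetical illustration figures, not utility settings):

```python
def agc_adjustment_mw(measured_hz: float,
                      nominal_hz: float = 50.0,
                      gain_mw_per_hz: float = 3000.0) -> float:
    # Raise total generation when frequency sags below nominal and
    # lower it when frequency rises, in proportion to the deviation
    # from the narrow acceptable band.
    return (nominal_hz - measured_hz) * gain_mw_per_hz

agc_adjustment_mw(49.95)  # ~+150 MW: under-frequency, raise output
agc_adjustment_mw(50.02)  # ~-60 MW: over-frequency, lower output
```

The generous 500 ms delay budget in Table 9 reflects that this control loop acts on aggregate grid frequency rather than on individual fault events.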
946 3.1.3. Distribution Use Case
948 3.1.3.1. Fault Location Isolation and Service Restoration (FLISR)
950 Fault Location, Isolation, and Service Restoration (FLISR) refers to
951 the ability to automatically locate the fault, isolate the fault, and
952 restore service in the distribution network. This will likely be the
953 first widespread application of distributed intelligence in the grid.
955 Static power switch status (open/closed) in the network dictates the
956 power flow to secondary substations. Reconfiguring the network in
957 the event of a fault is typically done manually on site to energize/
958 de-energize alternate paths. Automating the operation of substation
959 switchgear allows the flow of power to be altered automatically under
960 fault conditions.
962 FLISR can be managed centrally from a Distribution Management System
963 (DMS) or executed locally through distributed control via intelligent
964 switches and fault sensors.
966 +----------------------+--------------------------------------------+
967 | FLISR Requirement | Attribute |
968 +----------------------+--------------------------------------------+
969 | One way maximum | 80 ms |
970 | delay | |
971 |   Asymmetric delay   |                     No                     |
972 |       Required       |                                            |
973 | Maximum jitter | 40 ms |
974 | Topology | Point to point, point to Multi-point, |
975 | | Multi-point to Multi-point |
976 | Bandwidth | 64 Kbps |
977 | Availability | 99.9999 |
978 | precise timing | Yes |
979 | required | |
980 | Recovery time on | Depends on customer impact |
981 | Node failure | |
982 | performance | Yes, Mandatory |
983 | management | |
984 | Redundancy | Yes |
985 | Packet loss | 0.1% |
986 +----------------------+--------------------------------------------+
988 Table 10: FLISR Communication Requirements
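For a radial feeder, the distributed FLISR sequence can be sketched as follows (the segment and switch names, and the feeder model, are hypothetical):

```python
def flisr_actions(segments, faulted_segment, tie_switch):
    # Isolate the faulted segment by opening its bounding switches,
    # then close a normally-open tie switch so that healthy
    # segments downstream of the fault are re-fed from an
    # alternate source.
    i = segments.index(faulted_segment)
    actions = ["open switch upstream of " + faulted_segment,
               "open switch downstream of " + faulted_segment]
    if i < len(segments) - 1:
        restored = ", ".join(segments[i + 1:])
        actions.append("close " + tie_switch + " to restore " + restored)
    return actions

flisr_actions(["seg1", "seg2", "seg3"], "seg2", "tie-A")
# isolates seg2, restores seg3 through tie-A
```

Whether these steps are sequenced centrally by the DMS or negotiated among intelligent switches, each exchange must complete within the delay bounds of Table 10.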
990 3.2. Electrical Utilities Today
992 Many utilities still rely on complex environments composed of multiple
993 application-specific proprietary networks, including TDM networks.
995 In this kind of environment there is no mixing of OT and IT
996 applications on the same network, and information is siloed between
997 operational areas.
999 Specific calibration of the full chain is required, which is costly.
1001 This kind of environment prevents utility operations from realizing
1002 the benefits of operational efficiency, visibility, and functional
1003 integration of operational information across grid applications and
1004 data networks.
1006 In addition, there are many security-related issues as discussed in
1007 the following section.
1009 3.2.1. Security Current Practices and Limitations
1011 Grid monitoring and control devices are already targets for cyber
1012 attacks, and legacy telecommunications protocols have many intrinsic
1013 network-related vulnerabilities. For example, DNP3, Modbus,
1014 PROFIBUS/PROFINET, and other protocols are designed around a common
1015 paradigm of request and respond. Each protocol is designed for a
1016 master device such as an HMI (Human Machine Interface) system to send
1017 commands to subordinate slave devices to retrieve data (reading
1018 inputs) or control (writing to outputs). Because many of these
1019 protocols lack authentication, encryption, or other basic security
1020 measures, they are prone to network-based attacks, allowing a
1021 malicious actor or attacker to utilize the request-and-respond system
1022 as a mechanism for command-and-control like functionality. Specific
1023 security concerns common to most industrial control protocols,
1024 including utility telecommunication protocols, include the following:
1026 o Network or transport errors (e.g. malformed packets or excessive
1027 latency) can cause protocol failure.
1029 o Protocol commands may be available that are capable of forcing
1030 slave devices into inoperable states, including powering-off
1031 devices, forcing them into a listen-only state, or disabling
1032 alarms.
1034 o Protocol commands may be available that are capable of restarting
1035 communications and otherwise interrupting processes.
1037 o Protocol commands may be available that are capable of clearing,
1038 erasing, or resetting diagnostic information such as counters and
1039 diagnostic registers.
1041 o Protocol commands may be available that are capable of requesting
1042 sensitive information about the controllers, their configurations,
1043 or other need-to-know information.
1045 o Most protocols are application layer protocols transported over
1046 TCP; therefore it is easy to transport commands over non-standard
1047 ports or inject commands into authorized traffic flows.
1049 o Protocol commands may be available that are capable of
1050 broadcasting messages to many devices at once (i.e. a potential
1051 DoS).
1053 o Protocol commands may be available to query the device network to
1054 obtain defined points and their values (i.e. a configuration
1055 scan).
1057 o Protocol commands may be available that will list all available
1058 function codes (i.e. a function scan).
1060 These inherent vulnerabilities, along with increasing connectivity
1061 between IT and OT networks, make network-based attacks very feasible.
1063 Simple injection of malicious protocol commands provides control over
1064 the target process. Altering legitimate protocol traffic can also
1065 alter information about a process and disrupt the legitimate controls
1066 that are in place over that process. A man-in-the-middle attack
1067 could provide both control over a process and misrepresentation of
1068 data back to operator consoles.
1070 3.3. Electrical Utilities Future
1072 The business and technology trends that are sweeping the utility
1073 industry will drastically transform the utility business from the way
1074 it has been for many decades. At the core of many of these changes
1075 is a drive to modernize the electrical grid with an integrated
1076 telecommunications infrastructure. However, interoperability
1077 concerns, legacy networks, disparate tools, and stringent security
1078 requirements all add complexity to the grid transformation. Given
1079 the range and diversity of the requirements that should be addressed
1080 by the next generation telecommunications infrastructure, utilities
1081 need to adopt a holistic architectural approach to integrate the
1082 electrical grid with digital telecommunications across the entire
1083 power delivery chain.
1085 The key to modernizing grid telecommunications is to provide a
1086 common, adaptable, multi-service network infrastructure for the
1087 entire utility organization. Such a network serves as the platform
1088 for current capabilities while enabling future expansion of the
1089 network to accommodate new applications and services.
1091 To meet this diverse set of requirements, both today and in the
1092 future, the next generation utility telecommunications network will
1093 be based on an open-standards-based IP architecture. An end-to-end IP
1094 architecture takes advantage of nearly three decades of IP technology
1095 development, facilitating interoperability across disparate networks
1096 and devices, as has already been demonstrated in many mission-
1097 critical and highly secure networks.
1099 IPv6 is seen as a future telecommunications technology for the Smart
1100 Grid; the IEC (International Electrotechnical Commission) and
1101 different National Committees have mandated a specific ad hoc group
1102 (AHG8) to define the migration strategy to IPv6 for all the IEC TC57
1103 power automation standards.
1105 3.3.1. Migration to Packet-Switched Network
1107 Throughout the world, utilities are increasingly planning for a
1108 future based on smart grid applications requiring advanced
1109 telecommunications systems. Many of these applications utilize
1110 packet connectivity for communicating information and control signals
1111 across the utility's Wide Area Network (WAN), made possible by
1112 technologies such as multiprotocol label switching (MPLS). The data
1113 that traverses the utility WAN includes:
1115 o Grid monitoring, control, and protection data
1117 o Non-control grid data (e.g. asset data for condition-based
1118 monitoring)
1120 o Physical safety and security data (e.g. voice and video)
1122 o Remote worker access to corporate applications (voice, maps,
1123 schematics, etc.)
1125 o Field area network backhaul for smart metering, and distribution
1126 grid management
1128 o Enterprise traffic (email, collaboration tools, business
1129 applications)
1131 WANs support this wide variety of traffic to and from substations,
1132 the transmission and distribution grid, generation sites, between
1133 control centers, and between work locations and data centers. To
1134 maintain this rapidly expanding set of applications, many utilities
1135 are taking steps to evolve present time-division multiplexing (TDM)
1136 based and frame relay infrastructures to packet systems. Packet-
1137 based networks are designed to provide greater functionality and
1138 higher levels of service for applications, while continuing to
1139 deliver reliability and deterministic (real-time) traffic support.
1141 3.3.2. Telecommunications Trends
1143 These general telecommunications topics are in addition to the use
1144 cases that have been addressed so far. These include both current
1145 and future telecommunications-related topics that should be factored
1146 into the network architecture and design.
1148 3.3.2.1. General Telecommunications Requirements
1150 o IP Connectivity everywhere
1152 o Monitoring services everywhere and from different remote centers
1154 o Move services to a virtual data center
1156 o Unify access to applications / information from the corporate
1157 network
1159 o Unify services
1161 o Unified Communications Solutions
1163 o Mix of fiber and microwave technologies - obsolescence of SONET/
1164 SDH or TDM
1166 o Standardize grid telecommunications protocols to open standards to
1167 ensure interoperability
1169 o Reliable Telecommunications for Transmission and Distribution
1170 Substations
1172 o IEEE 1588 time synchronization Client / Server Capabilities
1174 o Integration of Multicast Design
1176 o QoS Requirements Mapping
1178 o Enable Future Network Expansion
1180 o Substation Network Resilience
1182 o Fast Convergence Design
1184 o Scalable Headend Design
1186 o Define Service Level Agreements (SLA) and Enable SLA Monitoring
1188 o Integration of 3G/4G Technologies and future technologies
1190 o Ethernet Connectivity for Station Bus Architecture
1192 o Ethernet Connectivity for Process Bus Architecture
1194 o Protection, teleprotection and PMU (Phasor Measurement Unit) on IP
1196 3.3.2.2. Specific Network topologies of Smart Grid Applications
1198 Utilities often have very large private telecommunications networks
1199 covering an entire territory or country. The main purpose of the
1200 network, until now, has been to support transmission network
1201 monitoring, control, and automation, remote control of generation
1202 sites, and providing FCAPS (Fault, Configuration, Accounting,
1203 Performance, Security) services from centralized network operation
1204 centers.
1206 Going forward, one network will support operation and maintenance of
1207 electrical networks (generation, transmission, and distribution),
1208 voice and data services for tens of thousands of employees and for
1209 exchange with neighboring interconnections, and administrative
1210 services. To meet those requirements, a utility may deploy several
1211 physical networks leveraging different technologies across the
1212 country: an optical network and a microwave network for instance.
1213 Each protection and automation system between two points has two
1214 telecommunications circuits, one on each network. Path diversity
1215 between two substations is key. Regardless of the event type
1216 (hurricane, ice storm, etc.), one path shall stay available so the
1217 system can still operate.
1219 In the optical network, signals are transmitted over tens of
1220 thousands of circuits using fiber optic links, microwave and
1221 telephone cables. This network is the nervous system of the
1222 utility's power transmission operations. The optical network
1223 represents tens of thousands of km of cable deployed along the power
1224 lines, with individual runs as long as 280 km.
1226 3.3.2.3. Precision Time Protocol
1228 Some utilities do not use GPS clocks in generation substations. One
1229 of the main reasons is that some of the generation plants are 30 to
1230 50 meters deep under ground and the GPS signal can be weak and
1231 unreliable. Instead, atomic clocks are used and synchronized
1232 amongst each other. Rubidium clocks provide the clock signal and
1233 1 ms timestamps for IRIG-B.
1235 Some companies plan to transition to the Precision Time Protocol
1236 (PTP, [IEEE1588]), distributing the synchronization signal over the
1237 IP/MPLS network. PTP provides a mechanism for synchronizing the
1238 clocks of participating nodes to a high degree of accuracy and
1239 precision.
1241 PTP operates based on the following assumptions:
1243 It is assumed that the network eliminates cyclic forwarding of PTP
1244 messages within each communication path (e.g. by using a spanning
1245 tree protocol).
1247 PTP is tolerant of an occasional missed message, duplicated
1248 message, or message that arrived out of order. However, PTP
1249 assumes that such impairments are relatively rare.
1251 PTP was designed assuming a multicast communication model, however
1252 PTP also supports a unicast communication model as long as the
1253 behavior of the protocol is preserved.
1255 Like all message-based time transfer protocols, PTP time accuracy
1256 is degraded by delay asymmetry in the paths taken by event
1257 messages. Asymmetry is not detectable by PTP, however, if such
1258 delays are known a priori, PTP can correct for asymmetry.
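The offset and delay arithmetic underlying PTP, and the effect of path asymmetry, can be shown with the standard two-timestamp exchange (the timestamp values below are illustrative):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    # Standard IEEE 1588 arithmetic: t1 = Sync sent (master clock),
    # t2 = Sync received (slave clock), t3 = Delay_Req sent (slave),
    # t4 = Delay_Req received (master).  The offset formula assumes
    # symmetric paths; an uncorrected asymmetry appears as an error
    # of half the asymmetry, undetectable by the protocol itself.
    offset = ((t2 - t1) - (t4 - t3)) / 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Symmetric 100 us paths with the slave clock 250 us ahead:
offset, delay = ptp_offset_and_delay(0.0, 350e-6, 400e-6, 250e-6)
# offset ~250 us, mean path delay ~100 us
```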
1260 IEC 61850 will recommend the use of the IEEE 1588 PTP Utility Profile
1261 (as defined in [IEC62439-3:2012] Annex B) which offers support for
1262 redundant attachment of clocks to Parallel Redundancy Protocol (PRP)
1263 and High-availability Seamless Redundancy (HSR) networks.
1265 3.3.3. Security Trends in Utility Networks
1267 Although advanced telecommunications networks can assist in
1268 transforming the energy industry by playing a critical role in
1269 maintaining high levels of reliability, performance, and
1270 manageability, they also introduce the need for an integrated
1271 security infrastructure. Many of the technologies being deployed to
1272 support smart grid projects such as smart meters and sensors can
1273 increase the vulnerability of the grid to attack. Top security
1274 concerns for utilities migrating to an intelligent smart grid
1275 telecommunications platform center on the following trends:
1277 o Integration of distributed energy resources
1279 o Proliferation of digital devices to enable management, automation,
1280 protection, and control
1282 o Regulatory mandates to comply with standards for critical
1283 infrastructure protection
1285 o Migration to new systems for outage management, distribution
1286 automation, condition-based maintenance, load forecasting, and
1287 smart metering
1289 o Demand for new levels of customer service and energy management
1291 This development of a diverse set of networks to support the
1292 integration of microgrids, open-access energy competition, and the
1293 use of network-controlled devices is driving the need for a converged
1294 security infrastructure for all participants in the smart grid,
1295 including utilities, energy service providers, and large commercial,
1296 industrial, and residential customers. Securing the assets of
1297 electric power delivery systems (from the control center to the
1298 substation, to the feeders and down to customer meters) requires an
1299 end-to-end security infrastructure that protects the myriad of
1300 telecommunications assets used to operate, monitor, and control power
1301 flow and measurement.
1303 "Cyber security" refers to all the security issues in automation and
1304 telecommunications that affect any functions related to the operation
1305 of the electric power systems. Specifically, it involves the
1306 concepts of:
1308 o Integrity : data cannot be altered undetectably
1310 o Authenticity : the telecommunications parties involved must be
1311 validated as genuine
1313 o Authorization : only requests and commands from the authorized
1314 users can be accepted by the system
1316 o Confidentiality : data must not be accessible to any
1317 unauthenticated users
1319 When designing and deploying new smart grid devices and
1320 telecommunications systems, it is imperative to understand the
1321 various impacts of these new components under a variety of attack
1322 situations on the power grid. Consequences of a cyber attack on the
1323 grid telecommunications network can be catastrophic. This is why
1324 security for the smart grid is not just an ad hoc feature or product;
1325 it is a complete framework integrating both physical and cyber
1326 security requirements and covering the entire smart grid network
1327 from generation to distribution. Security has therefore become one
1328 of the main foundations of the utility telecom network architecture
1329 and must be considered at every layer with a defense-in-depth
1330 approach. Migrating to IP based protocols is key to address these
1331 challenges for two reasons:
1333 o IP enables a rich set of features and capabilities to enhance the
1334 security posture
1336 o IP is based on open standards, which allows interoperability
1337 between different vendors and products, driving down the costs
1338 associated with implementing security solutions in OT networks.
1340 Securing OT (Operational Technology) telecommunications over packet-
1341 switched IP networks follows the same principles that are foundational
1342 for securing the IT infrastructure, i.e., consideration must be given
1343 to enforcing electronic access control for both person-to-machine and
1344 machine-to-machine communications, and providing the appropriate
1345 levels of data privacy, device and platform integrity, and threat
1346 detection and mitigation.
1348 3.4. Electrical Utilities Asks
1350 o Mixed L2 and L3 topologies
1352 o Deterministic behavior
1354 o Bounded latency and jitter
1356 o High availability, low recovery time
1358 o Redundancy, low packet loss
1360 o Precise timing
1362 o Centralized computing of deterministic paths
1364 o Distributed configuration may also be useful
1366 4. Building Automation Systems
1368 4.1. Use Case Description
1370 A Building Automation System (BAS) manages equipment and sensors in a
1371 building for improving residents' comfort, reducing energy
1372 consumption, and responding to failures and emergencies. For
1373 example, the BAS measures the temperature of a room using sensors and
1374 then controls the HVAC (heating, ventilating, and air conditioning)
1375 to maintain a set temperature and minimize energy consumption.
1377 A BAS primarily performs the following functions:
1379 o Periodically measures states of devices, for example humidity and
1380 illuminance of rooms, open/close state of doors, fan speed, etc.
1382 o Stores the measured data.
1384 o Provides the measured data to BAS systems and operators.
1386 o Generates alarms for abnormal state of devices.
1388 o Controls devices (e.g. turn off room lights at 10:00 PM).
1390 4.2. Building Automation Systems Today
1391 4.2.1. BAS Architecture
1393 A typical BAS architecture of today is shown in Figure 1.
1395 +----------------------------+
1396 | |
1397 | BMS HMI |
1398 | | | |
1399 | +----------------------+ |
1400 | | Management Network | |
1401 | +----------------------+ |
1402 | | | |
1403 | LC LC |
1404 | | | |
1405 | +----------------------+ |
1406 | | Field Network | |
1407 | +----------------------+ |
1408 | | | | | |
1409 | Dev Dev Dev Dev |
1410 | |
1411 +----------------------------+
1413 BMS := Building Management Server
1414 HMI := Human Machine Interface
1415 LC := Local Controller
1417 Figure 1: BAS architecture
1419 There are typically two layers of network in a BAS. The upper one is
1420 called the Management Network and the lower one is called the Field
1421 Network. In management networks an IP-based communication protocol
1422 is used, while in field networks non-IP based communication protocols
1423 ("field protocols") are mainly used. Field networks have specific
1424 timing requirements, whereas management networks can be best-effort.
1426 A Human Machine Interface (HMI) is typically a desktop PC used by
1427 operators to monitor and display device states, send device control
1428 commands to Local Controllers (LCs), and configure building schedules
1429 (for example "turn off all room lights in the building at 10:00 PM").
1431 A Building Management Server (BMS) performs the following operations.
1433 o Collect and store device states from LCs at regular intervals.
1435 o Send control values to LCs according to a building schedule.
1437 o Send an alarm signal to operators if it detects abnormal devices
1438 states.
1440 The BMS and HMI communicate with LCs via IP-based "management
1441 protocols" (see standards [bacnetip], [knx]).
1443 An LC is typically a Programmable Logic Controller (PLC) which is
1444 connected to several tens or hundreds of devices using "field
1445 protocols". An LC performs the following kinds of operations:
1447 o Measure device states and provide the information to BMS or HMI.
1449 o Send control values to devices, unilaterally or as part of a
1450 feedback control loop.
1452 There are many field protocols used today; some are standards-based
1453 and others are proprietary (see standards [lontalk], [modbus],
1454 [profibus] and [flnet]). The result is that BASs have multiple MAC/
1455 PHY modules and interfaces. This makes BASs more expensive, slower
1456 to develop, and can result in "vendor lock-in" with multiple types of
1457 management applications.
1459 4.2.2. BAS Deployment Model
1461 An example BAS for medium or large buildings is shown in Figure 2.
1462 The physical layout spans multiple floors, and there is a monitoring
1463 room where the BAS management entities are located. Each floor will
1464 have one or more LCs depending upon the number of devices connected
1465 to the field network.
1467 +--------------------------------------------------+
1468 | Floor 3 |
1469 | +----LC~~~~+~~~~~+~~~~~+ |
1470 | | | | | |
1471 | | Dev Dev Dev |
1472 | | |
1473 |--- | ------------------------------------------|
1474 | | Floor 2 |
1475 | +----LC~~~~+~~~~~+~~~~~+ Field Network |
1476 | | | | | |
1477 | | Dev Dev Dev |
1478 | | |
1479 |--- | ------------------------------------------|
1480 | | Floor 1 |
1481 | +----LC~~~~+~~~~~+~~~~~+ +-----------------|
1482 | | | | | | Monitoring Room |
1483 | | Dev Dev Dev | |
1484 | | | BMS HMI |
1485 | | Management Network | | | |
1486 | +--------------------------------+-----+ |
1487 | | |
1488 +--------------------------------------------------+
1490 Figure 2: BAS Deployment model for Medium/Large Buildings
1492 Each LC is connected to the monitoring room via the Management
1493 network, and the management functions are performed within the
1494 building. In most cases, fast Ethernet (e.g. 100BASE-T) is used for
1495 the management network. Since the management network is non-
1496 realtime, use of Ethernet without quality of service is sufficient
1497 for today's deployment.
1499 In the field network a variety of physical interfaces such as RS232C
1500 and RS485 are used, which have specific timing requirements. Thus if
1501 a field network is to be replaced with an Ethernet or wireless
1502 network, such networks must support time-critical deterministic
1503 flows.
1505 In Figure 3, another deployment model is presented in which the
1506 management system is hosted remotely. This is becoming popular for
1507 small office and residential buildings in which a standalone
1508 monitoring system is not cost-effective.
1510 +---------------+
1511 | Remote Center |
1512 | |
1513 | BMS HMI |
1514 +------------------------------------+ | | | |
1515 | Floor 2 | | +---+---+ |
1516 | +----LC~~~~+~~~~~+ Field Network| | | |
1517 | | | | | | Router |
1518 | | Dev Dev | +-------|-------+
1519 | | | |
1520 |--- | ------------------------------| |
1521 | | Floor 1 | |
1522 | +----LC~~~~+~~~~~+ | |
1523 | | | | | |
1524 | | Dev Dev | |
1525 | | | |
1526 | | Management Network | WAN |
1527 | +------------------------Router-------------+
1528 | |
1529 +------------------------------------+
1531 Figure 3: Deployment model for Small Buildings
1533 Some interoperability is possible today in the Management Network,
1534 but not in today's field networks due to their non-IP-based design.
1536 4.2.3. Use Cases for Field Networks
1538 Below are use cases for Environmental Monitoring, Fire Detection, and
1539 Feedback Control, and their implications for field network
1540 performance.
1542 4.2.3.1. Environmental Monitoring
1544 The BMS polls each LC at a maximum measurement interval of 100ms (for
1545 example to draw a historical chart of 1 second granularity with a 10x
1546 sampling interval) and then performs the operations as specified by
1547 the operator. Each LC needs to measure each of its several hundred
1548 sensors once per measurement interval. Latency is not critical in
1549 this scenario as long as all sensor values are completed in the
1550 measurement interval. Availability is expected to be 99.999%.
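To make the numbers concrete: if an LC must read all of its sensors within one measurement interval, the per-sensor time budget follows directly from the interval and the sensor count. A minimal sketch (illustrative only; the sensor count used in the example is an assumption):

```python
def per_sensor_budget_us(interval_ms, sensor_count):
    # Each read, including the field-bus transaction, must complete
    # within interval / sensor_count for all sensor values to fit
    # inside one measurement interval.
    return interval_ms * 1000.0 / sensor_count
```

With the 100 ms interval above and, say, 400 sensors per LC, each sensor read must complete within 250 us.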
1552 4.2.3.2. Fire Detection
1554 On detection of a fire, the BMS must stop the HVAC, close the fire
1555 shutters, turn on the fire sprinklers, send an alarm, etc. There are
1556 typically ~10s of sensors per LC that the BMS needs to manage. In this
1557 scenario the measurement interval is 10-50ms, the communication delay
1558 is 10ms, and the availability must be 99.9999%.
1560 4.2.3.3. Feedback Control
1562 BAS systems utilize feedback control in various ways; the most time-
1563 critical is control of DC motors, which require a short feedback
1564 interval (1-5ms) with low communication delay (10ms) and jitter
1565 (1ms). The feedback interval depends on the characteristics of the
1566 device and a target quality of control value. There are typically
1567 ~10s of such devices per LC.
1569 Communication delay is expected to be less than 10 ms and jitter
1570 less than 1 ms, while the availability must be 99.9999%.
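A simple way to express these bounds: given a series of measured one-way delays for a flow, the flow meets the DC-motor requirements only if the worst-case delay and the peak-to-peak jitter stay within the limits above. A sketch, using the 10 ms delay and 1 ms jitter bounds as defaults (the peak-to-peak definition of jitter is an assumption; variance-based definitions are also used):

```python
def meets_control_bounds(delays_ms, max_delay_ms=10.0, max_jitter_ms=1.0):
    # Jitter is taken here as the spread between the largest and
    # smallest observed one-way delay in the sample.
    worst = max(delays_ms)
    jitter = worst - min(delays_ms)
    return worst <= max_delay_ms and jitter <= max_jitter_ms
```

For example, delays of 5.0/5.4/5.9 ms satisfy the bounds, while 5.0/7.0 ms fails on jitter alone.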
1572 4.2.4. Security Considerations
1574 When BAS field networks were developed it was assumed that the field
1575 networks would always be physically isolated from external networks
1576 and therefore security was not a concern. In today's world many BASs
1577 are managed remotely and are thus connected to shared IP networks and
1578 so security is definitely a concern, yet security features are not
1579 available in the majority of BAS field network deployments.
1581 The management network, being an IP-based network, has the protocols
1582 available to enable network security, but in practice many BAS
1583 systems do not implement even the available security features such as
1584 device authentication or encryption for data in transit.
1586 4.3. BAS Future
1588 In the future we expect more fine-grained environmental monitoring
1589 and lower energy consumption, which will require more sensors and
1590 devices, thus requiring larger and more complex building networks.
1592 We expect building networks to be connected to or converged with
1593 other networks (Enterprise network, Home network, and Internet).
1595 Therefore better facilities for network management, control,
1596 reliability and security are critical in order to improve resident
1597 and operator convenience and comfort. For example the ability to
1598 monitor and control building devices via the internet would enable
1599 (for example) control of room lights or HVAC from a resident's
1600 desktop PC or phone application.
1602 4.4. BAS Asks
1604 The community would like to see an interoperable protocol
1605 specification that can satisfy the timing, security, availability and
1606 QoS constraints described above, such that the resulting converged
1607 network can replace the disparate field networks. Ideally this
1608 connectivity could extend to the open Internet.
1610 This would imply an architecture that can guarantee
1612 o Low communication delays (from <10ms to 100ms in a network of
1613 several hundred devices)
1615 o Low jitter (< 1 ms)
1617 o Tight feedback intervals (1ms - 10ms)
1619 o High network availability (up to 99.9999%)
1621 o Availability of network data in disaster scenarios
1623 o Authentication between management and field devices (both local
1624 and remote)
1626 o Integrity and data origin authentication of communication data
1627 between field and management devices
1629 o Confidentiality of data when communicated to a remote device
1631 5. Wireless for Industrial
1633 5.1. Use Case Description
1635 Wireless networks are useful for industrial applications, for example
1636 when portable, fast-moving or rotating objects are involved, and for
1637 the resource-constrained devices found in the Internet of Things
1638 (IoT).
1640 Such network-connected sensors, actuators, control loops (etc.)
1641 typically require that the underlying network support real-time
1642 quality of service (QoS), as well as specific classes of other
1643 network properties such as reliability, redundancy, and security.
1645 These networks may also contain very large numbers of devices, for
1646 example for factories, "big data" acquisition, and the IoT. Given
1647 the large numbers of devices installed, and the potential
1648 pervasiveness of the IoT, this is a huge and very cost-sensitive
1649 market. For example, a 1% cost reduction in some areas could save
1650 $100B.
1652 5.1.1. Network Convergence using 6TiSCH
1654 Some wireless network technologies support real-time QoS, and are
1655 thus useful for these kinds of networks, but others do not. For
1656 example WiFi is pervasive but does not provide guaranteed timing or
1657 delivery of packets, and thus is not useful in this context.
1659 In this use case we focus on one specific wireless network technology
1660 which does provide the required deterministic QoS, which is "IPv6
1661 over the TSCH mode of IEEE 802.15.4e" (6TiSCH, where TSCH stands for
1662 "Time-Slotted Channel Hopping", see [I-D.ietf-6tisch-architecture],
1663 [IEEE802154], [IEEE802154e], and [RFC7554]).
1665 There are other deterministic wireless buses and networks available
1666 today; however, they are incompatible with each other and
1667 incompatible with IP traffic (for example [ISA100], [WirelessHART]).
1669 Thus the primary goal of this use case is to apply 6TiSCH as a
1670 converged IP- and standards-based wireless network for industrial
1671 applications, i.e. to replace multiple proprietary and/or
1672 incompatible wireless networking and wireless network management
1673 standards.
1675 5.1.2. Common Protocol Development for 6TiSCH
1677 Today there are a number of protocols required by 6TiSCH which are
1678 still in development, and a second intent of this use case is to
1679 highlight the ways in which these "missing" protocols share goals in
1680 common with DetNet. Thus it is possible that some of the protocol
1681 technology developed for DetNet will also be applicable to 6TiSCH.
1683 These protocol goals are identified here, along with their
1684 relationship to DetNet. It is likely that ultimately the resulting
1685 protocols will not be identical, but will share design principles
1686 which contribute to the efficiency of enabling both DetNet and 6TiSCH.
1688 One such commonality is that, although at different time scales, in
1689 both TSN [IEEE802.1TSNTG] and TSCH a packet crossing the network from
1690 node to node follows a precise schedule, like a train that leaves
1691 intermediate stations at precise times along its path. This kind of
1692 operation reduces collisions, saves energy, and enables engineering
1693 the network for deterministic properties.
1695 Another commonality is remote monitoring and scheduling management of
1696 a TSCH network by a Path Computation Element (PCE) and Network
1697 Management Entity (NME). The PCE/NME manage timeslots and device
1698 resources in a manner that minimizes the interaction with and the
1699 load placed on resource-constrained devices. For example, a tiny IoT
1700 device may have just enough buffers to store one or a few IPv6
1701 packets, and will have limited bandwidth between peers such that it
1702 can maintain only a small amount of peer information, and will not be
1703 able to store many packets waiting to be forwarded. It is
1704 advantageous then for it to only be required to carry out the
1705 specific behavior assigned to it by the PCE/NME (as opposed to
1706 maintaining its own IP stack, for example).
1708 Note: Current WG discussion indicates that some peer-to-peer
1709 communication must be assumed, i.e. the PCE may communicate only
1710 indirectly with any given device, enabling hierarchical configuration
1711 of the system.
1713 6TiSCH depends on [PCE] and [I-D.finn-detnet-architecture].
1715 6TiSCH also depends on the fact that DetNet will maintain consistency
1716 with [IEEE802.1TSNTG].
1718 5.2. Wireless Industrial Today
1720 Today industrial wireless is accomplished using multiple
1721 deterministic wireless networks which are incompatible with each
1722 other and with IP traffic.
1724 6TiSCH is not yet fully specified, so it cannot be used in today's
1725 applications.
1727 5.3. Wireless Industrial Future
1729 5.3.1. Unified Wireless Network and Management
1731 We expect DetNet and 6TiSCH together to enable converged transport of
1732 deterministic and best-effort traffic flows between real-time
1733 industrial devices and wide area networks via IP routing. A high
1734 level view of a basic such network is shown in Figure 4.
1736 ---+-------- ............ ------------
1737 | External Network |
1738 | +-----+
1739 +-----+ | NME |
1740 | | LLN Border | |
1741 | | router +-----+
1742 +-----+
1743 o o o
1744 o o o o
1745 o o LLN o o o
1746 o o o o
1747 o
1749 Figure 4: Basic 6TiSCH Network
1751 Figure 5 shows a backbone router federating multiple synchronized
1752 6TiSCH subnets into a single subnet connected to the external
1753 network.
1755 ---+-------- ............ ------------
1756 | External Network |
1757 | +-----+
1758 | +-----+ | NME |
1759 +-----+ | +-----+ | |
1760 | | Router | | PCE | +-----+
1761 | | +--| |
1762 +-----+ +-----+
1763 | |
1764 | Subnet Backbone |
1765 +--------------------+------------------+
1766 | | |
1767 +-----+ +-----+ +-----+
1768 | | Backbone | | Backbone | | Backbone
1769 o | | router | | router | | router
1770 +-----+ +-----+ +-----+
1771 o o o o o
1772 o o o o o o o o o o o
1773 o o o LLN o o o o
1774 o o o o o o o o o o o o
1776 Figure 5: Extended 6TiSCH Network
1778 The backbone router must ensure end-to-end deterministic behavior
1779 between the LLN and the backbone. We would like to see this
1780 accomplished in conformance with the work done in
1781 [I-D.finn-detnet-architecture] with respect to Layer-3 aspects of
1782 deterministic networks that span multiple Layer-2 domains.
1784 The PCE must compute a deterministic path end-to-end across the TSCH
1785 network and IEEE802.1 TSN Ethernet backbone, and DetNet protocols are
1786 expected to enable end-to-end deterministic forwarding.
1788 +-----+
1789 | IoT |
1790 | G/W |
1791 +-----+
1792 ^ <---- Elimination
1793 | |
1794 Track branch | |
1795 +-------+ +--------+ Subnet Backbone
1796 | |
1797 +--|--+ +--|--+
1798 | | | Backbone | | | Backbone
1799 o | | | router | | | router
1800 +--/--+ +--|--+
1801 o / o o---o----/ o
1802 o o---o--/ o o o o o
1803 o \ / o o LLN o
1804 o v <---- Replication
1805 o
1807 Figure 6: 6TiSCH Network with PRE
1809 5.3.1.1. PCE and 6TiSCH ARQ Retries
1811 Note: The use of ARQ techniques in DetNet is currently considered
1812 a possible design alternative.
1814 6TiSCH uses the IEEE802.15.4 Automatic Repeat-reQuest (ARQ) mechanism
1815 to provide higher reliability of packet delivery. ARQ is related to
1816 packet replication and elimination because there are two independent
1817 paths for packets to arrive at the destination; if an expected
1818 packet does not arrive on one path, the destination checks for the
1819 packet on the second path.
1821 Although to date this mechanism is only used by wireless networks,
1822 this may be a technique that would be appropriate for DetNet and so
1823 aspects of the enabling protocol could be co-developed.
1825 For example, in Figure 6, a Track is laid out from a field device in
1826 a 6TiSCH network to an IoT gateway that is located on an IEEE802.1 TSN
1827 backbone.
1829 In ARQ the Replication function in the field device sends a copy of
1830 each packet over two different branches, and the PCE schedules each
1831 hop of both branches so that the two copies arrive in due time at the
1832 gateway. In case of a loss on one branch, hopefully the other copy
1833 of the packet still arrives within the allocated time. If two copies
1834 make it to the IoT gateway, the Elimination function in the gateway
1835 ignores the extra packet and presents only one copy to upper layers.
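The Elimination function described above can be sketched as follows; the sketch assumes, as packet replication/elimination schemes generally do, that each replicated copy carries a flow-unique sequence number. This is an illustration, not the actual DetNet or 6TiSCH mechanism:

```python
from collections import deque

class EliminationFunction:
    """Discard duplicate copies at the merge point of two branches."""

    def __init__(self, history=1024):
        self.seen = set()
        self.order = deque()   # bounded FIFO of recent sequence numbers
        self.history = history

    def accept(self, seq):
        """Return True for the first copy of a packet, False for extras."""
        if seq in self.seen:
            return False       # a copy already arrived: ignore this one
        self.seen.add(seq)
        self.order.append(seq)
        if len(self.order) > self.history:
            self.seen.discard(self.order.popleft())
        return True            # first copy: deliver to upper layers
```

The bounded history reflects a constrained device: a tiny node cannot remember every sequence number it has ever seen, so old entries are eventually forgotten.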
1837 At each 6TiSCH hop along the Track, the PCE may schedule more than
1838 one timeSlot for a packet, so as to support Layer-2 retries (ARQ).
1840 In current deployments, a TSCH Track does not necessarily support PRE
1841 but is systematically multi-path. This means that a Track is
1842 scheduled so as to ensure that each hop has at least two forwarding
1843 solutions, and the forwarding decision is to try the preferred one
1844 and use the other in case of Layer-2 transmission failure as detected
1845 by ARQ.
1847 5.3.2. Schedule Management by a PCE
1849 A common feature of 6TiSCH and DetNet is the action of a PCE to
1850 configure paths through the network. Specifically, what is needed is
1851 a protocol and data model that the PCE will use to get/set the
1852 relevant configuration from/to the devices, as well as perform
1853 operations on the devices. We expect that this protocol will be
1854 developed by DetNet with consideration for its reuse by 6TiSCH. The
1855 remainder of this section provides a bit more context from the 6TiSCH
1856 side.
1858 5.3.2.1. PCE Commands and 6TiSCH CoAP Requests
1860 The 6TiSCH device does not expect to place the request for bandwidth
1861 between itself and another device in the network. Rather, an
1862 operation control system invoked through a human interface specifies
1863 the required traffic specification and the end nodes (in terms of
1864 latency and reliability). Based on this information, the PCE must
1865 compute a path between the end nodes and provision the network with
1866 per-flow state that describes the per-hop operation for a given
1867 packet, the corresponding timeslots, and the flow identification that
1868 enables recognizing that a certain packet belongs to a certain path,
1869 etc.
1871 For a static configuration that serves a certain purpose for a long
1872 period of time, it is expected that a node will be provisioned in one
1873 shot with a full schedule, which incorporates the aggregation of its
1874 behavior for multiple paths. 6TiSCH expects that the programming of
1875 the schedule will be done over CoAP as discussed in
1876 [I-D.ietf-6tisch-coap].
1878 6TiSCH expects that the PCE commands will be mapped back and forth
1879 into CoAP by a gateway function at the edge of the 6TiSCH network.
1880 For instance, it is possible that a mapping entity on the backbone
1881 transforms a non-CoAP protocol such as PCEP into the RESTful
1882 interfaces that the 6TiSCH devices support. This architecture will
1883 be refined to comply with DetNet [I-D.finn-detnet-architecture] when
1884 the work is formalized. Related information about 6TiSCH can be
1885 found at [I-D.ietf-6tisch-6top-interface] and RPL [RFC6550].
1887 A protocol may be used to update the state in the devices during
1888 runtime, for example if it appears that a path through the network
1889 has ceased to perform as expected, but in 6TiSCH that flow was not
1890 designed and no protocol was selected. We would like to see DetNet
1891 define the appropriate end-to-end protocols to be used in that case.
1892 The implication is that these state updates take place once the
1893 system is configured and running, i.e. they are not limited to the
1894 initial communication of the configuration of the system.
1896 A "slotFrame" is the base object that a PCE would manipulate to
1897 program a schedule into an LLN node ([I-D.ietf-6tisch-architecture]).
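A hypothetical sketch of the kind of object involved: a slotframe can be viewed as a matrix of (timeslot, channel-offset) cells, each either free or assigned to a link with a neighbor. Class, field, and method names below are illustrative, not taken from the 6TiSCH data model:

```python
class SlotFrame:
    """Toy model of a TSCH slotframe a PCE might program into a node."""

    def __init__(self, num_slots=101, num_channels=16):
        self.num_slots = num_slots
        self.num_channels = num_channels
        # (slot, channel_offset) -> (neighbor, "tx" or "rx")
        self.cells = {}

    def schedule(self, slot, channel_offset, neighbor, direction):
        if not (0 <= slot < self.num_slots
                and 0 <= channel_offset < self.num_channels):
            raise ValueError("cell outside the slotframe")
        if (slot, channel_offset) in self.cells:
            raise ValueError("cell already allocated")
        self.cells[(slot, channel_offset)] = (neighbor, direction)
```

A PCE-style entity would allocate cells this way for each hop of a Track, rejecting conflicting allocations; the 101-slot, 16-channel defaults are common example values, not requirements.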
1899 We would like to see the PCE read energy data from devices, and
1900 compute paths that will implement policies on how energy in devices
1901 is consumed, for instance to ensure that the spent energy does not
1902 exceed the available energy over a period of time. Note: this
1903 statement implies that an extensible protocol for communicating
1904 device info to the PCE and enabling the PCE to act on it will be part
1905 of the DetNet architecture, however for subnets with specific
1906 protocols (e.g. CoAP) a gateway may be required.
1908 6TiSCH devices can discover their neighbors over the radio using a
1909 mechanism such as beacons, but even though the neighbor information
1910 is available in the 6TiSCH interface data model, 6TiSCH does not
1911 describe a protocol to proactively push the neighborhood information
1912 to a PCE. We would like to see DetNet define such a protocol; one
1913 possible design alternative is that it could operate over CoAP,
1914 alternatively it could be converted to/from CoAP by a gateway. We
1915 would like to see such a protocol carry multiple metrics, for example
1916 similar to those used for RPL operations [RFC6551].
1918 5.3.2.2. 6TiSCH IP Interface
1920 "6top" ([I-D.wang-6tisch-6top-sublayer]) is a logical link control
1921 sitting between the IP layer and the TSCH MAC layer which provides
1922 the link abstraction that is required for IP operations. The 6top
1923 data model and management interfaces are further discussed in
1924 [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].
1926 An IP packet that is sent along a 6TiSCH path uses the Differentiated
1927 Services Per-Hop-Behavior Group called Deterministic Forwarding, as
1928 described in [I-D.svshah-tsvwg-deterministic-forwarding].
1930 5.3.3. 6TiSCH Security Considerations
1932 On top of the classical requirements for protection of control
1933 signaling, it must be noted that 6TiSCH networks operate on limited
1934 resources that can be depleted rapidly in a DoS attack on the system,
1935 for instance by placing a rogue device in the network, or by
1936 obtaining management control and setting up unexpected additional
1937 paths.
1939 5.4. Wireless Industrial Asks
1941 6TiSCH depends on DetNet to define:
1943 o Configuration (state) and operations for deterministic paths
1945 o End-to-end protocols for deterministic forwarding (tagging, IP)
1947 o Protocol for packet replication and elimination
1949 6. Cellular Radio
1951 6.1. Use Case Description
1953 This use case describes the application of deterministic networking
1954 in the context of cellular telecom transport networks. Important
1955 elements include time synchronization, clock distribution, and ways
1956 of establishing time-sensitive streams for both Layer-2 and Layer-3
1957 user plane traffic.
1959 6.1.1. Network Architecture
1961 Figure 7 illustrates a typical 3GPP-defined cellular network
1962 architecture, which includes "Fronthaul" and "Midhaul" network
1963 segments. The "Fronthaul" is the network connecting base stations
1964 (baseband processing units) to the remote radio heads (antennas).
1965 The "Midhaul" is the network inter-connecting base stations (or small
1966 cell sites).
1968 In Figure 7 "eNB" ("E-UTRAN Node B") is the hardware that is
1969 connected to the mobile phone network which communicates directly
1970 with mobile handsets ([TS36300]).
1972 Y (remote radio heads (antennas))
1973 \
1974 Y__ \.--. .--. +------+
1975 \_( `. +---+ _(Back`. | 3GPP |
1976 Y------( Front )----|eNB|----( Haul )----| core |
1977 ( ` .Haul ) +---+ ( ` . ) ) | netw |
1978 /`--(___.-' \ `--(___.-' +------+
1979 Y_/ / \.--. \
1980 Y_/ _( Mid`. \
1981 ( Haul ) \
1982 ( ` . ) ) \
1983 `--(___.-'\_____+---+ (small cell sites)
1984 \ |SCe|__Y
1985 +---+ +---+
1986 Y__|eNB|__Y
1987 +---+
1988 Y_/ \_Y ("local" radios)
1990 Figure 7: Generic 3GPP-based Cellular Network Architecture
1992 6.1.2. Delay Constraints
1994 The available processing time for Fronthaul networking overhead is
1995 limited to the available time after the baseband processing of the
1996 radio frame has completed. For example in Long Term Evolution (LTE)
1997 radio, processing of a radio frame is allocated 3ms but typically the
1998 processing uses most of it, allowing only a small fraction to be used
1999 by the Fronthaul network (e.g. up to 250us one-way delay, though the
2000 existing spec ([NGMN-fronth]) supports delay only up to 100us). This
2001 ultimately determines the distance the remote radio heads can be
2002 located from the base stations (e.g., 100us equals roughly 20 km of
2003 optical fiber-based transport). Allocation options of the available
2004 time budget between processing and transport are under heavy
2005 discussions in the mobile industry.
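The "100us equals roughly 20 km" figure follows from the propagation speed of light in fiber, which is roughly c divided by the fiber's group index, i.e. about 5 us per km. A sketch of the arithmetic, where the index value is a typical assumption rather than a cited figure:

```python
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.468        # typical group index of single-mode fiber (assumed)

def max_fiber_km(one_way_budget_us):
    # Light travels at roughly c / n in fiber, i.e. about 4.9 us/km,
    # so a 100 us one-way budget allows roughly 20 km of fiber
    # (ignoring node and buffering delays, which shrink the budget).
    us_per_km = FIBER_INDEX / C_KM_PER_S * 1e6
    return one_way_budget_us / us_per_km
```

max_fiber_km(100) evaluates to roughly 20 km, matching the figure above; in practice, per-node processing and buffering delays consume part of the budget and reduce the usable distance.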
2007 For packet-based transport the allocated transport time (e.g. CPRI
2008 would allow for 100us delay [CPRI]) is consumed by all nodes and
2009 buffering between the remote radio head and the baseband processing
2010 unit, plus the distance-incurred delay.
2012 The baseband processing time and the available "delay budget" for the
2013 fronthaul is likely to change in the forthcoming "5G" due to reduced
2014 radio round trip times and other architectural and service
2015 requirements [NGMN].
2017 [METIS] documents the fundamental challenges as well as overall
2018 technical goals of the future 5G mobile and wireless system as the
2019 starting point. These future systems should support much higher data
2020 volumes and rates and significantly lower end-to-end latency for 100x
2021 more connected devices (at similar cost and energy consumption levels
2022 as today's system).
2024 For Midhaul connections, delay constraints are driven by Inter-Site
2025 radio functions like Coordinated Multipoint Processing (CoMP, see
2026 [CoMP]). CoMP reception and transmission is a framework in which
2027 multiple geographically distributed antenna nodes cooperate to
2028 improve the performance of the users served in the common cooperation
2029 area.  The design principle of CoMP is to extend the current
2030 single-cell-to-multi-UE (User Equipment) transmission to a multi-
2031 cell-to-multi-UE transmission by base station cooperation.
2033 CoMP has delay-sensitive performance parameters, which are "midhaul
2034 latency" and "CSI (Channel State Information) reporting and
2035 accuracy". The essential feature of CoMP is signaling between eNBs,
2036 so Midhaul latency is the dominating limitation of CoMP performance.
2037 Generally, CoMP can benefit from coordinated scheduling (either
2038 distributed or centralized) of different cells if the signaling delay
2039 between eNBs is within 1-10ms. This delay requirement is both rigid
2040 and absolute because any uncertainty in delay will degrade the
2041 performance significantly.
2043 Inter-site CoMP is one of the key requirements for 5G and is also a
2044 near-term goal for the current 4.5G network architecture.
2046 6.1.3. Time Synchronization Constraints
2048 Fronthaul time synchronization requirements are given by [TS25104],
2049 [TS36104], [TS36211], and [TS36133]. These can be summarized for the
2050 current 3GPP LTE-based networks as:
2052 Delay Accuracy:
2053 +-8ns (i.e. +-1/32 Tc, where Tc is the UMTS Chip time of 1/3.84
2054 MHz) resulting in a round trip accuracy of +-16ns. The value is
2055 this low to meet the 3GPP Timing Alignment Error (TAE) measurement
2056 requirements. Note: performance guarantees of low nanosecond
2057 values such as these are considered to be below the DetNet layer -
2058 it is assumed that the underlying implementation, e.g. the
2059 hardware, will provide sufficient support (e.g. buffering) to
2060 enable this level of accuracy. These values are maintained in the
2061 use case to give an indication of the overall application.
2063 Timing Alignment Error:
2064 Timing Alignment Error (TAE) is problematic to Fronthaul networks
2065 and must be minimized. If the transport network cannot guarantee
2066 low enough TAE then additional buffering has to be introduced at
2067 the edges of the network to buffer out the jitter. Buffering is
2068 not desirable as it reduces the total available delay budget.
2069 Packet Delay Variation (PDV) requirements can be derived from TAE
2070 for packet based Fronthaul networks.
2072 * For multiple input multiple output (MIMO) or TX diversity
2073 transmissions, at each carrier frequency, TAE shall not exceed
2074 65 ns (i.e. 1/4 Tc).
2076 * For intra-band contiguous carrier aggregation, with or without
2077 MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2
2078 Tc).
2080 * For intra-band non-contiguous carrier aggregation, with or
2081 without MIMO or TX diversity, TAE shall not exceed 260 ns (i.e.
2082 one Tc).
2084 * For inter-band carrier aggregation, with or without MIMO or TX
2085 diversity, TAE shall not exceed 260 ns.
2087 Transport link contribution to radio frequency error:
2088 +-2 PPB. This value is considered to be "available" for the
2089 Fronthaul link out of the total 50 PPB budget reserved for the
2090 radio interface. Note: the reason that the transport link
2091 contributes to radio frequency error is as follows. The current
2092 way of doing Fronthaul is from the radio unit to remote radio head
2093 directly. The remote radio head is essentially a passive device
2094 (without buffering, etc.).  The transport drives the antenna
2095 directly by feeding it with samples, and everything the transport
2096 adds is introduced to the radio as-is.  Thus, any additional
2097 frequency error caused by the transport shows up immediately on the
2098 radio as well.  Note: performance guarantees of low nanosecond
2099 values such as these are considered to be below the DetNet layer -
2100 it is assumed that the underlying implementation, e.g. the
2101 hardware, will provide sufficient support to enable this level of
2102 performance. These values are maintained in the use case to give
2103 an indication of the overall application.
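The chip-time arithmetic behind the values above can be checked with a small sketch. Tc and the listed limits are the ones quoted in this section; the code is illustrative only.

```python
# UMTS chip time Tc and the Fronthaul timing limits derived from it.
TC_NS = 1e9 / 3.84e6  # Tc = 1/3.84 MHz ~ 260.4 ns

delay_accuracy_ns = TC_NS / 32  # +-8 ns, i.e. +-16 ns round trip

tae_limits_ns = {
    "MIMO / TX diversity":          TC_NS / 4,  # ~65 ns
    "intra-band contiguous CA":     TC_NS / 2,  # ~130 ns
    "intra-band non-contiguous CA": TC_NS,      # ~260 ns
    "inter-band CA":                260.0,      # fixed 260 ns
}

print(f"delay accuracy: +-{delay_accuracy_ns:.0f} ns")
for scenario, limit in tae_limits_ns.items():
    print(f"{scenario}: TAE <= {limit:.0f} ns")
```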
2105 The above listed time synchronization requirements are difficult to
2106 meet with point-to-point connected networks, and more difficult when
2107 the network includes multiple hops. It is expected that networks
2108 must include buffering at the ends of the connections as imposed by
2109 the jitter requirements, since trying to meet the jitter requirements
2110 in every intermediate node is likely to be too costly. However,
2111 every measure to reduce jitter and delay on the path makes it easier
2112 to meet the end-to-end requirements.
2114 In order to meet the timing requirements both senders and receivers
2115 must remain time synchronized, demanding very accurate clock
2116 distribution, for example support for IEEE 1588 transparent clocks in
2117 every intermediate node.
2119 In cellular networks from the LTE radio era onward, phase
2120 synchronization is needed in addition to frequency synchronization
2121 ([TS36300], [TS23401]).
2123 6.1.4. Transport Loss Constraints
2125 Fronthaul and Midhaul networks assume almost error-free transport.
2126 Errors can result in a reset of the radio interfaces, which can cause
2127 reduced throughput or broken radio connectivity for mobile customers.
2129 For packetized Fronthaul and Midhaul connections, packet loss may be
2130 caused by bit errors (BER), congestion, or network failures.  Current
2131 tools for eliminating packet loss for Fronthaul and Midhaul networks
2132 have serious challenges, for example retransmitting lost packets and/
2133 or using forward error correction (FEC) to circumvent bit errors is
2134 practically impossible due to the additional delay incurred. Using
2135 redundant streams for better guarantees for delivery is also
2136 practically impossible in many cases due to high bandwidth
2137 requirements of Fronthaul and Midhaul networks. Protection switching
2138 is also a candidate but current technologies for the path switch are
2139 too slow to avoid reset of mobile interfaces.
2141 Fronthaul links are assumed to be symmetric, and all Fronthaul
2142 streams (i.e. those carrying radio data) have equal priority and
2143 cannot delay or pre-empt each other. This implies that the network
2144 must guarantee that each time-sensitive flow meets its schedule.
2146 6.1.5. Security Considerations
2148 Establishing time-sensitive streams in the network entails reserving
2149 networking resources for long periods of time. It is important that
2150 these reservation requests be authenticated to prevent malicious
2151 reservation attempts from hostile nodes (or accidental
2152 misconfiguration). This is particularly important in the case where
2153 the reservation requests span administrative domains. Furthermore,
2154 the reservation information itself should be digitally signed to
2155 reduce the risk of a legitimate node pushing a stale or hostile
2156 configuration into another networking node.
2158 Note: This is considered important for the security policy of the
2159 network, but does not affect the core DetNet architecture and design.
2161 6.2. Cellular Radio Networks Today
2163 6.2.1. Fronthaul
2165 Today's Fronthaul networks typically consist of:
2167 o Dedicated point-to-point fiber connections
2169 o Proprietary protocols and framings
2171 o Custom equipment and no real networking
2173 Current solutions for Fronthaul are direct optical cables or
2174 Wavelength-Division Multiplexing (WDM) connections.
2176 6.2.2. Midhaul and Backhaul
2178 Today's Midhaul and Backhaul networks typically consist of:
2180 o Mostly normal IP networks, MPLS-TP, etc.
2182 o Clock distribution and sync using 1588 and SyncE
2184 Telecommunication networks in the Mid- and Backhaul are already
2185 heading towards transport networks where precise time synchronization
2186 support is one of the basic building blocks. While the transport
2187 networks themselves have practically transitioned to all-IP packet-
2188 based networks to meet the bandwidth and cost requirements, highly
2189 accurate clock distribution has become a challenge.
2191 In the past, Mid- and Backhaul connections were typically based on
2192 Time Division Multiplexing (TDM) and provided frequency
2193 synchronization capabilities as a part of the transport media.
2194 Alternatively other technologies such as Global Positioning System
2195 (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].
2197 Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985]
2198 for legacy transport support) have become popular tools to build and
2199 manage new all-IP Radio Access Networks (RANs)
2200 [I-D.kh-spring-ip-ran-use-case]. Although various timing and
2201 synchronization optimizations have already been proposed and
2202 implemented including 1588 PTP enhancements
2203 [I-D.ietf-tictoc-1588overmpls] and [I-D.ietf-mpls-residence-time],
2204 these solutions are not necessarily sufficient for the forthcoming RAN
2205 architectures nor do they guarantee the more stringent time-
2206 synchronization requirements such as [CPRI].
2208 There are also existing solutions for TDM over IP such as [RFC5087]
2209 and [RFC4553], as well as TDM over Ethernet transports such as
2210 [RFC5086].
2212 6.3. Cellular Radio Networks Future
2214 Future Cellular Radio Networks will be based on a mix of different
2215 xHaul networks (xHaul = front-, mid- and backhaul), and future
2216 transport networks should be able to support all of them
2217 simultaneously. It is already envisioned today that:
2219 o Not all "cellular radio network" traffic will be IP, for example
2220 some will remain at Layer 2 (e.g. Ethernet based). DetNet
2221 solutions must address all traffic types (Layer 2, Layer 3) with
2222 the same tools and allow their transport simultaneously.
2224 o All forms of xHaul networks will need some form of DetNet
2225 solutions. For example with the advent of 5G some Backhaul
2226 traffic will also have DetNet requirements (e.g. traffic belonging
2227 to time-critical 5G applications).
2229 We would like to see the following in future Cellular Radio networks:
2231 o Unified standards-based transport protocols and standard
2232 networking equipment that can make use of underlying deterministic
2233 link-layer services
2235 o Unified and standards-based network management systems and
2236 protocols in all parts of the network (including Fronthaul)
2238 New radio access network deployment models and architectures may
2239 require time-sensitive networking services with strict requirements
2240 on other parts of the network that previously were not considered to
2241 be packetized at all. Time and synchronization support are already
2242 topical for Backhaul and Midhaul packet networks [MEF] and are
2243 becoming a real issue for Fronthaul networks also. Specifically in
2244 Fronthaul networks the timing and synchronization requirements can be
2245 extreme for packet based technologies, for example, on the order of
2246 sub +-20 ns packet delay variation (PDV) and frequency accuracy of
2247 +-0.002 PPM [Fronthaul].
2249 The actual transport protocols and/or solutions to establish required
2250 transport "circuits" (pinned-down paths) for Fronthaul traffic are
2251 still undefined. Those are likely to include (but are not limited
2252 to) solutions directly over Ethernet, over IP, and using MPLS/
2253 PseudoWire transport.
2255 Even the current time-sensitive networking features may not be
2256 sufficient for Fronthaul traffic. Therefore, having specific
2257 profiles that take the requirements of Fronthaul into account is
2258 desirable [IEEE8021CM].
2260 Interesting and important work for time-sensitive networking has been
2261 done for Ethernet [TSNTG], which specifies the use of the IEEE 1588
2262 Precision Time Protocol (PTP) [IEEE1588] in the context of IEEE 802.1D and
2263 IEEE 802.1Q. [IEEE8021AS] specifies a Layer 2 time synchronizing
2264 service, and other specifications such as IEEE 1722 [IEEE1722]
2265 specify Ethernet-based Layer-2 transport for time-sensitive streams.
2267 New promising work seeks to enable the transport of time-sensitive
2268 fronthaul streams in Ethernet bridged networks [IEEE8021CM].
2269 Analogous to IEEE 1722 there is an ongoing standardization effort to
2270 define the Layer-2 transport encapsulation format for transporting
2271 radio over Ethernet (RoE) in the IEEE 1904.3 Task Force [IEEE19043].
2273 All-IP RANs and xHaul networks would benefit from time
2274 synchronization and time-sensitive transport services. Although
2275 Ethernet appears to be the unifying technology for the transport,
2276 there is still a disconnect in providing Layer 3 services.  The
2277 protocol stack typically has a number of layers below the Ethernet
2278 Layer 2 that are visible to the Layer 3 IP transport.  It is not
2279 uncommon that on top of the lowest-layer (optical) transport there is
2280 a first layer of Ethernet, followed by one or more layers of MPLS,
2281 PseudoWires and/or other tunneling protocols, finally carrying the
2282 Ethernet layer visible to the user-plane IP traffic.
2284 While there are existing technologies to establish circuits through
2285 the routed and switched networks (especially in MPLS/PWE space),
2286 there is still no way to signal the time synchronization and time-
2287 sensitive stream requirements/reservations for Layer-3 flows in a way
2288 that addresses the entire transport stack, including the Ethernet
2289 layers that need to be configured.
2291 Furthermore, not all "user plane" traffic will be IP. Therefore, the
2292 same solution also must address the use cases where the user plane
2293 traffic is a different layer, for example Ethernet frames.
2295 There is existing work describing the problem statement
2296 [I-D.finn-detnet-problem-statement] and the architecture
2297 [I-D.finn-detnet-architecture] for deterministic networking (DetNet)
2298 that targets solutions for time-sensitive (IP/transport) streams with
2299 deterministic properties over Ethernet-based switched networks.
2301 6.4. Cellular Radio Networks Asks
2303 A standard for data plane transport specification which is:
2305 o Unified among all xHauls (meaning that different flows with
2306 diverse DetNet requirements can coexist in the same network and
2307 traverse the same nodes without interfering with each other)
2309 o Deployed in a highly deterministic network environment
2311 A standard for data flow information models that are:
2313 o Aware of the time sensitivity and constraints of the target
2314 networking environment
2316 o Aware of underlying deterministic networking services (e.g., on
2317 the Ethernet layer)
2319 7. Industrial M2M
2321 7.1. Use Case Description
2323 Industrial Automation in general refers to automation of
2324 manufacturing, quality control and material processing. In this
2325 "machine to machine" (M2M) use case we consider machine units in a
2326 plant floor which periodically exchange data with upstream or
2327 downstream machine modules and/or a supervisory controller within a
2328 local area network.
2330 The actors of M2M communication are Programmable Logic Controllers
2331 (PLCs). Communication between PLCs and between PLCs and the
2332 supervisory PLC (S-PLC) is achieved via critical control/data
2333 streams, as shown in Figure 8.
2335 S (Sensor)
2336 \ +-----+
2337 PLC__ \.--. .--. ---| MES |
2338 \_( `. _( `./ +-----+
2339 A------( Local )-------------( L2 )
2340 ( Net ) ( Net ) +-------+
2341 /`--(___.-' `--(___.-' ----| S-PLC |
2342 S_/ / PLC .--. / +-------+
2343 A_/ \_( `.
2344 (Actuator) ( Local )
2345 ( Net )
2346 /`--(___.-'\
2347 / \ A
2348 S A
2350 Figure 8: Current Generic Industrial M2M Network Architecture
2352 This use case focuses on PLC-related communications; communication to
2353 Manufacturing-Execution-Systems (MESs) is not addressed.
2355 This use case covers only critical control/data streams; non-critical
2356 traffic between industrial automation applications (such as
2357 communication of state, configuration, set-up, and database
2358 communication) is adequately served by currently available
2359 prioritizing techniques.  Such traffic can use up to 80% of the total
2360 bandwidth required. There is also a subset of non-time-critical
2361 traffic that must be reliable even though it is not time sensitive.
2363 In this use case the primary need for deterministic networking is to
2364 provide end-to-end delivery of M2M messages within specific timing
2365 constraints, for example in closed loop automation control. Today
2366 this level of determinism is provided by proprietary networking
2367 technologies. In addition, standard networking technologies are used
2368 to connect the local network to remote industrial automation sites,
2369 e.g. over an enterprise or metro network which also carries other
2370 types of traffic. Therefore, flows that should be forwarded with
2371 deterministic guarantees need to be sustained regardless of the
2372 amount of other flows in those networks.
2374 7.2. Industrial M2M Communication Today
2376 Today, proprietary networks fulfill the needed timing and
2377 availability for M2M networks.
2379 The network topologies used today by industrial automation are
2380 similar to those used by telecom networks: Daisy Chain, Ring, Hub and
2381 Spoke, and Comb (a subset of Daisy Chain).
2383 PLC-related control/data streams are transmitted periodically and
2384 carry either a pre-configured payload or a payload configured during
2385 runtime.
2387 Some industrial applications require time synchronization at the end
2388 nodes. For such time-coordinated PLCs, accuracy of 1 microsecond is
2389 required.  Even in the case of "non-time-coordinated" PLCs, time sync
2390 may be needed, e.g., for timestamping of sensor data.
2392 Industrial network scenarios require advanced security solutions.
2393 Many of the current industrial production networks are physically
2394 separated.  Preventing critical flows from being leaked outside a domain
2395 is handled today by filtering policies that are typically enforced in
2396 firewalls.
2398 7.2.1. Transport Parameters
2400 The Cycle Time defines the frequency of message(s) between industrial
2401 actors. The Cycle Time is application dependent, in the range of 1ms
2402 - 100ms for critical control/data streams.
2404 Because industrial applications assume deterministic transport for
2405 critical control/data streams, it is sufficient to specify an upper
2406 bound on latency (maximum latency) rather than separate latency and
2407 delay variation parameters.  The underlying networking
2408 infrastructure must ensure a maximum end-to-end delivery time of
2409 messages in the range of 100 microseconds to 50 milliseconds
2410 depending on the control loop application.
2412 The bandwidth requirements of control/data streams are usually
2413 calculated directly from the bytes-per-cycle parameter of the control
2414 loop. For PLC-to-PLC communication one can expect 2 - 32 streams
2415 with packet size in the range of 100 - 700 bytes. For S-PLC to PLCs
2416 the number of streams is higher - up to 256 streams. Usually no more
2417 than 20% of available bandwidth is used for critical control/data
2418 streams. In today's networks 1Gbps links are commonly used.
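The bandwidth arithmetic above can be illustrated with a short sketch. The stream count, packet size, and cycle time are the example ranges from the text; framing and protocol overhead are ignored.

```python
# Estimate critical control/data stream bandwidth from the
# bytes-per-cycle and cycle-time parameters of the control loop.

def stream_bw_bps(payload_bytes, cycle_time_s):
    """Raw payload bandwidth of one periodic stream, in bits/s."""
    return payload_bytes * 8 / cycle_time_s

# Worst case from the text: 32 PLC-to-PLC streams, 700-byte packets,
# 1 ms cycle time.
total = 32 * stream_bw_bps(700, 0.001)
print(f"{total / 1e6:.1f} Mbit/s")            # ~179.2 Mbit/s
print(f"{total / 1e9:.1%} of a 1 Gbps link")  # under the 20% guideline
```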
2420 Most PLC control loops are rather tolerant of packet loss; however,
2421 critical control/data streams accept no more than one packet loss per
2422 consecutive communication cycle (i.e. if a packet gets lost in cycle
2423 "n", then the next cycle ("n+1") must be lossless). After two or
2424 more consecutive packet losses the network may be considered to be
2425 "down" by the Application.
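The loss-tolerance rule above can be sketched as a simple per-cycle monitor; the class and method names are illustrative, not taken from any standard.

```python
# Per-cycle loss monitor for a critical control/data stream:
# a single lost cycle is tolerated, but two consecutive lost
# cycles mean the application considers the network "down".

class LossMonitor:
    def __init__(self):
        self.prev_cycle_lost = False
        self.down = False

    def record_cycle(self, packet_received):
        """Record one communication cycle; return the "down" state."""
        if not packet_received and self.prev_cycle_lost:
            self.down = True
        self.prev_cycle_lost = not packet_received
        return self.down

m = LossMonitor()
m.record_cycle(True)
m.record_cycle(False)   # one lost cycle: still up
print(m.down)           # False
m.record_cycle(False)   # second consecutive loss: down
print(m.down)           # True
```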
2427 As network downtime may impact the whole production system the
2428 required network availability is rather high (99.999%).
2430 Based on the above parameters we expect that some form of redundancy
2431 will be required for M2M communications, however any individual
2432 solution depends on several parameters including cycle time, delivery
2433 time, etc.
2435 7.2.2. Stream Creation and Destruction
2437 In an industrial environment, critical control/data streams are
2438 created rather infrequently, on the order of ten times per day / week
2439 / month. Most of these critical control/data streams get created at
2440 machine startup, however flexibility is also needed during runtime,
2441 for example when adding or removing a machine. Going forward as
2442 production systems become more flexible, we expect a significant
2443 increase in the rate at which streams are created, changed and
2444 destroyed.
2446 7.3. Industrial M2M Future
2448 We would like to see a converged IP-standards-based network with
2449 deterministic properties that can satisfy the timing, security and
2450 reliability constraints described above. Today's proprietary
2451 networks could then be interfaced to such a network via gateways or,
2452 in the case of new installations, devices could be connected directly
2453 to the converged network.
2455 For this use case we expect time synchronization accuracy on the
2456 order of 1us.
2458 7.4. Industrial M2M Asks
2460 o Converged IP-based network
2462 o Deterministic behavior (bounded latency and jitter)
2464 o High availability (presumably through redundancy) (99.999 %)
2466 o Low message delivery time (100us - 50ms)
2468 o Low packet loss (burstless, 0.1-1 %)
2470 o Security (e.g. prevent critical flows from being leaked between
2471 physically separated networks)
2473 8. Use Case Common Elements
2475 Looking at the use cases collectively, the following common desires
2476 for the DetNet-based networks of the future emerge:
2478 o Open standards-based network (replace various proprietary
2479 networks, reduce cost, create multi-vendor market)
2481 o Centrally administered (though such administration may be
2482 distributed for scale and resiliency)
2484 o Integrates L2 (bridged) and L3 (routed) environments (independent
2485 of the Link layer, e.g. can be used with Ethernet, 6TiSCH, etc.)
2487 o Carries both deterministic and best-effort traffic (guaranteed
2488 end-to-end delivery of deterministic flows, deterministic flows
2489 isolated from each other and from best-effort traffic congestion,
2490 unused deterministic BW available to best-effort traffic)
2492 o Ability to add or remove systems from the network with minimal,
2493 bounded service interruption (applications include replacement of
2494 failed devices as well as plug and play)
2496 o Uses standardized data flow information models capable of
2497 expressing deterministic properties (models express device
2498 capabilities, flow properties. Protocols for pushing models from
2499 controller to devices, devices to controller)
2501 o Scalable size (long distances (many km) and short distances
2502 (within a single machine), many hops (radio repeaters, microwave
2503 links, fiber links...) and short hops (single machine))
2505 o Scalable timing parameters and accuracy (bounded latency,
2506 guaranteed worst case maximum, minimum. Low latency, e.g. control
2507 loops may be less than 1ms, but larger for wide area networks)
2509 o High availability (99.9999 percent up time requested, but may be
2510 up to twelve 9s)
2512 o Reliability, redundancy (lives at stake)
2514 o Security (from failures, attackers, misbehaving devices -
2515 sensitive to both packet content and arrival time)
2517 9. Use Cases Explicitly Out of Scope for DetNet
2519 This section contains use case text that has been determined to be
2520 outside of the scope of the present DetNet work.
2522 9.1. DetNet Scope Limitations
2524 The scope of DetNet is deliberately limited to specific use cases
2525 that are consistent with the WG charter, subject to the
2526 interpretation of the WG. At the time the DetNet Use Cases were
2527 solicited and provided by the authors the scope of DetNet was not
2528 clearly defined, and as that clarity has emerged, certain of the use
2529 cases have been determined to be outside the scope of the present
2530 DetNet work. Such text has been moved into this section to clarify
2531 that these use cases will not be supported by the DetNet work.
2533 The text in this section was moved here based on the following
2534 "exclusion" principles.  Alternatively, rather than being moved, some
2535 draft text has been modified in situ to reflect these same
2536 principles.
2538 The following principles have been established to clarify the scope
2539 of the present DetNet work.
2541 o The scope of networks addressed by DetNet is limited to networks
2542 that can be centrally controlled, i.e. an "enterprise" aka
2543 "corporate" network. This explicitly excludes "the open
2544 Internet".
2546 o Maintaining synchronized time across a DetNet network is crucial
2547 to its operation, however DetNet assumes that time is to be
2548 maintained using other means, for example (but not limited to)
2549 Precision Time Protocol ([IEEE1588]). A use case may state the
2550 accuracy and reliability that it expects from the DetNet network
2551 as part of a whole system, however it is understood that such
2552 timing properties are not guaranteed by DetNet itself. It is
2553 currently an open question as to whether DetNet protocols will
2554 include a way for an application to communicate such timing
2555 expectations to the network, and if so whether they would be
2556 expected to materially affect the performance they would receive
2557 from the network as a result.
2559 9.2. Internet-based Applications
2561 9.2.1. Use Case Description
2563 There are many applications that communicate across the open Internet
2564 that could benefit from guaranteed delivery and bounded latency. The
2565 following are some representative examples.
2567 9.2.1.1. Media Content Delivery
2569 Media content delivery continues to be an important use of the
2570 Internet, yet users often experience poor quality audio and video due
2571 to the delay and jitter inherent in today's Internet.
2573 9.2.1.2. Online Gaming
2575 Online gaming is a significant part of the gaming market, however
2576 latency can degrade the end user experience. For example "First
2577 Person Shooter" (FPS) games are highly delay-sensitive.
2579 9.2.1.3. Virtual Reality
2581 Virtual reality (VR) has many commercial applications including real
2582 estate presentations, remote medical procedures, and so on. Low
2583 latency is critical to interacting with the virtual world because
2584 perceptual delays can cause motion sickness.
2586 9.2.2. Internet-Based Applications Today
2588 Internet service today is by definition "best effort", with no
2589 guarantees on delivery or bandwidth.
2591 9.2.3. Internet-Based Applications Future
2593 We imagine an Internet from which we will be able to play a video
2594 without glitches and play games without lag.
2596 For online gaming, the maximum round-trip delay can be 100ms, and
2597 stricter for FPS gaming, where it can be 10-50ms.  Transport delay is
2598 the dominant part, with a 5-20ms budget.
2600 For VR, a maximum delay of 1-10ms is needed, and the total network
2601 budget is 1-5ms for remote VR.
2603 Flow identification can be used for gaming and VR, i.e. it can
2604 recognize a critical flow and provide appropriate latency bounds.
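As a minimal sketch of such latency-bound provisioning, assuming flows have already been recognized: the bounds below are the figures quoted in this section, while the class names and the mapping itself are hypothetical.

```python
# Illustrative mapping from a recognized application class to the
# end-to-end latency bound quoted in the text (values in ms).
# How a flow is recognized (without 5-tuple matching) is out of
# scope here.

LATENCY_BOUNDS_MS = {
    "online-gaming": 100,  # max round-trip delay
    "fps-gaming": 50,      # stricter: 10-50 ms
    "remote-vr": 5,        # total network budget 1-5 ms
}

def latency_bound_ms(app_class):
    """Latency bound for a recognized class, or None if not critical."""
    return LATENCY_BOUNDS_MS.get(app_class)

print(latency_bound_ms("remote-vr"))  # 5
```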
2606 9.2.4. Internet-Based Applications Asks
2608 o Unified control and management protocols to handle time-critical
2609 data flow
2611 o Application-aware flow filtering mechanism to recognize the timing
2612 critical flow without doing 5-tuple matching
2614 o Unified control plane to provide low latency service on Layer-3
2615 without changing the data plane
2617 o OAM system and protocols which can help to provide E2E-delay
2618 sensitive service provisioning
2620 9.3. Pro Audio and Video - Digital Rights Management (DRM)
2622 This section was moved here because this is considered a Link layer
2623 topic, not a direct responsibility of DetNet.
2625 Digital Rights Management (DRM) is very important to the audio and
2626 video industries. Any time protected content is introduced into a
2627 network there are DRM concerns that must be maintained (see
2628 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of
2629 network technology, however there are cases when a secure link
2630 supporting authentication and encryption is required by content
2631 owners to carry their audio or video content when it is outside their
2632 own secure environment (for example see [DCI]).
2634 As an example, two techniques are Digital Transmission Content
2635 Protection (DTCP) and High-Bandwidth Digital Content Protection
2636 (HDCP). HDCP content is not approved for retransmission within any
2637 other type of DRM, while DTCP may be retransmitted under HDCP.
2638 Therefore if the source of a stream is outside of the network and it
2639 uses HDCP protection it is only allowed to be placed on the network
2640 with that same HDCP protection.
2642 9.4. Pro Audio and Video - Link Aggregation
2644 Note: The term "Link Aggregation" is used here as defined by the text
2645 in the following paragraph, i.e. not following a more common Network
2646 Industry definition. Current WG consensus is that this item won't be
2647 directly supported by the DetNet architecture, for example because it
2648 implies guarantee of in-order delivery of packets which conflicts
2649 with the core goal of achieving the lowest possible latency.
2651 For transmitting streams that require more bandwidth than a single
2652 link in the target network can support, link aggregation is a
2653 technique for combining (aggregating) the bandwidth available on
2654 multiple physical links to create a single logical link of the
2655 required bandwidth. However, if aggregation is to be used, the
2656 network controller (or equivalent) must be able to determine the
2657 maximum latency of any path through the aggregate link.
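A minimal sketch of that controller-side computation, with illustrative member-link values: the worst-case latency of the logical link is the maximum over its members, while the usable bandwidth is at best their sum.

```python
# For an aggregate of member links, a controller must know the
# worst-case latency of any path through the aggregate.  All
# member-link values below are illustrative.

member_links = [
    {"bw_gbps": 1.0, "max_latency_us": 120},
    {"bw_gbps": 1.0, "max_latency_us": 95},
    {"bw_gbps": 1.0, "max_latency_us": 140},
]

aggregate_bw = sum(link["bw_gbps"] for link in member_links)
aggregate_max_latency = max(link["max_latency_us"] for link in member_links)

print(aggregate_bw, aggregate_max_latency)  # 3.0 140
```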
2659 10. Acknowledgments
2661 10.1. Pro Audio
2663 This section was derived from draft-gunther-detnet-proaudio-req-01.
2665 The editors would like to acknowledge the help of the following
2666 individuals and the companies they represent:
2668 Jeff Koftinoff, Meyer Sound
2670 Jouni Korhonen, Associate Technical Director, Broadcom
2672 Pascal Thubert, CTAO, Cisco
2674 Kieran Tyrrell, Sienda New Media Technologies GmbH
2676 10.2. Utility Telecom
2678 This section was derived from draft-wetterwald-detnet-utilities-reqs-
2679 02.
2681 Faramarz Maghsoodlou, Ph. D. IoT Connected Industries and Energy
2682 Practice Cisco
2684 Pascal Thubert, CTAO Cisco
2686 10.3. Building Automation Systems
2688 This section was derived from draft-bas-usecase-detnet-00.
2690 10.4. Wireless for Industrial
2692 This section was derived from draft-thubert-6tisch-4detnet-01.
2694 This specification derives from the 6TiSCH architecture, which is the
2695 result of multiple interactions, in particular during the 6TiSCH
2696 (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at
2697 the IETF.
2699 The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier
2700 Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael
2701 Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon,
2702 Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey,
2703 Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria
2704 Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation
2705 and various contributions.
2707 10.5. Cellular Radio
2709 This section was derived from draft-korhonen-detnet-telreq-00.
2711 10.6. Industrial M2M
2713 The authors would like to thank Feng Chen and Marcel Kiessling for
2714 their comments and suggestions.
2716 10.7. Internet Applications and CoMP
2718 This section was derived from draft-zha-detnet-use-case-00.
2720 This document has benefited from reviews, suggestions, comments and
2721 proposed text provided by the following members, listed in
2722 alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oliver
2723 Huang.
2725 11. Informative References
2731 [bacnetip]
2732 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP",
2733 January 1999.
2738 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND
2739 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_
2740 and_Enhancement_v2.0, March 2015,
2741 .
2744 [CONTENT_PROTECTION]
2745 Olsen, D., "1722a Content Protection", 2012,
2746 .
2749 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI);
2750 Interface Specification", CPRI Specification V6.1, July
2751 2014, .
2760 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification,
2761 Version 1.2", 2012, .
2769 [ESPN_DC2]
2770 Daley, D., "ESPN's DC2 Scales AVB Large", 2014,
2771 .
2774 [flnet] Japan Electrical Manufacturers' Association, "JEMA 1479 -
2775 English Edition", September 2012.
2777 [Fronthaul]
2778 Chen, D. and T. Mustala, "Ethernet Fronthaul
2779 Considerations", IEEE 1904.3, February 2015,
2780 .
2787 [I-D.finn-detnet-architecture]
2788 Finn, N., Thubert, P., and M. Teener, "Deterministic
2789 Networking Architecture", draft-finn-detnet-
2790 architecture-04 (work in progress), March 2016.
2792 [I-D.finn-detnet-problem-statement]
2793 Finn, N. and P. Thubert, "Deterministic Networking Problem
2794 Statement", draft-finn-detnet-problem-statement-05 (work
2795 in progress), March 2016.
2797 [I-D.ietf-6tisch-6top-interface]
2798 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
2799 (6top) Interface", draft-ietf-6tisch-6top-interface-04
2800 (work in progress), July 2015.
2802 [I-D.ietf-6tisch-architecture]
2803 Thubert, P., "An Architecture for IPv6 over the TSCH mode
2804 of IEEE 802.15.4", draft-ietf-6tisch-architecture-10 (work
2805 in progress), June 2016.
2807 [I-D.ietf-6tisch-coap]
2808 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and
2809 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work
2810 in progress), March 2015.
2823 [I-D.ietf-mpls-residence-time]
2824 Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S.,
2825 and S. Vainshtein, "Residence Time Measurement in MPLS
2826 network", draft-ietf-mpls-residence-time-09 (work in
2827 progress), April 2016.
2829 [I-D.ietf-roll-rpl-industrial-applicability]
2830 Phinney, T., Thubert, P., and R. Assimiti, "RPL
2831 applicability in industrial networks", draft-ietf-roll-
2832 rpl-industrial-applicability-02 (work in progress),
2833 October 2013.
2835 [I-D.ietf-tictoc-1588overmpls]
2836 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L.
2837 Montini, "Transporting Timing messages over MPLS
2838 Networks", draft-ietf-tictoc-1588overmpls-07 (work in
2839 progress), October 2015.
2841 [I-D.kh-spring-ip-ran-use-case]
2842 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing
2843 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02
2844 (work in progress), November 2014.
2846 [I-D.svshah-tsvwg-deterministic-forwarding]
2847 Shah, S. and P. Thubert, "Deterministic Forwarding PHB",
2848 draft-svshah-tsvwg-deterministic-forwarding-04 (work in
2849 progress), August 2015.
2851 [I-D.thubert-6lowpan-backbone-router]
2852 Thubert, P., "6LoWPAN Backbone Router", draft-thubert-
2853 6lowpan-backbone-router-03 (work in progress), February
2854 2013.
2856 [I-D.wang-6tisch-6top-sublayer]
2857 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
2858 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in
2859 progress), November 2015.
2861 [IEC61850-90-12]
2862 TC57 WG10, IEC., "IEC 61850-90-12 TR: Communication
2863 networks and systems for power utility automation - Part
2864 90-12: Wide area network engineering guidelines", 2015.
2866 [IEC62439-3:2012]
2867 TC65, IEC., "IEC 62439-3: Industrial communication
2868 networks - High availability automation networks - Part 3:
2869 Parallel Redundancy Protocol (PRP) and High-availability
2870 Seamless Redundancy (HSR)", 2012.
2872 [IEEE1588]
2873 IEEE, "IEEE Standard for a Precision Clock Synchronization
2874 Protocol for Networked Measurement and Control Systems",
2875 IEEE Std 1588-2008, 2008,
2876 .
2879 [IEEE1722]
2880 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport
2881 Protocol for Time Sensitive Applications in a Bridged
2882 Local Area Network", IEEE Std 1722-2011, 2011,
2883 .
2886 [IEEE19043]
2887 IEEE Standards Association, "IEEE 1904.3 TF", IEEE 1904.3,
2888 2015, .
2890 [IEEE802.1TSNTG]
2891 IEEE Standards Association, "IEEE 802.1 Time-Sensitive
2892 Networks Task Group", March 2013,
2893 .
2895 [IEEE802154]
2896 IEEE standard for Information Technology, "IEEE std.
2897 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC)
2898 and Physical Layer (PHY) Specifications for Low-Rate
2899 Wireless Personal Area Networks".
2901 [IEEE802154e]
2902 IEEE standard for Information Technology, "IEEE std.
2903 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC)
2904 and Physical Layer (PHY) Specifications for Low-Rate
2905 Wireless Personal Area Networks, June 2011, as amended by
2906 IEEE std. 802.15.4e, Part. 15.4: Low-Rate Wireless
2907 Personal Area Networks (LR-WPANs) Amendment 1: MAC
2908 sublayer", April 2012.
2911 [IEEE8021AS]
2912 IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)",
2913 IEEE 802.1AS-2011, 2011,
2914 .
2917 [IEEE8021CM]
2918 Farkas, J., "Time-Sensitive Networking for Fronthaul",
2919 Unapproved PAR, PAR for a New IEEE Standard;
2920 IEEE P802.1CM, April 2015,
2921 .
2924 [IEEE8021TSN]
2925 IEEE 802.1, "The charter of the TG is to provide the
2926 specifications that will allow time-synchronized low
2927 latency streaming services through 802 networks.", 2016,
2928 .
2930 [IETFDetNet]
2931 IETF, "Charter for IETF DetNet Working Group", 2015,
2932 .
2934 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation",
2935 .
2937 [ISA100.11a]
2938 ISA/ANSI, "Wireless Systems for Industrial Automation:
2939 Process Control and Related Applications - ISA100.11a-2011
2940 - IEC 62734", 2011, .
2943 [ISO7240-16]
2944 ISO, "ISO 7240-16:2007 Fire detection and alarm systems --
2945 Part 16: Sound system control and indicating equipment",
2946 2007, .
2949 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006.
2951 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0",
2952 1994.
2954 [LTE-Latency]
2955 Johnston, S., "LTE Latency: How does it compare to other
2956 technologies", March 2014,
2957 .
2960 [MEF] MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells",
2961 MEF 22.1.1, July 2014,
2962 .
2965 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and
2966 wireless system", ICT-317669-METIS/D1.1 ICT-
2967 317669-METIS/D1.1, April 2013, .
2970 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL
2971 SPECIFICATION V1.1b", December 2006.
2973 [net5G] Ericsson, "5G Radio Access, Challenges for 2020 and
2974 Beyond", Ericsson white paper wp-5g, June 2013,
2975 .
2977 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0,
2978 February 2015, .
2981 [NGMN-fronth]
2982 NGMN Alliance, "Fronthaul Requirements for C-RAN", March
2983 2015, .
2986 [PCE] IETF, "Path Computation Element",
2987 .
2989 [profibus]
2990 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001.
2992 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
2993 Requirement Levels", BCP 14, RFC 2119,
2994 DOI 10.17487/RFC2119, March 1997,
2995 .
2997 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6
2998 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460,
2999 December 1998, .
3001 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
3002 "Definition of the Differentiated Services Field (DS
3003 Field) in the IPv4 and IPv6 Headers", RFC 2474,
3004 DOI 10.17487/RFC2474, December 1998,
3005 .
3007 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
3008 Label Switching Architecture", RFC 3031,
3009 DOI 10.17487/RFC3031, January 2001,
3010 .
3012 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
3013 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
3014 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
3015 .
3017 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
3018 Metric for IP Performance Metrics (IPPM)", RFC 3393,
3019 DOI 10.17487/RFC3393, November 2002,
3020 .
3022 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between
3023 Information Models and Data Models", RFC 3444,
3024 DOI 10.17487/RFC3444, January 2003,
3025 .
3027 [RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)",
3028 RFC 3972, DOI 10.17487/RFC3972, March 2005,
3029 .
3031 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation
3032 Edge-to-Edge (PWE3) Architecture", RFC 3985,
3033 DOI 10.17487/RFC3985, March 2005,
3034 .
3036 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing
3037 Architecture", RFC 4291, DOI 10.17487/RFC4291, February
3038 2006, .
3040 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure-
3041 Agnostic Time Division Multiplexing (TDM) over Packet
3042 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006,
3043 .
3045 [RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903,
3046 DOI 10.17487/RFC4903, June 2007,
3047 .
3049 [RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6
3050 over Low-Power Wireless Personal Area Networks (6LoWPANs):
3051 Overview, Assumptions, Problem Statement, and Goals",
3052 RFC 4919, DOI 10.17487/RFC4919, August 2007,
3053 .
3055 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and
3056 P. Pate, "Structure-Aware Time Division Multiplexed (TDM)
3057 Circuit Emulation Service over Packet Switched Network
3058 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007,
3059 .
3061 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi,
3062 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087,
3063 DOI 10.17487/RFC5087, December 2007,
3064 .
3066 [RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6
3067 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282,
3068 DOI 10.17487/RFC6282, September 2011,
3069 .
3071 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J.,
3072 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur,
3073 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for
3074 Low-Power and Lossy Networks", RFC 6550,
3075 DOI 10.17487/RFC6550, March 2012,
3076 .
3078 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N.,
3079 and D. Barthel, "Routing Metrics Used for Path Calculation
3080 in Low-Power and Lossy Networks", RFC 6551,
3081 DOI 10.17487/RFC6551, March 2012,
3082 .
3084 [RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C.
3085 Bormann, "Neighbor Discovery Optimization for IPv6 over
3086 Low-Power Wireless Personal Area Networks (6LoWPANs)",
3087 RFC 6775, DOI 10.17487/RFC6775, November 2012,
3088 .
3090 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using
3091 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the
3092 Internet of Things (IoT): Problem Statement", RFC 7554,
3093 DOI 10.17487/RFC7554, May 2015,
3094 .
3096 [SRP_LATENCY]
3097 Gunther, C., "Specifying SRP Latency", 2014,
3098 .
3101 [STUDIO_IP]
3102 Mace, G., "IP Networked Studio Infrastructure for
3103 Synchronized & Real-Time Multimedia Transmissions", 2007,
3104 .
3107 [SyncE] ITU-T, "G.8261: Timing and synchronization aspects in
3108 packet networks", Recommendation G.8261, August 2013,
3109 .
3111 [TEAS] IETF, "Traffic Engineering Architecture and Signaling",
3112 .
3114 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements
3115 for Evolved Universal Terrestrial Radio Access Network
3116 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013.
3118 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception
3119 (FDD)", 3GPP TS 25.104 3.14.0, March 2007.
3121 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access
3122 (E-UTRA); Base Station (BS) radio transmission and
3123 reception", 3GPP TS 36.104 10.11.0, July 2013.
3125 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access
3126 (E-UTRA); Requirements for support of radio resource
3127 management", 3GPP TS 36.133 12.7.0, April 2015.
3129 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access
3130 (E-UTRA); Physical channels and modulation", 3GPP
3131 TS 36.211 10.7.0, March 2013.
3133 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA)
3134 and Evolved Universal Terrestrial Radio Access Network
3135 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300
3136 10.11.0, September 2013.
3138 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3139 Networks Task Group", 2013,
3140 .
3142 [UHD-video]
3143 Holub, P., "Ultra-High Definition Videos and Their
3144 Applications over the Network", The 7th International
3145 Symposium on VICTORIES Project PetrHolub_presentation,
3146 October 2014, .
3149 [WirelessHART]
3150 www.hartcomm.org, "Industrial Communication Networks -
3151 Wireless Communication Network and Communication Profiles
3152 - WirelessHART - IEC 62591", 2010.
3154 Authors' Addresses
3156 Ethan Grossman (editor)
3157 Dolby Laboratories, Inc.
3158 1275 Market Street
3159 San Francisco, CA 94103
3160 USA
3162 Phone: +1 415 645 4726
3163 Email: ethan.grossman@dolby.com
3164 URI: http://www.dolby.com
3166 Craig Gunther
3167 Harman International
3168 10653 South River Front Parkway
3169 South Jordan, UT 84095
3170 USA
3172 Phone: +1 801 568-7675
3173 Email: craig.gunther@harman.com
3174 URI: http://www.harman.com
3175 Pascal Thubert
3176 Cisco Systems, Inc
3177 Building D
3178 45 Allee des Ormes - BP1200
3179 MOUGINS - Sophia Antipolis 06254
3180 FRANCE
3182 Phone: +33 497 23 26 34
3183 Email: pthubert@cisco.com
3185 Patrick Wetterwald
3186 Cisco Systems
3187 45 Allees des Ormes
3188 Mougins 06250
3189 FRANCE
3191 Phone: +33 4 97 23 26 36
3192 Email: pwetterw@cisco.com
3194 Jean Raymond
3195 Hydro-Quebec
3196 1500 University
3197 Montreal H3A3S7
3198 Canada
3200 Phone: +1 514 840 3000
3201 Email: raymond.jean@hydro.qc.ca
3203 Jouni Korhonen
3204 Broadcom Corporation
3205 3151 Zanker Road
3206 San Jose, CA 95134
3207 USA
3209 Email: jouni.nospam@gmail.com
3211 Yu Kaneko
3212 Toshiba
3213 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi
3214 Kanagawa, Japan
3216 Email: yu1.kaneko@toshiba.co.jp
3217 Subir Das
3218 Applied Communication Sciences
3219 150 Mount Airy Road, Basking Ridge
3220 New Jersey, 07920, USA
3222 Email: sdas@appcomsci.com
3224 Yiyong Zha
3225 Huawei Technologies
3227 Email: zhayiyong@huawei.com
3229 Balazs Varga
3230 Ericsson
3231 Konyves Kalman krt. 11/B
3232 Budapest 1097
3233 Hungary
3235 Email: balazs.a.varga@ericsson.com
3237 Janos Farkas
3238 Ericsson
3239 Konyves Kalman krt. 11/B
3240 Budapest 1097
3241 Hungary
3243 Email: janos.farkas@ericsson.com
3245 Franz-Josef Goetz
3246 Siemens
3247 Gleiwitzerstr. 555
3248 Nurnberg 90475
3249 Germany
3251 Email: franz-josef.goetz@siemens.com
3253 Juergen Schmitt
3254 Siemens
3255 Gleiwitzerstr. 555
3256 Nurnberg 90475
3257 Germany
3259 Email: juergen.jues.schmitt@siemens.com