idnits 2.17.1
draft-ietf-detnet-use-cases-06.txt:
Checking boilerplate required by RFC 5378 and the IETF Trust (see
https://trustee.ietf.org/license-info):
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/checklist :
----------------------------------------------------------------------------
** The document seems to lack an IANA Considerations section. (See Section
2.2 of https://www.ietf.org/id-info/checklist for how to handle the case
when there are no actions for IANA.)
Miscellaneous warnings:
----------------------------------------------------------------------------
== The copyright year in the IETF Trust and authors Copyright Line does not
match the current year
-- The document date (March 4, 2016) is 2974 days in the past. Is this
intentional?
Checking references for intended status: Informational
----------------------------------------------------------------------------
== Unused Reference: 'ACE' is defined on line 2768, but no explicit
reference was found in the text
== Unused Reference: 'CCAMP' is defined on line 2776, but no explicit
reference was found in the text
== Unused Reference: 'DICE' is defined on line 2798, but no explicit
reference was found in the text
== Unused Reference: 'EA12' is defined on line 2801, but no explicit
reference was found in the text
== Unused Reference: 'HART' is defined on line 2818, but no explicit
reference was found in the text
== Unused Reference: 'I-D.ietf-6tisch-terminology' is defined on line 2847,
but no explicit reference was found in the text
== Unused Reference: 'I-D.ietf-ipv6-multilink-subnets' is defined on line
2853, but no explicit reference was found in the text
== Unused Reference: 'I-D.ietf-roll-rpl-industrial-applicability' is
defined on line 2858, but no explicit reference was found in the text
== Unused Reference: 'I-D.thubert-6lowpan-backbone-router' is defined on
line 2886, but no explicit reference was found in the text
== Unused Reference: 'IEC61850-90-12' is defined on line 2896, but no
explicit reference was found in the text
== Unused Reference: 'IEEE8021TSN' is defined on line 2959, but no explicit
reference was found in the text
== Unused Reference: 'IETFDetNet' is defined on line 2965, but no explicit
reference was found in the text
== Unused Reference: 'RFC2119' is defined on line 3022, but no explicit
reference was found in the text
== Unused Reference: 'RFC2460' is defined on line 3027, but no explicit
reference was found in the text
== Unused Reference: 'RFC2474' is defined on line 3031, but no explicit
reference was found in the text
== Unused Reference: 'RFC3209' is defined on line 3042, but no explicit
reference was found in the text
== Unused Reference: 'RFC3393' is defined on line 3047, but no explicit
reference was found in the text
== Unused Reference: 'RFC3444' is defined on line 3052, but no explicit
reference was found in the text
== Unused Reference: 'RFC3972' is defined on line 3057, but no explicit
reference was found in the text
== Unused Reference: 'RFC4291' is defined on line 3066, but no explicit
reference was found in the text
== Unused Reference: 'RFC4903' is defined on line 3075, but no explicit
reference was found in the text
== Unused Reference: 'RFC4919' is defined on line 3079, but no explicit
reference was found in the text
== Unused Reference: 'RFC6282' is defined on line 3096, but no explicit
reference was found in the text
== Unused Reference: 'RFC6775' is defined on line 3114, but no explicit
reference was found in the text
== Unused Reference: 'TEAS' is defined on line 3141, but no explicit
reference was found in the text
== Unused Reference: 'UHD-video' is defined on line 3172, but no explicit
reference was found in the text
== Outdated reference: A later version (-08) exists of
draft-finn-detnet-architecture-03
== Outdated reference: A later version (-05) exists of
draft-finn-detnet-problem-statement-04
== Outdated reference: A later version (-30) exists of
draft-ietf-6tisch-architecture-09
== Outdated reference: A later version (-10) exists of
draft-ietf-6tisch-terminology-06
-- Obsolete informational reference (is this intentional?): RFC 2460
(Obsoleted by RFC 8200)
Summary: 1 error (**), 0 flaws (~~), 31 warnings (==), 2 comments (--).
Run idnits with the --verbose option for more detailed information about
the items above.
--------------------------------------------------------------------------------
2 Internet Engineering Task Force E. Grossman, Ed.
3 Internet-Draft DOLBY
4 Intended status: Informational C. Gunther
5 Expires: September 5, 2016 HARMAN
6 P. Thubert
7 P. Wetterwald
8 CISCO
9 J. Raymond
10 HYDRO-QUEBEC
11 J. Korhonen
12 BROADCOM
13 Y. Kaneko
14 Toshiba
15 S. Das
16 Applied Communication Sciences
17 Y. Zha
18 HUAWEI
19 B. Varga
20 J. Farkas
21 Ericsson
22 F. Goetz
23 J. Schmitt
24 Siemens
25 March 4, 2016
27 Deterministic Networking Use Cases
28 draft-ietf-detnet-use-cases-06
30 Abstract
32     This draft documents requirements in several diverse industries to
33     establish multi-hop paths for characterized flows with deterministic
34     properties.  In this context deterministic implies that streams
35     providing guaranteed bandwidth and latency can be established from
36     either a Layer 2 or Layer 3 (IP) interface, and that such streams
37     can co-exist on an IP network with best-effort traffic.
39 Additional requirements include optional redundant paths, very high
40 reliability paths, time synchronization, and clock distribution.
41 Industries considered include wireless for industrial applications,
42 professional audio, electrical utilities, building automation
43 systems, radio/mobile access networks, automotive, and gaming.
45     For each use case, this document identifies the application,
46     describes representative solutions used today, and notes what new
47     uses an IETF DetNet solution may enable.
49 Status of This Memo
51 This Internet-Draft is submitted in full conformance with the
52 provisions of BCP 78 and BCP 79.
54 Internet-Drafts are working documents of the Internet Engineering
55 Task Force (IETF). Note that other groups may also distribute
56 working documents as Internet-Drafts. The list of current Internet-
57 Drafts is at http://datatracker.ietf.org/drafts/current/.
59 Internet-Drafts are draft documents valid for a maximum of six months
60 and may be updated, replaced, or obsoleted by other documents at any
61 time. It is inappropriate to use Internet-Drafts as reference
62 material or to cite them other than as "work in progress."
64 This Internet-Draft will expire on September 5, 2016.
66 Copyright Notice
68 Copyright (c) 2016 IETF Trust and the persons identified as the
69 document authors. All rights reserved.
71 This document is subject to BCP 78 and the IETF Trust's Legal
72 Provisions Relating to IETF Documents
73 (http://trustee.ietf.org/license-info) in effect on the date of
74 publication of this document. Please review these documents
75 carefully, as they describe your rights and restrictions with respect
76 to this document. Code Components extracted from this document must
77 include Simplified BSD License text as described in Section 4.e of
78 the Trust Legal Provisions and are provided without warranty as
79 described in the Simplified BSD License.
81 Table of Contents
83 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 5
84 2. Pro Audio Use Cases . . . . . . . . . . . . . . . . . . . . . 5
85 2.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 5
86 2.2. Fundamental Stream Requirements . . . . . . . . . . . . . 6
87 2.2.1. Guaranteed Bandwidth . . . . . . . . . . . . . . . . 7
88 2.2.2. Bounded and Consistent Latency . . . . . . . . . . . 7
89 2.2.2.1. Optimizations . . . . . . . . . . . . . . . . . . 8
90 2.3. Additional Stream Requirements . . . . . . . . . . . . . 9
91 2.3.1. Deterministic Time to Establish Streaming . . . . . . 9
92 2.3.2. Use of Unused Reservations by Best-Effort Traffic . . 9
93 2.3.3. Layer 3 Interconnecting Layer 2 Islands . . . . . . . 10
94 2.3.4. Secure Transmission . . . . . . . . . . . . . . . . . 10
95 2.3.5. Redundant Paths . . . . . . . . . . . . . . . . . . . 10
96 2.3.6. Link Aggregation . . . . . . . . . . . . . . . . . . 11
97 2.3.7. Traffic Segregation . . . . . . . . . . . . . . . . . 11
98 2.3.7.1. Packet Forwarding Rules, VLANs and Subnets . . . 11
99 2.3.7.2. Multicast Addressing (IPv4 and IPv6) . . . . . . 11
100 2.4. Integration of Reserved Streams into IT Networks . . . . 12
101 2.5. Security Considerations . . . . . . . . . . . . . . . . . 12
102 2.5.1. Denial of Service . . . . . . . . . . . . . . . . . . 12
103 2.5.2. Control Protocols . . . . . . . . . . . . . . . . . . 12
104 2.6. A State-of-the-Art Broadcast Installation Hits Technology
105 Limits . . . . . . . . . . . . . . . . . . . . . . . . . 13
106 3. Electrical Utilities . . . . . . . . . . . . . . . . . . . . 13
107 3.1. Use Case Description . . . . . . . . . . . . . . . . . . 13
108 3.1.1. Transmission Use Cases . . . . . . . . . . . . . . . 13
109 3.1.1.1. Protection . . . . . . . . . . . . . . . . . . . 13
110 3.1.1.2. Intra-Substation Process Bus Communications . . . 19
111 3.1.1.3. Wide Area Monitoring and Control Systems . . . . 20
112 3.1.1.4. IEC 61850 WAN engineering guidelines requirement
113 classification . . . . . . . . . . . . . . . . . 21
114 3.1.2. Generation Use Case . . . . . . . . . . . . . . . . . 22
115 3.1.3. Distribution use case . . . . . . . . . . . . . . . . 23
116 3.1.3.1. Fault Location Isolation and Service Restoration
117 (FLISR) . . . . . . . . . . . . . . . . . . . . . 23
118 3.2. Electrical Utilities Today . . . . . . . . . . . . . . . 24
119 3.2.1. Security Current Practices and Limitations . . . . . 24
120 3.3. Electrical Utilities Future . . . . . . . . . . . . . . . 26
121 3.3.1. Migration to Packet-Switched Network . . . . . . . . 26
122 3.3.2. Telecommunications Trends . . . . . . . . . . . . . . 27
123 3.3.2.1. General Telecommunications Requirements . . . . . 27
124 3.3.2.2. Specific Network topologies of Smart Grid
125 Applications . . . . . . . . . . . . . . . . . . 28
126 3.3.2.3. Precision Time Protocol . . . . . . . . . . . . . 29
127 3.3.3. Security Trends in Utility Networks . . . . . . . . . 30
128 3.4. Electrical Utilities Asks . . . . . . . . . . . . . . . . 32
129 4. Building Automation Systems . . . . . . . . . . . . . . . . . 32
130 4.1. Use Case Description . . . . . . . . . . . . . . . . . . 32
131 4.2. Building Automation Systems Today . . . . . . . . . . . . 32
132 4.2.1. BAS Architecture . . . . . . . . . . . . . . . . . . 33
133 4.2.2. BAS Deployment Model . . . . . . . . . . . . . . . . 34
134 4.2.3. Use Cases for Field Networks . . . . . . . . . . . . 36
135 4.2.3.1. Environmental Monitoring . . . . . . . . . . . . 36
136 4.2.3.2. Fire Detection . . . . . . . . . . . . . . . . . 36
137 4.2.3.3. Feedback Control . . . . . . . . . . . . . . . . 37
138 4.2.4. Security Considerations . . . . . . . . . . . . . . . 37
139 4.3. BAS Future . . . . . . . . . . . . . . . . . . . . . . . 37
140 4.4. BAS Asks . . . . . . . . . . . . . . . . . . . . . . . . 38
141 5. Wireless for Industrial . . . . . . . . . . . . . . . . . . . 38
142 5.1. Use Case Description . . . . . . . . . . . . . . . . . . 38
143 5.1.1. Network Convergence using 6TiSCH . . . . . . . . . . 39
144 5.1.2. Common Protocol Development for 6TiSCH . . . . . . . 39
146 5.2. Wireless Industrial Today . . . . . . . . . . . . . . . . 40
147 5.3. Wireless Industrial Future . . . . . . . . . . . . . . . 40
148 5.3.1. Unified Wireless Network and Management . . . . . . . 40
149 5.3.1.1. PCE and 6TiSCH ARQ Retries . . . . . . . . . . . 42
150 5.3.2. Schedule Management by a PCE . . . . . . . . . . . . 43
151 5.3.2.1. PCE Commands and 6TiSCH CoAP Requests . . . . . . 43
152 5.3.2.2. 6TiSCH IP Interface . . . . . . . . . . . . . . . 44
153 5.3.3. 6TiSCH Security Considerations . . . . . . . . . . . 44
154 5.4. Wireless Industrial Asks . . . . . . . . . . . . . . . . 45
155 6. Cellular Radio Use Cases . . . . . . . . . . . . . . . . . . 45
156 6.1. Use Case Description . . . . . . . . . . . . . . . . . . 45
157 6.1.1. Network Architecture . . . . . . . . . . . . . . . . 45
158 6.1.2. Time Synchronization Requirements . . . . . . . . . . 46
159 6.1.3. Time-Sensitive Stream Requirements . . . . . . . . . 48
160 6.1.4. Security Considerations . . . . . . . . . . . . . . . 48
161 6.2. Cellular Radio Networks Today . . . . . . . . . . . . . . 49
162 6.3. Cellular Radio Networks Future . . . . . . . . . . . . . 49
163 6.4. Cellular Radio Networks Asks . . . . . . . . . . . . . . 51
164 7. Cellular Coordinated Multipoint Processing (CoMP) . . . . . . 51
165 7.1. Use Case Description . . . . . . . . . . . . . . . . . . 51
166 7.1.1. CoMP Architecture . . . . . . . . . . . . . . . . . . 52
167 7.1.2. Delay Sensitivity in CoMP . . . . . . . . . . . . . . 53
168 7.2. CoMP Today . . . . . . . . . . . . . . . . . . . . . . . 53
169 7.3. CoMP Future . . . . . . . . . . . . . . . . . . . . . . . 53
170 7.3.1. Mobile Industry Overall Goals . . . . . . . . . . . . 53
171 7.3.2. CoMP Infrastructure Goals . . . . . . . . . . . . . . 54
172 7.4. CoMP Asks . . . . . . . . . . . . . . . . . . . . . . . . 54
173 8. Industrial M2M . . . . . . . . . . . . . . . . . . . . . . . 55
174 8.1. Use Case Description . . . . . . . . . . . . . . . . . . 55
175 8.2. Industrial M2M Communication Today . . . . . . . . . . . 56
176 8.2.1. Transport Parameters . . . . . . . . . . . . . . . . 56
177 8.2.2. Stream Creation and Destruction . . . . . . . . . . . 57
178 8.3. Industrial M2M Future . . . . . . . . . . . . . . . . . . 57
179 8.4. Industrial M2M Asks . . . . . . . . . . . . . . . . . . . 58
180 9. Internet-based Applications . . . . . . . . . . . . . . . . . 58
181 9.1. Use Case Description . . . . . . . . . . . . . . . . . . 58
182 9.1.1. Media Content Delivery . . . . . . . . . . . . . . . 58
183 9.1.2. Online Gaming . . . . . . . . . . . . . . . . . . . . 58
184 9.1.3. Virtual Reality . . . . . . . . . . . . . . . . . . . 58
185 9.2. Internet-Based Applications Today . . . . . . . . . . . . 59
186 9.3. Internet-Based Applications Future . . . . . . . . . . . 59
187 9.4. Internet-Based Applications Asks . . . . . . . . . . . . 59
188 10. Use Case Common Elements . . . . . . . . . . . . . . . . . . 59
189 11. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 60
190 11.1. Pro Audio . . . . . . . . . . . . . . . . . . . . . . . 60
191 11.2. Utility Telecom . . . . . . . . . . . . . . . . . . . . 61
192 11.3. Building Automation Systems . . . . . . . . . . . . . . 61
193 11.4. Wireless for Industrial . . . . . . . . . . . . . . . . 61
194 11.5. Cellular Radio . . . . . . . . . . . . . . . . . . . . . 61
195 11.6. Industrial M2M . . . . . . . . . . . . . . . . . . . . . 61
196 11.7. Internet Applications and CoMP . . . . . . . . . . . . . 61
197 12. Informative References . . . . . . . . . . . . . . . . . . . 62
198 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 70
200 1. Introduction
202 This draft presents use cases from diverse industries which have in
203 common a need for deterministic streams, but which also differ
204 notably in their network topologies and specific desired behavior.
205 Together, they provide broad industry context for DetNet and a
206 yardstick against which proposed DetNet designs can be measured (to
207    what extent does a proposed design satisfy these various use cases?).
209    For DetNet, use cases explicitly do not define requirements; the
210    DetNet WG will consider the use cases, decide which elements are in
211    scope for DetNet, and incorporate the results into future drafts.
212    Similarly, the DetNet use case draft explicitly does not
213 suggest any specific design, architecture or protocols, which will be
214 topics of future drafts.
216 We present for each use case the answers to the following questions:
218 o What is the use case?
220 o How is it addressed today?
222 o How would you like it to be addressed in the future?
224 o What do you want the IETF to deliver?
226 The level of detail in each use case should be sufficient to express
227 the relevant elements of the use case, but not more.
229 At the end we consider the use cases collectively, and examine the
230 most significant goals they have in common.
232 2. Pro Audio Use Cases
234 2.1. Introduction
236 The professional audio and video industry includes music and film
237 content creation, broadcast, cinema, and live exposition as well as
238 public address, media and emergency systems at large venues
239 (airports, stadiums, churches, theme parks). These industries have
240 already gone through the transition of audio and video signals from
241    analog to digital; however, the interconnect systems remain primarily
242 point-to-point with a single (or small number of) signals per link,
243 interconnected with purpose-built hardware.
245 These industries are now attempting to transition to packet based
246 infrastructure for distributing audio and video in order to reduce
247 cost, increase routing flexibility, and integrate with existing IT
248 infrastructure.
250 However, there are several requirements for making a network the
251 primary infrastructure for audio and video which are not met by
252    today's networks, and these are our concern in this draft.
254 The principal requirement is that pro audio and video applications
255 become able to establish streams that provide guaranteed (bounded)
256 bandwidth and latency from the Layer 3 (IP) interface. Such streams
257    can be created today within standards-based Layer 2 islands; however,
258 these are not sufficient to enable effective distribution over wider
259 areas (for example broadcast events that span wide geographical
260 areas).
262 Some proprietary systems have been created which enable deterministic
263    streams at Layer 3; however, they are engineered networks in that
264    they require careful configuration to operate, often require that the
265    system be over-designed, and implicitly assume that all devices on
266    the network voluntarily play by the rules of that network.  To enable
267 these industries to successfully transition to an interoperable
268 multi-vendor packet-based infrastructure requires effective open
269 standards, and we believe that establishing relevant IETF standards
270 is a crucial factor.
272 It would be highly desirable if such streams could be routed over the
273    open Internet; however, even intermediate solutions with more limited
274 scope (such as enterprise networks) can provide a substantial
275    improvement over today's networks, and a solution that only provides
276 for the enterprise network scenario is an acceptable first step.
278    We also present finer-grained requirements of the audio and video
279    industries, such as safety and security, redundant paths, support for
280    devices with limited computing resources on the network, and making
281    reserved stream bandwidth available to best-effort traffic when that
282    stream is not currently in use.
284 2.2. Fundamental Stream Requirements
286 The fundamental stream properties are guaranteed bandwidth and
287 deterministic latency as described in this section. Additional
288 stream requirements are described in a subsequent section.
290 2.2.1. Guaranteed Bandwidth
292 Transmitting audio and video streams is unlike common file transfer
293 activities because guaranteed delivery cannot be achieved by re-
294 trying the transmission; by the time the missing or corrupt packet
295 has been identified it is too late to execute a re-try operation and
296 stream playback is interrupted, which is unacceptable in for example
297 a live concert. In some contexts large amounts of buffering can be
298 used to provide enough delay to allow time for one or more retries,
299 however this is not an effective solution when live interaction is
300 involved, and is not considered an acceptable general solution for
301 pro audio and video. (Have you ever tried speaking into a microphone
302 through a sound system that has an echo coming back at you? It makes
303 it almost impossible to speak clearly).
305 Providing a way to reserve a specific amount of bandwidth for a given
306 stream is a key requirement.
308 2.2.2. Bounded and Consistent Latency
310 Latency in this context means the amount of time that passes between
311 when a signal is sent over a stream and when it is received, for
312 example the amount of time delay between when you speak into a
313 microphone and when your voice emerges from the speaker. Any delay
314    longer than about 10-15 milliseconds is noticeable to most live
315 performers, and greater latency makes the system unusable because it
316 prevents them from playing in time with the other players (see slide
317 6 of [SRP_LATENCY]).
319    The 15ms latency bound is made even more challenging because in
320    network-based music production with live electric instruments it is
321    common for multiple stages of signal processing to be used,
322    connected in series (for example a guitar fed through a chain of
323    digital effects processors), in which case the latencies add, so
324    the sum of the individual stage latencies must, taken together,
325    remain less than 15ms.
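The additive budget can be sketched in a few lines; the stage names and latency values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Series latency budget check: stage latencies add, and the total must
# stay under the live-performance bound (~15 ms, per the text above).
# All stage names and values are hypothetical.
BUDGET_MS = 15.0

def within_budget(stage_latencies_ms, budget_ms=BUDGET_MS):
    """Return (total_ms, ok): summed series latency and budget verdict."""
    total = sum(stage_latencies_ms)
    return total, total <= budget_ms

chain = {"guitar->net": 2.0, "compressor": 3.5, "reverb": 4.0, "net->PA": 2.5}
total, ok = within_budget(chain.values())
print(f"total={total:.1f} ms, within budget: {ok}")  # total=12.0 ms, within budget: True
```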
327 In some situations it is acceptable at the local location for content
328 from the live remote site to be delayed to allow for a statistically
329 acceptable amount of latency in order to reduce jitter. However,
330 once the content begins playing in the local location any audio
331 artifacts caused by the local network are unacceptable, especially in
332 those situations where a live local performer is mixed into the feed
333 from the remote location.
335 In addition to being bounded to within some predictable and
336    acceptable amount of time (which may be more or less than 15
337    milliseconds depending on the application), the latency also has to be
338 consistent. For example when playing a film consisting of a video
339 stream and audio stream over a network, those two streams must be
340 synchronized so that the voice and the picture match up. A common
341 tolerance for audio/video sync is one NTSC video frame (about 33ms)
342 and to maintain the audience perception of correct lip sync the
343 latency needs to be consistent within some reasonable tolerance, for
344 example 10%.
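These two tolerances (audio/video skew under roughly one NTSC frame, latency consistent to within 10%) can be expressed as simple checks; the sample latency values are hypothetical:

```python
# Sketch of the two tolerances described above: audio/video skew must
# stay under about one NTSC frame, and latency must stay consistent
# (here, within 10% of a nominal value).  Sample values are hypothetical.
NTSC_FRAME_MS = 1000 / 29.97  # one NTSC video frame, roughly 33 ms

def av_in_sync(audio_latency_ms, video_latency_ms):
    """True if audio/video skew is within one NTSC frame."""
    return abs(audio_latency_ms - video_latency_ms) <= NTSC_FRAME_MS

def consistent(samples_ms, nominal_ms, tolerance=0.10):
    """True if every measured latency is within tolerance of nominal."""
    return all(abs(s - nominal_ms) <= tolerance * nominal_ms for s in samples_ms)

print(av_in_sync(40.0, 60.0))                           # True (20 ms skew)
print(consistent([14.0, 15.5, 15.9], nominal_ms=15.0))  # True
```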
346 A common architecture for synchronizing multiple streams that have
347 different paths through the network (and thus potentially different
348 latencies) is to enable measurement of the latency of each path, and
349 have the data sinks (for example speakers) buffer (delay) all packets
350 on all but the slowest path. Each packet of each stream is assigned
351 a presentation time which is based on the longest required delay.
352 This implies that all sinks must maintain a common time reference of
353 sufficient accuracy, which can be achieved by any of various
354 techniques.
356 This type of architecture is commonly implemented using a central
357 controller that determines path delays and arbitrates buffering
358 delays.
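A minimal sketch of this buffering scheme, assuming the per-path delays have already been measured (the sink names and delay values are hypothetical):

```python
# Sketch: sinks on faster paths add buffering so that every sink
# presents each packet at the same time, set by the slowest path.
# Sink names and per-path delays (ms) are hypothetical.
path_delay_ms = {"speaker_A": 2.0, "speaker_B": 7.5, "recorder": 12.0}

def presentation_offset(delays_ms):
    """Extra per-sink buffering (ms) so all sinks present in unison."""
    slowest = max(delays_ms.values())
    return {sink: slowest - d for sink, d in delays_ms.items()}

for sink, extra in presentation_offset(path_delay_ms).items():
    # The sink on the slowest path ("recorder") buffers 0 ms extra.
    print(f"{sink}: buffer {extra:.1f} ms beyond arrival")
```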
360 2.2.2.1. Optimizations
362 The controller might also perform optimizations based on the
363 individual path delays, for example sinks that are closer to the
364 source can inform the controller that they can accept greater latency
365 since they will be buffering packets to match presentation times of
366 farther away sinks. The controller might then move a stream
367 reservation on a short path to a longer path in order to free up
368 bandwidth for other critical streams on that short path. See slides
369 3-5 of [SRP_LATENCY].
371 Additional optimization can be achieved in cases where sinks have
372 differing latency requirements, for example in a live outdoor concert
373 the speaker sinks have stricter latency requirements than the
374 recording hardware sinks. See slide 7 of [SRP_LATENCY].
376 Device cost can be reduced in a system with guaranteed reservations
377 with a small bounded latency due to the reduced requirements for
378 buffering (i.e. memory) on sink devices. For example, a theme park
379 might broadcast a live event across the globe via a layer 3 protocol;
380 in such cases the size of the buffers required is proportional to the
381 latency bounds and jitter caused by delivery, which depends on the
382 worst case segment of the end-to-end network path. For example on
383    today's open Internet the latency is typically unacceptable for audio
384 and video streaming without many seconds of buffering. In such
385 scenarios a single gateway device at the local network that receives
386 the feed from the remote site would provide the expensive buffering
387 required to mask the latency and jitter issues associated with long
388 distance delivery. Sink devices in the local location would have no
389 additional buffering requirements, and thus no additional costs,
390 beyond those required for delivery of local content. The sink device
391 would be receiving the identical packets as those sent by the source
392 and would be unaware that there were any latency or jitter issues
393 along the path.
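The proportionality between buffer size and the latency bound plus jitter can be made concrete with a rough sizing formula; the stream rate and delay figures below are hypothetical:

```python
# Rough sink-buffer sizing: the buffer must hold stream data for the
# latency bound plus jitter it masks, so size = rate * (bound + jitter).
# The rate and delay figures are hypothetical.
def buffer_bytes(rate_bps, latency_bound_s, jitter_s):
    """Approximate buffer size in bytes for a constant-rate stream."""
    return int(rate_bps / 8 * (latency_bound_s + jitter_s))

audio_rate_bps = 48_000 * 24 * 2   # 48 kHz, 24-bit, stereo PCM
local_lan = buffer_bytes(audio_rate_bps, 0.002, 0.001)  # tight local bound
open_wan = buffer_bytes(audio_rate_bps, 2.0, 0.5)       # seconds of slack
print(local_lan, open_wan)  # 864 720000
```

With numbers like these, local sinks need under a kilobyte of buffering while a gateway masking long-distance delivery needs hundreds of kilobytes, which is the cost asymmetry the paragraph above describes.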
395 2.3. Additional Stream Requirements
397 The requirements in this section are more specific yet are common to
398 multiple audio and video industry applications.
400 2.3.1. Deterministic Time to Establish Streaming
402 Some audio systems installed in public environments (airports,
403    hospitals) have unique requirements with regard to health, safety
404 and fire concerns. One such requirement is a maximum of 3 seconds
405 for a system to respond to an emergency detection and begin sending
406 appropriate warning signals and alarms without human intervention.
407 For this requirement to be met, the system must support a bounded and
408 acceptable time from a notification signal to specific stream
409 establishment. For further details see [ISO7240-16].
411 Similar requirements apply when the system is restarted after a power
412 cycle, cable re-connection, or system reconfiguration.
414 In many cases such re-establishment of streaming state must be
415 achieved by the peer devices themselves, i.e. without a central
416 controller (since such a controller may only be present during
417 initial network configuration).
419 Video systems introduce related requirements, for example when
420 transitioning from one camera feed to another. Such systems
421 currently use purpose-built hardware to switch feeds smoothly,
422 however there is a current initiative in the broadcast industry to
423 switch to a packet-based infrastructure (see [STUDIO_IP] and the ESPN
424 DC2 use case described below).
426 2.3.2. Use of Unused Reservations by Best-Effort Traffic
428 In cases where stream bandwidth is reserved but not currently used
429 (or is under-utilized) that bandwidth must be available to best-
430 effort (i.e. non-time-sensitive) traffic. For example a single
431 stream may be nailed up (reserved) for specific media content that
432 needs to be presented at different times of the day, ensuring timely
433 delivery of that content, yet in between those times the full
434 bandwidth of the network can be utilized for best-effort tasks such
435 as file transfers.
437    This also addresses a concern of IT network administrators who are
438    considering adding reserved-bandwidth traffic to their networks:
439    that users will reserve large amounts of bandwidth and never release
440    it even when they are not using it, until no bandwidth remains for
441    other traffic.
443 2.3.3. Layer 3 Interconnecting Layer 2 Islands
445 As an intermediate step (short of providing guaranteed bandwidth
446 across the open internet) it would be valuable to provide a way to
447 connect multiple Layer 2 networks. For example layer 2 techniques
448 could be used to create a LAN for a single broadcast studio, and
449 several such studios could be interconnected via layer 3 links.
451 2.3.4. Secure Transmission
453 Digital Rights Management (DRM) is very important to the audio and
454 video industries. Any time protected content is introduced into a
455 network there are DRM concerns that must be maintained (see
456 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of
457 network technology, however there are cases when a secure link
458 supporting authentication and encryption is required by content
459 owners to carry their audio or video content when it is outside their
460 own secure environment (for example see [DCI]).
462 As an example, two techniques are Digital Transmission Content
463 Protection (DTCP) and High-Bandwidth Digital Content Protection
464 (HDCP). HDCP content is not approved for retransmission within any
465 other type of DRM, while DTCP may be retransmitted under HDCP.
466    Therefore, if the source of a stream is outside the network and it
467    uses HDCP protection, it may only be placed on the network with
468    that same HDCP protection.
470 2.3.5. Redundant Paths
472 On-air and other live media streams must be backed up with redundant
473 links that seamlessly act to deliver the content when the primary
474 link fails for any reason. In point-to-point systems this is
475 provided by an additional point-to-point link; the analogous
476 requirement in a packet-based system is to provide an alternate path
477 through the network such that no individual link can bring down the
478 system.
480 2.3.6. Link Aggregation
482 For transmitting streams that require more bandwidth than a single
483 link in the target network can support, link aggregation is a
484 technique for combining (aggregating) the bandwidth available on
485 multiple physical links to create a single logical link of the
486 required bandwidth. However, if aggregation is to be used, the
487 network controller (or equivalent) must be able to determine the
488 maximum latency of any path through the aggregate link (see Bounded
489 and Consistent Latency section above).
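A sketch of the controller's check, under the assumption that per-member bandwidth and latency are known (the figures are hypothetical): aggregate bandwidth adds across members, while the latency bound is set by the worst member.

```python
# Sketch: for an aggregated link, bandwidth adds across the physical
# members, but the latency bound must be that of the worst member,
# since a given packet may traverse any of them.  Figures are hypothetical.
def aggregate(members):
    """members: list of (bandwidth_mbps, latency_ms) per physical link."""
    bandwidth = sum(bw for bw, _ in members)
    worst_latency = max(lat for _, lat in members)
    return bandwidth, worst_latency

bw, lat = aggregate([(1000, 1.2), (1000, 0.9), (1000, 3.4)])
print(bw, lat)  # 3000 3.4
```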
491 2.3.7. Traffic Segregation
493 Sink devices may be low cost devices with limited processing power.
494 In order to not overwhelm the CPUs in these devices it is important
495 to limit the amount of traffic that these devices must process.
497 As an example, consider the use of individual seat speakers in a
498 cinema. These speakers are typically required to be cost reduced
499 since the quantities in a single theater can reach hundreds of seats.
500 Discovery protocols alone in a one thousand seat theater can generate
501 enough broadcast traffic to overwhelm a low powered CPU. Thus an
502 installation like this will benefit greatly from some type of traffic
503 segregation that can define groups of seats to reduce traffic within
504 each group. All seats in the theater must still be able to
505 communicate with a central controller.
507 There are many techniques that can be used to support this
508 requirement including (but not limited to) the following examples.
510 2.3.7.1. Packet Forwarding Rules, VLANs and Subnets
512    Packet forwarding rules can be used to eliminate some extraneous
513    streaming traffic from reaching potentially low-powered sink devices;
514    however, other types of broadcast traffic may need to be eliminated
515    by other means, for example VLANs or IP subnets.
517 2.3.7.2. Multicast Addressing (IPv4 and IPv6)
519 Multicast addressing is commonly used to keep bandwidth utilization
520 of shared links to a minimum.
522 Because of the MAC Address forwarding nature of Layer 2 bridges it is
523 important that a multicast MAC address is only associated with one
524    stream.  This prevents packets from one stream being forwarded down
525    a path that has no interested sinks simply because there is another
526    stream on that same path that shares the same multicast MAC
527    address.
529 Since each multicast MAC address can represent 32 different IPv4
530 multicast addresses, a process must be put in place to make sure
531 such collisions do not occur. Requiring the use of IPv6 addresses can
532 achieve this; however, due to the continued prevalence of IPv4,
533 solutions that are effective for IPv4 installations are also required.
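The 32-to-1 overlap can be demonstrated with the standard RFC 1112 mapping (the sample group addresses below are illustrative):

```python
def ipv4_multicast_mac(addr: str) -> str:
    """Map an IPv4 multicast address to its MAC address per RFC 1112:
    01:00:5e plus the low-order 23 bits of the group address.  The 5
    discarded bits are why 32 IPv4 groups share one multicast MAC."""
    octets = [int(o) for o in addr.split(".")]
    group = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    low23 = group & 0x7FFFFF                 # keep 23 bits, drop 5
    mac = (0x01005E << 24) | low23
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -1, -8))

# Two distinct multicast groups that collide on the same MAC address:
print(ipv4_multicast_mac("224.1.1.1"))    # -> 01:00:5e:01:01:01
print(ipv4_multicast_mac("225.129.1.1"))  # -> 01:00:5e:01:01:01
```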
535 2.4. Integration of Reserved Streams into IT Networks
537 A commonly cited goal of moving to a packet-based media
538 infrastructure is that costs can be reduced by using off-the-shelf,
539 commodity network hardware. In addition, economy of scale can be
540 realized by combining media infrastructure with IT infrastructure.
541 In keeping with these goals, stream reservation technology should be
542 compatible with existing protocols, and not compromise use of the
543 network for best effort (non-time-sensitive) traffic.
545 2.5. Security Considerations
547 Many industries that are moving from the point-to-point world to the
548 digital network world have little understanding of the pitfalls that
549 they can create for themselves with improperly implemented network
550 infrastructure. DetNet should consider ways to provide security
551 against DoS attacks in solutions directed at these markets. Some
552 considerations are given here as examples of ways that we can help
553 new users avoid common pitfalls.
555 2.5.1. Denial of Service
557 One security pitfall that this author is aware of involves the use of
558 technology that allows a presenter to throw the content from their
559 tablet or smart phone onto the A/V system that is then viewed by all
560 those in attendance. The facility introducing this technology was
561 quite excited to allow such modern flexibility to those who came to
562 speak. One thing they hadn't realized was that, since no security
563 was put in place, this technology left a hole in the system that
564 allowed other attendees to "throw" their own content onto the A/V
565 system.
567 2.5.2. Control Protocols
569 Professional audio systems can include amplifiers that are capable of
570 generating hundreds or thousands of watts of audio power which, if
571 used incorrectly, can cause hearing damage to those in the vicinity.
572 Apart from the usual care required by the systems operators to
573 prevent such incidents, the network traffic that controls these
574 devices must be secured (as with any sensitive application traffic).
575 In addition, it would be desirable if the configuration protocols
576 that are used to create the network paths used by the professional
577 audio traffic could be designed to protect devices that are not meant
578 to receive high-amplitude content from having such potentially
579 damaging signals routed to them.
581 2.6. A State-of-the-Art Broadcast Installation Hits Technology Limits
583 ESPN recently constructed a state-of-the-art 194,000 sq ft, $125
584 million broadcast studio called DC2. The DC2 network is capable of
585 handling 46 Tbps of throughput with 60,000 simultaneous signals.
586 Inside the facility are 1,100 miles of fiber feeding four audio
587 control rooms. (See details at [ESPN_DC2] ).
589 In designing DC2 they replaced as much point-to-point technology as
590 they possibly could with packet-based technology. They constructed
591 seven individual studios using Layer 2 LANs (using IEEE 802.1 AVB)
592 that were entirely effective at routing audio within the LANs, and
593 they were very happy with the results. However, to interconnect these
594 Layer 2 LAN islands they ended up using dedicated links,
595 because no standards-based routing solution was available.
597 This is the kind of motivation we have to develop these standards:
598 customers are ready and able to use them.
600 3. Electrical Utilities
602 3.1. Use Case Description
604 Many systems that an electrical utility deploys today rely on high
605 availability and deterministic behavior of the underlying networks.
606 Here we present use cases in Transmission, Generation and
607 Distribution, including key timing and reliability metrics. We also
608 discuss security issues and industry trends which affect the
609 architecture of next generation utility networks.
611 3.1.1. Transmission Use Cases
613 3.1.1.1. Protection
615 Protection means not only the protection of human operators but also
616 the protection of the electrical equipment and the preservation of
617 the stability and frequency of the grid. If a fault occurs in the
618 transmission or distribution of electricity then severe damage can
619 occur to human operators, electrical equipment and the grid itself,
620 leading to blackouts.
622 Communication links in conjunction with protection relays are used to
623 selectively isolate faults on high voltage lines, transformers,
624 reactors and other important electrical equipment. The role of the
625 teleprotection system is to selectively disconnect a faulty part by
626 transferring command signals within the shortest possible time.
628 3.1.1.1.1. Key Criteria
630 The key criteria for measuring teleprotection performance are command
631 transmission time, dependability and security. These criteria are
632 defined by the IEC standard 60834 as follows:
634 o Transmission time (Speed): The time between the moment where state
635 changes at the transmitter input and the moment of the
636 corresponding change at the receiver output, including propagation
637 delay. Overall operating time for a teleprotection system
638 includes the time for initiating the command at the transmitting
639 end, the propagation delay over the network (including equipments)
640 and the selection and decision time at the receiving end,
641 including any additional delay due to a noisy environment.
643 o Dependability: The ability to issue and receive valid commands in
644 the presence of interference and/or noise, by minimizing the
645 probability of missing command (PMC). Dependability targets are
646 typically set for a specific bit error rate (BER) level.
648 o Security: The ability to prevent false tripping due to a noisy
649 environment, by minimizing the probability of unwanted commands
650 (PUC). Security targets are also set for a specific bit error
651 rate (BER) level.
653 Additional elements of the teleprotection system that impact its
654 performance include:
656 o Network bandwidth
658 o Failure recovery capacity (aka resiliency)
660 3.1.1.1.2. Fault Detection and Clearance Timing
662 Most power line equipment can tolerate short circuits or faults for
663 up to approximately five power cycles before sustaining irreversible
664 damage or affecting other segments in the network. This translates
665 to a total fault clearance time of 100ms. As a safety precaution,
666 however, the actual operating time of protection systems is limited
667 to 70-80 percent of this period, including fault recognition time,
668 command transmission time and line breaker switching time.
670 Some system components, such as large electromechanical switches,
671 require a particularly long time to operate and take up the majority of
672 the total clearance time, leaving only a 10ms window for the
673 telecommunications part of the protection scheme, independent of the
674 distance to travel. Given the sensitivity of the issue, new networks
675 impose requirements that are even more stringent: IEC standard 61850
676 limits the transfer time for protection messages to 1/4 - 1/2 cycle
677 or 4 - 8ms (for 60Hz lines) for the most critical messages.
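The clearance-budget arithmetic above can be written out explicitly; the cycle count and safety factor come from the text, and the 50/60 Hz values are the usual grid frequencies:

```python
# Illustrative arithmetic for the fault clearance budget described above.

def clearance_budget_ms(freq_hz: float, cycles: int = 5,
                        safety_factor: float = 0.75) -> float:
    """Equipment tolerates roughly `cycles` power cycles of fault;
    protection operation is limited to ~70-80% of that tolerance
    (safety_factor), leaving headroom as a safety precaution."""
    cycle_ms = 1000.0 / freq_hz
    return cycles * cycle_ms * safety_factor

print(round(clearance_budget_ms(50), 1))  # -> 75.0 (of a 100 ms tolerance)
print(round(clearance_budget_ms(60), 1))  # -> 62.5 (of an ~83 ms tolerance)

# IEC 61850 most-critical transfer time: 1/4 to 1/2 cycle at 60 Hz
print(round(1000 / 60 / 4, 1), round(1000 / 60 / 2, 1))  # -> 4.2 8.3
```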
679 3.1.1.1.3. Symmetric Channel Delay
681 Teleprotection channels which are differential must be synchronous,
682 which means that any delays on the transmit and receive paths must
683 match each other. Teleprotection systems ideally support zero
684 asymmetric delay; typical legacy relays can tolerate delay
685 discrepancies of up to 750us.
687 Some tools available for lowering delay variation below this
688 threshold are:
690 o For legacy systems using Time Division Multiplexing (TDM), jitter
691 buffers at the multiplexers on each end of the line can be used to
692 offset delay variation by queuing sent and received packets. The
693 length of the queues must balance the need to regulate the rate of
694 transmission with the need to limit overall delay, as larger
695 buffers result in increased latency.
697 o For jitter-prone IP packet networks, traffic management tools can
698 ensure that the teleprotection signals receive the highest
699 transmission priority to minimize jitter.
701 o Standard packet-based synchronization technologies, such as
702 1588-2008 Precision Time Protocol (PTP) and Synchronous Ethernet
703 (Sync-E), can help keep networks stable by maintaining a highly
704 accurate clock source on the various network devices.
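A minimal sketch of the asymmetry check implied above, assuming one-way delays on each direction can be measured (the delay values are illustrative; the 750 us tolerance is the legacy-relay figure from the text):

```python
# Hypothetical check of teleprotection channel asymmetry.

LEGACY_RELAY_TOLERANCE_US = 750  # tolerance cited in the text

def delay_asymmetry_us(tx_delay_us: float, rx_delay_us: float) -> float:
    """Differential teleprotection compares samples taken at the same
    instant, so the critical quantity is |transmit - receive| delay."""
    return abs(tx_delay_us - rx_delay_us)

def channel_ok(tx_delay_us: float, rx_delay_us: float,
               tolerance_us: float = LEGACY_RELAY_TOLERANCE_US) -> bool:
    return delay_asymmetry_us(tx_delay_us, rx_delay_us) <= tolerance_us

print(channel_ok(4200, 4600))  # -> True  (400 us asymmetry)
print(channel_ok(4200, 5100))  # -> False (900 us asymmetry)
```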
706 3.1.1.1.4. Teleprotection Network Requirements (IEC 61850)
708 The following table captures the main network metrics based on the
709 IEC 61850 standard.
711 +-----------------------------+-------------------------------------+
712 | Teleprotection Requirement | Attribute |
713 +-----------------------------+-------------------------------------+
714 | One way maximum delay | 4-10 ms |
715 | Asymmetric delay required | Yes |
716 | Maximum jitter | less than 250 us (750 us for legacy |
717 | | IED) |
718 | Topology | Point to point, point to Multi- |
719 | | point |
720 | Availability | 99.9999 |
721 | precise timing required | Yes |
722 | Recovery time on node | less than 50ms - hitless |
723 | failure | |
724 | performance management | Yes, Mandatory |
725 | Redundancy | Yes |
726 | Packet loss | 0.1% to 1% |
727 +-----------------------------+-------------------------------------+
729 Table 1: Teleprotection network requirements
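One way a network controller might encode Table 1 machine-readably, e.g. for automated SLA checks (field names, the measured values, and the check function are all illustrative, not part of the standard):

```python
# Table 1 as a data structure, plus a hypothetical compliance check.

TELEPROTECTION_REQ = {
    "one_way_max_delay_ms": 10.0,   # upper end of the 4-10 ms range
    "max_jitter_us": 750.0,         # 250 us nominal, 750 us for legacy IEDs
    "availability_pct": 99.9999,
    "recovery_ms": 50.0,
    "max_packet_loss_pct": 1.0,     # 0.1% to 1% in the table
}

def path_meets_requirements(measured: dict,
                            req: dict = TELEPROTECTION_REQ) -> bool:
    """Return True when a measured path satisfies every bound."""
    return (measured["delay_ms"] <= req["one_way_max_delay_ms"]
            and measured["jitter_us"] <= req["max_jitter_us"]
            and measured["loss_pct"] <= req["max_packet_loss_pct"])

print(path_meets_requirements(
    {"delay_ms": 6.0, "jitter_us": 180.0, "loss_pct": 0.05}))   # -> True
print(path_meets_requirements(
    {"delay_ms": 12.0, "jitter_us": 180.0, "loss_pct": 0.05}))  # -> False
```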
731 3.1.1.1.5. Inter-Trip Protection scheme
733 "Inter-tripping" is the signal-controlled tripping of a circuit
734 breaker to complete the isolation of a circuit or piece of apparatus
735 in concert with the tripping of other circuit breakers.
737 +--------------------------------+----------------------------------+
738 | Inter-Trip protection | Attribute |
739 | Requirement | |
740 +--------------------------------+----------------------------------+
741 | One way maximum delay | 5 ms |
742 | Asymmetric delay required | No |
743 | Maximum jitter | Not critical |
744 | Topology | Point to point, point to Multi- |
745 | | point |
746 | Bandwidth | 64 Kbps |
747 | Availability | 99.9999 |
748 | precise timing required | Yes |
749 | Recovery time on node failure | less than 50ms - hitless |
750 | performance management | Yes, Mandatory |
751 | Redundancy | Yes |
752 | Packet loss | 0.1% |
753 +--------------------------------+----------------------------------+
755 Table 2: Inter-Trip protection network requirements
757 3.1.1.1.6. Current Differential Protection Scheme
759 Current differential protection is commonly used for line protection,
760 and is typical for protecting parallel circuits. At both ends of the
761 line the current is measured by the differential relays, and both
762 relays will trip the circuit breaker if the current going into the
763 line does not equal the current going out of the line. This type of
764 protection scheme assumes some form of communications being present
765 between the relays at both ends of the line, to allow both relays to
766 compare measured current values. Line differential protection
767 schemes assume a very low telecommunications delay between both
768 relays, often as low as 5ms. Moreover, as those systems are often
769 not time-synchronized, they also assume symmetric telecommunications
770 paths with constant delay, which allows comparing current measurement
771 values taken at the exact same time.
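The trip decision described above can be sketched as follows; the threshold and current values are illustrative only (real relays use percentage-restraint characteristics, not a fixed threshold):

```python
# Minimal sketch of the current-differential trip decision.

def differential_trip(i_in_amps: float, i_out_amps: float,
                      threshold_amps: float = 50.0) -> bool:
    """Trip when current entering the line does not (within a
    threshold) equal current leaving it: the difference must be
    feeding a fault somewhere on the protected line."""
    return abs(i_in_amps - i_out_amps) > threshold_amps

print(differential_trip(1000.0, 998.0))  # -> False (normal load)
print(differential_trip(1000.0, 620.0))  # -> True  (fault on the line)
```

Note that this comparison is only meaningful if both samples were taken at the same instant, which is why the scheme depends on symmetric, constant-delay paths as the text explains.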
773 +----------------------------------+--------------------------------+
774 | Current Differential protection | Attribute |
775 | Requirement | |
776 +----------------------------------+--------------------------------+
777 | One way maximum delay | 5 ms |
778 | Asymmetric delay Required | Yes |
779 | Maximum jitter | less than 250 us (750us for |
780 | | legacy IED) |
781 | Topology | Point to point, point to |
782 | | Multi-point |
783 | Bandwidth | 64 Kbps |
784 | Availability | 99.9999 |
785 | precise timing required | Yes |
786 | Recovery time on node failure | less than 50ms - hitless |
787 | performance management | Yes, Mandatory |
788 | Redundancy | Yes |
789 | Packet loss | 0.1% |
790 +----------------------------------+--------------------------------+
792 Table 3: Current Differential Protection metrics
794 3.1.1.1.7. Distance Protection Scheme
796 The Distance (Impedance Relay) protection scheme is based on voltage
797 and current measurements. The network metrics are similar (but not
798 identical) to Current Differential protection.
800 +-------------------------------+-----------------------------------+
801 | Distance protection | Attribute |
802 | Requirement | |
803 +-------------------------------+-----------------------------------+
804 | One way maximum delay | 5 ms |
805 | Asymmetric delay Required | No |
806 | Maximum jitter | Not critical |
807 | Topology | Point to point, point to Multi- |
808 | | point |
809 | Bandwidth | 64 Kbps |
810 | Availability | 99.9999 |
811 | precise timing required | Yes |
812 | Recovery time on node failure | less than 50ms - hitless |
813 | performance management | Yes, Mandatory |
814 | Redundancy | Yes |
815 | Packet loss | 0.1% |
816 +-------------------------------+-----------------------------------+
818 Table 4: Distance Protection requirements
820 3.1.1.1.8. Inter-Substation Protection Signaling
822 This use case describes the exchange of Sampled Value and/or GOOSE
823 (Generic Object Oriented Substation Events) messages between
824 Intelligent Electronic Devices (IED) in two substations for
825 protection and tripping coordination. The two IEDs are in a master-
826 slave mode.
828 The Current Transformer or Voltage Transformer (CT/VT) in one
829 substation sends the sampled analog voltage or current value to the
830 Merging Unit (MU) over hard wire. The MU sends the time-synchronized
831 61850-9-2 sampled values to the slave IED. The slave IED forwards
832 the information to the Master IED in the other substation. The
833 master IED makes the determination (for example based on sampled
834 value differentials) to send a trip command to the originating IED.
835 Once the slave IED/Relay receives the GOOSE trip for breaker
836 tripping, it opens the breaker. It then sends a confirmation message
837 back to the master. All data exchanges between IEDs are either
838 through Sampled Value and/or GOOSE messages.
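The master-slave exchange above can be summarized as an ordered message trace; the roles and message names follow the text, while the trace representation itself is purely illustrative:

```python
# Illustrative trace of the inter-substation protection exchange.

def inter_substation_trip_sequence():
    """Each tuple is (sender, receiver, message), in order."""
    return [
        ("MU",         "slave IED",  "61850-9-2 sampled values"),
        ("slave IED",  "master IED", "sampled values (forwarded)"),
        ("master IED", "slave IED",  "GOOSE trip command"),
        ("slave IED",  "breaker",    "open"),
        ("slave IED",  "master IED", "GOOSE confirmation"),
    ]

for src, dst, msg in inter_substation_trip_sequence():
    print(f"{src} -> {dst}: {msg}")
```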
840 +----------------------------------+--------------------------------+
841 | Inter-Substation protection | Attribute |
842 | Requirement | |
843 +----------------------------------+--------------------------------+
844 | One way maximum delay | 5 ms |
845 | Asymmetric delay Required | No |
846 | Maximum jitter | Not critical |
847 | Topology | Point to point, point to |
848 | | Multi-point |
849 | Bandwidth | 64 Kbps |
850 | Availability | 99.9999 |
851 | precise timing required | Yes |
852 | Recovery time on node failure | less than 50ms - hitless |
853 | performance management | Yes, Mandatory |
854 | Redundancy | Yes |
855 | Packet loss | 1% |
856 +----------------------------------+--------------------------------+
858 Table 5: Inter-Substation Protection requirements
860 3.1.1.2. Intra-Substation Process Bus Communications
862 This use case describes the data flow from the CT/VT to the IEDs in
863 the substation via the MU. The CT/VT in the substation send the
864 sampled value (analog voltage or current) to the MU over hard wire.
865 The MU sends the time-synchronized 61850-9-2 sampled values to the
866 IEDs in the substation in GOOSE message format. The GPS Master Clock
867 can send 1PPS or IRIG-B format to the MU through a serial port or
868 IEEE 1588 protocol via a network. Process bus communication using
869 61850 simplifies connectivity within the substation, removing the
870 requirement for multiple serial connections and the slow serial bus
871 architectures that are typically used. It also provides increased
872 flexibility and increased speed through the use of multicast
873 messaging between multiple devices.
875 +----------------------------------+--------------------------------+
876 | Intra-Substation protection | Attribute |
877 | Requirement | |
878 +----------------------------------+--------------------------------+
879 | One way maximum delay | 5 ms |
880 | Asymmetric delay Required | No |
881 | Maximum jitter | Not critical |
882 | Topology | Point to point, point to |
883 | | Multi-point |
884 | Bandwidth | 64 Kbps |
885 | Availability | 99.9999 |
886 | precise timing required | Yes |
887 | Recovery time on Node failure | less than 50ms - hitless |
888 | performance management | Yes, Mandatory |
889 | Redundancy | Yes - No |
890 | Packet loss | 0.1% |
891 +----------------------------------+--------------------------------+
893 Table 6: Intra-Substation Protection requirements
895 3.1.1.3. Wide Area Monitoring and Control Systems
897 The application of synchrophasor measurement data from Phasor
898 Measurement Units (PMU) to Wide Area Monitoring and Control Systems
899 promises to provide important new capabilities for improving system
900 stability. Access to PMU data enables more timely situational
901 awareness over larger portions of the grid than what has been
902 possible historically with normal SCADA (Supervisory Control and Data
903 Acquisition) data. Handling the volume and real-time nature of
904 synchrophasor data presents unique challenges for existing
905 application architectures. A Wide Area Management System (WAMS) makes
906 it possible for the condition of the bulk power system to be observed
907 and understood in real-time so that protective, preventative, or
908 corrective action can be taken. Because of the very high sampling
909 rate of measurements and the strict requirement for time
910 synchronization of the samples, WAMS has stringent telecommunications
911 requirements in an IP network that are captured in the following
912 table:
914 +----------------------+--------------------------------------------+
915 | WAMS Requirement | Attribute |
916 +----------------------+--------------------------------------------+
917 | One way maximum | 50 ms |
918 | delay | |
919 | Asymmetric delay | No |
920 | Required | |
921 | Maximum jitter | Not critical |
922 | Topology | Point to point, point to Multi-point, |
923 | | Multi-point to Multi-point |
924 | Bandwidth | 100 Kbps |
925 | Availability | 99.9999 |
926 | precise timing | Yes |
927 | required | |
928 | Recovery time on | less than 50ms - hitless |
929 | Node failure | |
930 | performance | Yes, Mandatory |
931 | management | |
932 | Redundancy | Yes |
933 | Packet loss | 1% |
934 +----------------------+--------------------------------------------+
936 Table 7: WAMS Special Communication Requirements
938 3.1.1.4. IEC 61850 WAN engineering guidelines requirement
939 classification
941 The IEC (International Electrotechnical Commission) has recently
942 published a Technical Report which offers guidelines on how to define
943 and deploy Wide Area Networks for the interconnection of electric
944 substations, generation plants and SCADA operation centers. IEC TR
945 61850-90-12 classifies WAN communication requirements into four
946 classes. Table 8 summarizes these requirements:
948 +----------------+------------+------------+------------+-----------+
949 | WAN | Class WA | Class WB | Class WC | Class WD |
950 | Requirement | | | | |
951 +----------------+------------+------------+------------+-----------+
952 | Application | EHV (Extra | HV (High | MV (Medium | General |
953 | field | High | Voltage) | Voltage) | purpose |
954 | | Voltage) | | | |
955 | Latency | 5 ms | 10 ms | 100 ms | > 100 ms |
956 | Jitter | 10 us | 100 us | 1 ms | 10 ms |
957 | Latency | 100 us | 1 ms | 10 ms | 100 ms |
958 | Asymmetry | | | | |
959 | Time Accuracy | 1 us | 10 us | 100 us | 10 to 100 |
960 | | | | | ms |
961 | Bit Error rate | 10^-7 to | 10^-5 to | 10^-3 | |
962 | | 10^-6 | 10^-4 | | |
963 | Unavailability | 10^-7 to | 10^-5 to | 10^-3 | |
964 | | 10^-6 | 10^-4 | | |
965 | Recovery delay | Zero | 50 ms | 5 s | 50 s |
966 | Cyber security | extremely | High | Medium | Medium |
967 | | high | | | |
968 +----------------+------------+------------+------------+-----------+
970 Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC
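As a simple illustration of how Table 8 could be applied (the lookup function and example applications are hypothetical, not part of the Technical Report), a planner might select the least stringent class whose latency bound still satisfies an application's one-way delay requirement:

```python
# Illustrative class lookup against the 61850-90-12 latency bounds.

CLASS_LATENCY_MS = {"WA": 5, "WB": 10, "WC": 100}  # WD is > 100 ms

def wan_class_for_latency(required_ms: float) -> str:
    """Return the first class whose latency bound covers the
    application's one-way delay requirement; fall back to WD."""
    for cls, bound in CLASS_LATENCY_MS.items():
        if required_ms <= bound:
            return cls
    return "WD"

print(wan_class_for_latency(4))    # -> WA (e.g. EHV teleprotection)
print(wan_class_for_latency(50))   # -> WC
print(wan_class_for_latency(500))  # -> WD (general purpose)
```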
972 3.1.2. Generation Use Case
974 The electrical power generation frequency should be maintained within
975 a very narrow band. Deviations from the acceptable frequency range
976 are detected and the required signals are sent to the power plants
977 for frequency regulation.
979 Automatic generation control (AGC) is a system for adjusting the
980 power output of generators at different power plants, in response to
981 changes in the load.
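A toy sketch of the regulation loop described above: frequency deviation from nominal drives a correction signal sent to the plants. The gain, deadband, and sample values are all illustrative; real AGC regulates the Area Control Error, which also includes tie-line flows.

```python
# Hypothetical proportional frequency-regulation signal.

NOMINAL_HZ = 50.0     # 60.0 in 60 Hz grids
DEADBAND_HZ = 0.02    # illustrative tolerance band

def agc_signal_mw(measured_hz: float,
                  gain_mw_per_hz: float = 1500.0) -> float:
    """Positive output asks plants to raise generation (frequency is
    low); negative asks them to lower it; inside the deadband, no
    correction is sent."""
    error = NOMINAL_HZ - measured_hz
    if abs(error) <= DEADBAND_HZ:
        return 0.0
    return gain_mw_per_hz * error

print(round(agc_signal_mw(49.9), 1))   # -> 150.0 (raise generation)
print(round(agc_signal_mw(50.05), 1))  # -> -75.0 (lower generation)
print(agc_signal_mw(50.01))            # -> 0.0   (within deadband)
```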
983 +---------------------------------------------------+---------------+
984 | FCAG (Frequency Control Automatic Generation) | Attribute |
985 | Requirement | |
986 +---------------------------------------------------+---------------+
987 | One way maximum delay | 500 ms |
988 | Asymmetric delay Required | No |
989 | Maximum jitter | Not critical |
990 | Topology | Point to |
991 | | point |
992 | Bandwidth | 20 Kbps |
993 | Availability | 99.999 |
994 | precise timing required | Yes |
995 | Recovery time on Node failure | N/A |
996 | performance management | Yes, |
997 | | Mandatory |
998 | Redundancy | Yes |
999 | Packet loss | 1% |
1000 +---------------------------------------------------+---------------+
1002 Table 9: FCAG Communication Requirements
1004 3.1.3. Distribution use case
1006 3.1.3.1. Fault Location Isolation and Service Restoration (FLISR)
1008 Fault Location, Isolation, and Service Restoration (FLISR) refers to
1009 the ability to automatically locate the fault, isolate the fault, and
1010 restore service in the distribution network. This will likely be the
1011 first widespread application of distributed intelligence in the grid.
1013 Static power switch status (open/closed) in the network dictates the
1014 power flow to secondary substations. Reconfiguring the network in
1015 the event of a fault is typically done manually on site to energize/
1016 de-energize alternate paths. Automating the operation of substation
1017 switchgear allows the flow of power to be altered automatically under
1018 fault conditions.
1020 FLISR can be managed centrally from a Distribution Management System
1021 (DMS) or executed locally through distributed control via intelligent
1022 switches and fault sensors.
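The locate/isolate/restore sequence can be sketched for the simplest case, a radial feeder with one normally-open tie switch; the topology, switch names, and action list are entirely illustrative assumptions:

```python
# Simplified FLISR action sequence on a radial feeder.  Segment i lies
# between switches[i] and switches[i + 1]; a tie switch can back-feed
# the healthy downstream segments after isolation.

def flisr(switches, faulted_segment):
    """Isolate by opening the switches on either side of the faulted
    segment, then restore by closing the tie switch so healthy
    downstream segments are re-energized from the alternate source."""
    actions = []
    actions.append(("open", switches[faulted_segment]))      # upstream side
    actions.append(("open", switches[faulted_segment + 1]))  # downstream side
    actions.append(("close", "tie-switch"))                  # restore service
    return actions

print(flisr(["S1", "S2", "S3", "S4"], faulted_segment=1))
# -> [('open', 'S2'), ('open', 'S3'), ('close', 'tie-switch')]
```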
1024 +----------------------+--------------------------------------------+
1025 | FLISR Requirement | Attribute |
1026 +----------------------+--------------------------------------------+
1027 | One way maximum | 80 ms |
1028 | delay | |
1029 | Asymmetric delay | No |
1030 | Required | |
1031 | Maximum jitter | 40 ms |
1032 | Topology | Point to point, point to Multi-point, |
1033 | | Multi-point to Multi-point |
1034 | Bandwidth | 64 Kbps |
1035 | Availability | 99.9999 |
1036 | precise timing | Yes |
1037 | required | |
1038 | Recovery time on | Depends on customer impact |
1039 | Node failure | |
1040 | performance | Yes, Mandatory |
1041 | management | |
1042 | Redundancy | Yes |
1043 | Packet loss | 0.1% |
1044 +----------------------+--------------------------------------------+
1046 Table 10: FLISR Communication Requirements
1048 3.2. Electrical Utilities Today
1050 Many utilities still rely on complex environments formed of multiple
1051 application-specific proprietary networks, including TDM networks.
1053 In this kind of environment there is no mixing of OT and IT
1054 applications on the same network, and information is siloed between
1055 operational areas.
1057 Specific calibration of the full chain is required, which is costly.
1059 This kind of environment prevents utility operations from realizing
1060 the operational efficiency benefits, visibility, and functional
1061 integration of operational information across grid applications and
1062 data networks.
1064 In addition, there are many security-related issues as discussed in
1065 the following section.
1067 3.2.1. Security Current Practices and Limitations
1069 Grid monitoring and control devices are already targets for cyber
1070 attacks, and legacy telecommunications protocols have many intrinsic
1071 network-related vulnerabilities. For example, DNP3, Modbus,
1072 PROFIBUS/PROFINET, and other protocols are designed around a common
1073 paradigm of request and respond. Each protocol is designed for a
1074 master device such as an HMI (Human Machine Interface) system to send
1075 commands to subordinate slave devices to retrieve data (reading
1076 inputs) or control (writing to outputs). Because many of these
1077 protocols lack authentication, encryption, or other basic security
1078 measures, they are prone to network-based attacks, allowing a
1079 malicious actor or attacker to utilize the request-and-respond system
1080 as a mechanism for command-and-control like functionality. Specific
1081 security concerns common to most industrial control protocols,
1082 including utility telecommunication protocols, include the following:
1084 o Network or transport errors (e.g. malformed packets or excessive
1085 latency) can cause protocol failure.
1087 o Protocol commands may be available that are capable of forcing
1088 slave devices into inoperable states, including powering-off
1089 devices, forcing them into a listen-only state, or disabling
1090 alarms.
1092 o Protocol commands may be available that are capable of restarting
1093 communications and otherwise interrupting processes.
1095 o Protocol commands may be available that are capable of clearing,
1096 erasing, or resetting diagnostic information such as counters and
1097 diagnostic registers.
1099 o Protocol commands may be available that are capable of requesting
1100 sensitive information about the controllers, their configurations,
1101 or other need-to-know information.
1103 o Most protocols are application layer protocols transported over
1104 TCP; therefore it is easy to transport commands over non-standard
1105 ports or inject commands into authorized traffic flows.
1107 o Protocol commands may be available that are capable of
1108 broadcasting messages to many devices at once (i.e. a potential
1109 DoS).
1111 o Protocol commands may be available to query the device network to
1112 obtain defined points and their values (i.e. a configuration
1113 scan).
1115 o Protocol commands may be available that will list all available
1116 function codes (i.e. a function scan).
1118 These inherent vulnerabilities, along with increasing connectivity
1119 between IT and OT networks, make network-based attacks very feasible.
1121 Simple injection of malicious protocol commands provides control over
1122 the target process. Altering legitimate protocol traffic can also
1123 alter information about a process and disrupt the legitimate controls
1124 that are in place over that process. A man-in-the-middle attack
1125 could provide both control over a process and misrepresentation of
1126 data back to operator consoles.
1128 3.3. Electrical Utilities Future
1130 The business and technology trends that are sweeping the utility
1131 industry will drastically transform the utility business from the way
1132 it has been for many decades. At the core of many of these changes
1133 is a drive to modernize the electrical grid with an integrated
1134 telecommunications infrastructure. However, interoperability
1135 concerns, legacy networks, disparate tools, and stringent security
1136 requirements all add complexity to the grid transformation. Given
1137 the range and diversity of the requirements that should be addressed
1138 by the next generation telecommunications infrastructure, utilities
1139 need to adopt a holistic architectural approach to integrate the
1140 electrical grid with digital telecommunications across the entire
1141 power delivery chain.
1143 The key to modernizing grid telecommunications is to provide a
1144 common, adaptable, multi-service network infrastructure for the
1145 entire utility organization. Such a network serves as the platform
1146 for current capabilities while enabling future expansion of the
1147 network to accommodate new applications and services.
1149 To meet this diverse set of requirements, both today and in the
1150 future, the next generation utility telecommunications network will
1151 be based on an open-standards-based IP architecture. An end-to-end IP
1152 architecture takes advantage of nearly three decades of IP technology
1153 development, facilitating interoperability across disparate networks
1154 and devices, as has already been demonstrated in many mission-
1155 critical and highly secure networks.
1157 IPv6 is seen as a future telecommunications technology for the Smart
1158 Grid; the IEC (International Electrotechnical Commission) and
1159 different National Committees have mandated a specific ad hoc group
1160 (AHG8) to define the migration strategy to IPv6 for all the IEC TC57
1161 power automation standards.
1163 3.3.1. Migration to Packet-Switched Network
1165 Throughout the world, utilities are increasingly planning for a
1166 future based on smart grid applications requiring advanced
1167 telecommunications systems. Many of these applications utilize
1168 packet connectivity for communicating information and control signals
1169 across the utility's Wide Area Network (WAN), made possible by
1170 technologies such as multiprotocol label switching (MPLS). The data
1171 that traverses the utility WAN includes:
1173 o Grid monitoring, control, and protection data
1175 o Non-control grid data (e.g. asset data for condition-based
1176 monitoring)
1178 o Physical safety and security data (e.g. voice and video)
1180 o Remote worker access to corporate applications (voice, maps,
1181 schematics, etc.)
1183 o Field area network backhaul for smart metering, and distribution
1184 grid management
1186 o Enterprise traffic (email, collaboration tools, business
1187 applications)
1189 WANs support this wide variety of traffic to and from substations,
1190 the transmission and distribution grid, generation sites, between
1191 control centers, and between work locations and data centers. To
1192 maintain this rapidly expanding set of applications, many utilities
1193 are taking steps to evolve present time-division multiplexing (TDM)
1194 based and frame relay infrastructures to packet systems. Packet-
1195 based networks are designed to provide greater functionalities and
1196 higher levels of service for applications, while continuing to
1197 deliver reliability and deterministic (real-time) traffic support.
1199 3.3.2. Telecommunications Trends
1201 The following general telecommunications topics are in addition to
1202 the use cases addressed so far. They include both current and future
1203 telecommunications-related topics that should be factored into the
1204 network architecture and design.
1206 3.3.2.1. General Telecommunications Requirements
1208 o IP Connectivity everywhere
1210 o Monitoring services everywhere and from different remote centers
1212 o Move services to a virtual data center
1214 o Unify access to applications / information from the corporate
1215 network
1217 o Unify services
1219 o Unified Communications Solutions
1221 o Mix of fiber and microwave technologies - obsolescence of SONET/
1222 SDH or TDM
1224 o Standardize grid telecommunications protocol to opened standard to
1225 ensure interoperability
1227 o Reliable Telecommunications for Transmission and Distribution
1228 Substations
1230 o IEEE 1588 time synchronization Client / Server Capabilities
1232 o Integration of Multicast Design
1234 o QoS Requirements Mapping
1236 o Enable Future Network Expansion
1238 o Substation Network Resilience
1240 o Fast Convergence Design
1242 o Scalable Headend Design
1244 o Define Service Level Agreements (SLA) and Enable SLA Monitoring
1246 o Integration of 3G/4G Technologies and future technologies
1248 o Ethernet Connectivity for Station Bus Architecture
1250 o Ethernet Connectivity for Process Bus Architecture
1252    o  Protection, teleprotection and PMU (Phasor Measurement Unit) on IP
1254 3.3.2.2. Specific Network topologies of Smart Grid Applications
1256    Utilities often have very large private telecommunications networks
1257    covering an entire territory or country.  The main purpose of the
1258 network, until now, has been to support transmission network
1259 monitoring, control, and automation, remote control of generation
1260 sites, and providing FCAPS (Fault, Configuration, Accounting,
1261 Performance, Security) services from centralized network operation
1262 centers.
1264 Going forward, one network will support operation and maintenance of
1265 electrical networks (generation, transmission, and distribution),
1266    voice and data services for tens of thousands of employees and for
1267 exchange with neighboring interconnections, and administrative
1268    services.  To meet those requirements, a utility may deploy several
1269 physical networks leveraging different technologies across the
1270 country: an optical network and a microwave network for instance.
1271    Each protection and automation system between two points has two
1272 telecommunications circuits, one on each network. Path diversity
1273 between two substations is key. Regardless of the event type
1274 (hurricane, ice storm, etc.), one path shall stay available so the
1275 system can still operate.
1277    In the optical network, signals are transmitted over tens of
1278    thousands of circuits using fiber optic links, microwave, and
1279 telephone cables. This network is the nervous system of the
1280 utility's power transmission operations. The optical network
1281    represents tens of thousands of km of cable deployed along the power
1282 lines, with individual runs as long as 280 km.
1284 3.3.2.3. Precision Time Protocol
1286 Some utilities do not use GPS clocks in generation substations. One
1287 of the main reasons is that some of the generation plants are 30 to
1288 50 meters deep under ground and the GPS signal can be weak and
1289    unreliable.  Instead, atomic clocks are used and are synchronized
1290    amongst each other.  Rubidium clocks provide the clock signal and
1291    1 ms timestamps for IRIG-B.
1293 Some companies plan to transition to the Precision Time Protocol
1294 (PTP, [IEEE1588]), distributing the synchronization signal over the
1295 IP/MPLS network. PTP provides a mechanism for synchronizing the
1296 clocks of participating nodes to a high degree of accuracy and
1297 precision.
1299 PTP operates based on the following assumptions:
1301 It is assumed that the network eliminates cyclic forwarding of PTP
1302 messages within each communication path (e.g. by using a spanning
1303 tree protocol).
1305 PTP is tolerant of an occasional missed message, duplicated
1306 message, or message that arrived out of order. However, PTP
1307 assumes that such impairments are relatively rare.
1309 PTP was designed assuming a multicast communication model, however
1310 PTP also supports a unicast communication model as long as the
1311 behavior of the protocol is preserved.
1313 Like all message-based time transfer protocols, PTP time accuracy
1314 is degraded by delay asymmetry in the paths taken by event
1315       messages.  Asymmetry is not detectable by PTP; however, if such
1316 delays are known a priori, PTP can correct for asymmetry.
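The offset and path-delay computation implied by these assumptions can be sketched as below.  This is an illustrative simplification of the IEEE 1588 two-way time transfer arithmetic, not the normative algorithm; the timestamp values used are invented.

```python
# Sketch of PTP-style two-way time transfer (IEEE 1588 concept).
# Timestamps: t1 = Sync sent by master, t2 = Sync received by slave,
# t3 = Delay_Req sent by slave, t4 = Delay_Req received by master.

def ptp_offset_and_delay(t1, t2, t3, t4, asymmetry=0.0):
    """Return (offset, mean_path_delay) of the slave clock.

    'asymmetry' is a known extra one-way delay (master-to-slave
    direction, relative to the mean path delay).  PTP cannot detect
    asymmetry, but can correct for it when known a priori.
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    # Without correction, PTP assumes both directions take
    # mean_path_delay; a known asymmetry shifts the offset estimate.
    offset = ((t2 - t1) - (t4 - t3)) / 2.0 - asymmetry
    return offset, mean_path_delay
```

For instance, a slave 10 units ahead of the master across a symmetric 100-unit path yields t1..t4 of 0, 110, 200, 290, and the function recovers offset 10 and delay 100.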
1318 IEC 61850 will recommend the use of the IEEE PTP 1588 Utility Profile
1319 (as defined in [IEC62439-3:2012] Annex B) which offers the support of
1320    redundant attachment of clocks to Parallel Redundancy Protocol (PRP)
1321 and High-availability Seamless Redundancy (HSR) networks.
1323 3.3.3. Security Trends in Utility Networks
1325 Although advanced telecommunications networks can assist in
1326 transforming the energy industry by playing a critical role in
1327 maintaining high levels of reliability, performance, and
1328 manageability, they also introduce the need for an integrated
1329 security infrastructure. Many of the technologies being deployed to
1330 support smart grid projects such as smart meters and sensors can
1331 increase the vulnerability of the grid to attack. Top security
1332 concerns for utilities migrating to an intelligent smart grid
1333 telecommunications platform center on the following trends:
1335 o Integration of distributed energy resources
1337 o Proliferation of digital devices to enable management, automation,
1338 protection, and control
1340 o Regulatory mandates to comply with standards for critical
1341 infrastructure protection
1343 o Migration to new systems for outage management, distribution
1344 automation, condition-based maintenance, load forecasting, and
1345 smart metering
1347 o Demand for new levels of customer service and energy management
1349 This development of a diverse set of networks to support the
1350 integration of microgrids, open-access energy competition, and the
1351 use of network-controlled devices is driving the need for a converged
1352 security infrastructure for all participants in the smart grid,
1353 including utilities, energy service providers, large commercial and
1354 industrial, as well as residential customers. Securing the assets of
1355 electric power delivery systems (from the control center to the
1356 substation, to the feeders and down to customer meters) requires an
1357 end-to-end security infrastructure that protects the myriad of
1358 telecommunications assets used to operate, monitor, and control power
1359 flow and measurement.
1361 "Cyber security" refers to all the security issues in automation and
1362 telecommunications that affect any functions related to the operation
1363 of the electric power systems. Specifically, it involves the
1364 concepts of:
1366 o Integrity : data cannot be altered undetectably
1368 o Authenticity : the telecommunications parties involved must be
1369 validated as genuine
1371 o Authorization : only requests and commands from the authorized
1372 users can be accepted by the system
1374 o Confidentiality : data must not be accessible to any
1375 unauthenticated users
1377 When designing and deploying new smart grid devices and
1378 telecommunications systems, it is imperative to understand the
1379 various impacts of these new components under a variety of attack
1380 situations on the power grid. Consequences of a cyber attack on the
1381 grid telecommunications network can be catastrophic. This is why
1382    security for the smart grid is not just an ad hoc feature or product
1383    but a complete framework integrating both physical and cyber
1384    security requirements and covering the entire smart grid network
1385 from generation to distribution. Security has therefore become one
1386 of the main foundations of the utility telecom network architecture
1387 and must be considered at every layer with a defense-in-depth
1388    approach.  Migrating to IP-based protocols is key to addressing these
1389 challenges for two reasons:
1391 o IP enables a rich set of features and capabilities to enhance the
1392 security posture
1394 o IP is based on open standards, which allows interoperability
1395 between different vendors and products, driving down the costs
1396 associated with implementing security solutions in OT networks.
1398    Securing OT (Operational Technology) telecommunications over packet-
1399    switched IP networks follows the same principles that are foundational
1400 for securing the IT infrastructure, i.e., consideration must be given
1401 to enforcing electronic access control for both person-to-machine and
1402 machine-to-machine communications, and providing the appropriate
1403 levels of data privacy, device and platform integrity, and threat
1404 detection and mitigation.
1406 3.4. Electrical Utilities Asks
1408 o Mixed L2 and L3 topologies
1410 o Deterministic behavior
1412 o Bounded latency and jitter
1414 o High availability, low recovery time
1416 o Redundancy, low packet loss
1418 o Precise timing
1420 o Centralized computing of deterministic paths
1422 o Distributed configuration may also be useful
1424 4. Building Automation Systems
1426 4.1. Use Case Description
1428 A Building Automation System (BAS) manages equipment and sensors in a
1429 building for improving residents' comfort, reducing energy
1430 consumption, and responding to failures and emergencies. For
1431 example, the BAS measures the temperature of a room using sensors and
1432 then controls the HVAC (heating, ventilating, and air conditioning)
1433 to maintain a set temperature and minimize energy consumption.
1435 A BAS primarily performs the following functions:
1437 o Periodically measures states of devices, for example humidity and
1438       illuminance of rooms, open/close state of doors, fan speed, etc.
1440 o Stores the measured data.
1442 o Provides the measured data to BAS systems and operators.
1444 o Generates alarms for abnormal state of devices.
1446 o Controls devices (e.g. turn off room lights at 10:00 PM).
1448 4.2. Building Automation Systems Today
1449 4.2.1. BAS Architecture
1451 A typical BAS architecture of today is shown in Figure 1.
1453 +----------------------------+
1454 | |
1455 | BMS HMI |
1456 | | | |
1457 | +----------------------+ |
1458 | | Management Network | |
1459 | +----------------------+ |
1460 | | | |
1461 | LC LC |
1462 | | | |
1463 | +----------------------+ |
1464 | | Field Network | |
1465 | +----------------------+ |
1466 | | | | | |
1467 | Dev Dev Dev Dev |
1468 | |
1469 +----------------------------+
1471 BMS := Building Management Server
1472 HMI := Human Machine Interface
1473 LC := Local Controller
1475 Figure 1: BAS architecture
1477 There are typically two layers of network in a BAS. The upper one is
1478 called the Management Network and the lower one is called the Field
1479 Network. In management networks an IP-based communication protocol
1480 is used, while in field networks non-IP based communication protocols
1481 ("field protocols") are mainly used. Field networks have specific
1482 timing requirements, whereas management networks can be best-effort.
1484 A Human Machine Interface (HMI) is typically a desktop PC used by
1485 operators to monitor and display device states, send device control
1486 commands to Local Controllers (LCs), and configure building schedules
1487 (for example "turn off all room lights in the building at 10:00 PM").
1489    A Building Management Server (BMS) performs the following operations:
1491 o Collect and store device states from LCs at regular intervals.
1493 o Send control values to LCs according to a building schedule.
1495 o Send an alarm signal to operators if it detects abnormal devices
1496 states.
1498 The BMS and HMI communicate with LCs via IP-based "management
1499 protocols" (see standards [bacnetip], [knx]).
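The BMS operations listed above can be sketched as a single polling cycle.  This is a hypothetical illustration only: read_lc, write_lc, store, and alarm stand in for operations carried over a management protocol such as BACnet/IP and are not real APIs.

```python
# Hypothetical sketch of a BMS main loop: periodic collection of
# device states from LCs, alarming on abnormal states, and
# schedule-driven control.  All callables are stand-ins.

import time

def bms_cycle(lcs, schedule, store, alarm, read_lc, write_lc, now=None):
    """Run one pass of a simplified BMS cycle."""
    now = now if now is not None else time.time()
    for lc in lcs:
        states = read_lc(lc)              # collect device states
        store(lc, now, states)            # keep history
        for dev, value in states.items():
            if value.get("abnormal"):     # alarm on abnormal state
                alarm(lc, dev, value)
    for lc, command in schedule(now):     # schedule-driven control
        write_lc(lc, command)
```

In a real deployment this cycle would run at the regular collection interval, with the schedule function implementing building rules such as "turn off all room lights at 10:00 PM".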
1501    An LC is typically a Programmable Logic Controller (PLC) which is
1502 connected to several tens or hundreds of devices using "field
1503 protocols". An LC performs the following kinds of operations:
1505 o Measure device states and provide the information to BMS or HMI.
1507 o Send control values to devices, unilaterally or as part of a
1508 feedback control loop.
1510 There are many field protocols used today; some are standards-based
1511 and others are proprietary (see standards [lontalk], [modbus],
1512 [profibus] and [flnet]). The result is that BASs have multiple MAC/
1513 PHY modules and interfaces. This makes BASs more expensive, slower
1514 to develop, and can result in "vendor lock-in" with multiple types of
1515 management applications.
1517 4.2.2. BAS Deployment Model
1519 An example BAS for medium or large buildings is shown in Figure 2.
1520 The physical layout spans multiple floors, and there is a monitoring
1521 room where the BAS management entities are located. Each floor will
1522 have one or more LCs depending upon the number of devices connected
1523 to the field network.
1525 +--------------------------------------------------+
1526 | Floor 3 |
1527 | +----LC~~~~+~~~~~+~~~~~+ |
1528 | | | | | |
1529 | | Dev Dev Dev |
1530 | | |
1531 |--- | ------------------------------------------|
1532 | | Floor 2 |
1533 | +----LC~~~~+~~~~~+~~~~~+ Field Network |
1534 | | | | | |
1535 | | Dev Dev Dev |
1536 | | |
1537 |--- | ------------------------------------------|
1538 | | Floor 1 |
1539 | +----LC~~~~+~~~~~+~~~~~+ +-----------------|
1540 | | | | | | Monitoring Room |
1541 | | Dev Dev Dev | |
1542 | | | BMS HMI |
1543 | | Management Network | | | |
1544 | +--------------------------------+-----+ |
1545 | | |
1546 +--------------------------------------------------+
1548 Figure 2: BAS Deployment model for Medium/Large Buildings
1550 Each LC is connected to the monitoring room via the Management
1551 network, and the management functions are performed within the
1552 building. In most cases, fast Ethernet (e.g. 100BASE-T) is used for
1553 the management network. Since the management network is non-
1554 realtime, use of Ethernet without quality of service is sufficient
1555 for today's deployment.
1557 In the field network a variety of physical interfaces such as RS232C
1558 and RS485 are used, which have specific timing requirements. Thus if
1559 a field network is to be replaced with an Ethernet or wireless
1560 network, such networks must support time-critical deterministic
1561 flows.
1563 In Figure 3, another deployment model is presented in which the
1564 management system is hosted remotely. This is becoming popular for
1565 small office and residential buildings in which a standalone
1566 monitoring system is not cost-effective.
1568 +---------------+
1569 | Remote Center |
1570 | |
1571 | BMS HMI |
1572 +------------------------------------+ | | | |
1573 | Floor 2 | | +---+---+ |
1574 | +----LC~~~~+~~~~~+ Field Network| | | |
1575 | | | | | | Router |
1576 | | Dev Dev | +-------|-------+
1577 | | | |
1578 |--- | ------------------------------| |
1579 | | Floor 1 | |
1580 | +----LC~~~~+~~~~~+ | |
1581 | | | | | |
1582 | | Dev Dev | |
1583 | | | |
1584 | | Management Network | WAN |
1585 | +------------------------Router-------------+
1586 | |
1587 +------------------------------------+
1589 Figure 3: Deployment model for Small Buildings
1591 Some interoperability is possible today in the Management Network,
1592 but not in today's field networks due to their non-IP-based design.
1594 4.2.3. Use Cases for Field Networks
1596 Below are use cases for Environmental Monitoring, Fire Detection, and
1597 Feedback Control, and their implications for field network
1598 performance.
1600 4.2.3.1. Environmental Monitoring
1602 The BMS polls each LC at a maximum measurement interval of 100ms (for
1603 example to draw a historical chart of 1 second granularity with a 10x
1604 sampling interval) and then performs the operations as specified by
1605 the operator. Each LC needs to measure each of its several hundred
1606 sensors once per measurement interval. Latency is not critical in
1607    this scenario as long as all sensor readings are collected within the
1608    measurement interval.  Availability is expected to be 99.999%.
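A back-of-the-envelope check of the polling budget described above can be made as follows; the sensor count is an illustrative assumption within the "several hundred" range stated.

```python
# Illustrative budget check: an LC must read all of its sensors
# within one measurement interval (100 ms in this scenario).

def per_sensor_budget_ms(interval_ms=100, sensors=300):
    """Time available per sensor if reads were strictly sequential."""
    return interval_ms / sensors
```

With 300 sensors per LC, a strictly sequential scan leaves only about 0.33 ms per sensor, which suggests why an LC typically reads sensors concurrently or over a shared field bus rather than one at a time.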
1610 4.2.3.2. Fire Detection
1612 On detection of a fire, the BMS must stop the HVAC, close the fire
1613 shutters, turn on the fire sprinklers, send an alarm, etc. There are
1614    typically ~10s of sensors per LC that the BMS needs to manage.  In this
1615 scenario the measurement interval is 10-50ms, the communication delay
1616    is 10ms, and the availability must be 99.9999%.
1618 4.2.3.3. Feedback Control
1620 BAS systems utilize feedback control in various ways; the most time-
1621    critical is control of DC motors, which require a short feedback
1622 interval (1-5ms) with low communication delay (10ms) and jitter
1623 (1ms). The feedback interval depends on the characteristics of the
1624 device and a target quality of control value. There are typically
1625 ~10s of such devices per LC.
1627    Communication delay is expected to be less than 10 ms and jitter less
1628    than 1 ms, while the availability must be 99.9999%.
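A minimal sketch of checking delivery times against the DC motor control bounds stated above (communication delay under 10 ms, jitter under 1 ms); the delay samples used below are invented example data.

```python
# Illustrative check of a feedback loop's per-packet delays (ms)
# against a delay bound and a simple jitter measure (max - min).

def meets_bounds(delays_ms, max_delay=10.0, max_jitter=1.0):
    """True if every delay and the observed spread are in bounds."""
    return (max(delays_ms) <= max_delay
            and (max(delays_ms) - min(delays_ms)) <= max_jitter)
```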
1630 4.2.4. Security Considerations
1632 When BAS field networks were developed it was assumed that the field
1633 networks would always be physically isolated from external networks
1634 and therefore security was not a concern. In today's world many BASs
1635 are managed remotely and are thus connected to shared IP networks and
1636 so security is definitely a concern, yet security features are not
1637    available in the majority of BAS field network deployments.
1639 The management network, being an IP-based network, has the protocols
1640 available to enable network security, but in practice many BAS
1641 systems do not implement even the available security features such as
1642 device authentication or encryption for data in transit.
1644 4.3. BAS Future
1646 In the future we expect more fine-grained environmental monitoring
1647 and lower energy consumption, which will require more sensors and
1648 devices, thus requiring larger and more complex building networks.
1650 We expect building networks to be connected to or converged with
1651 other networks (Enterprise network, Home network, and Internet).
1653 Therefore better facilities for network management, control,
1654 reliability and security are critical in order to improve resident
1655 and operator convenience and comfort. For example the ability to
1656 monitor and control building devices via the internet would enable
1657 (for example) control of room lights or HVAC from a resident's
1658 desktop PC or phone application.
1660 4.4. BAS Asks
1662 The community would like to see an interoperable protocol
1663 specification that can satisfy the timing, security, availability and
1664 QoS constraints described above, such that the resulting converged
1665 network can replace the disparate field networks. Ideally this
1666 connectivity could extend to the open Internet.
1668    This would imply an architecture that can guarantee:
1670 o Low communication delays (from <10ms to 100ms in a network of
1671 several hundred devices)
1673 o Low jitter (< 1 ms)
1675 o Tight feedback intervals (1ms - 10ms)
1677    o  High network availability (up to 99.9999%)
1679    o  Availability of network data in disaster scenarios
1681 o Authentication between management and field devices (both local
1682 and remote)
1684 o Integrity and data origin authentication of communication data
1685 between field and management devices
1687 o Confidentiality of data when communicated to a remote device
1689 5. Wireless for Industrial
1691 5.1. Use Case Description
1693 Wireless networks are useful for industrial applications, for example
1694 when portable, fast-moving or rotating objects are involved, and for
1695 the resource-constrained devices found in the Internet of Things
1696 (IoT).
1698 Such network-connected sensors, actuators, control loops (etc.)
1699 typically require that the underlying network support real-time
1700 quality of service (QoS), as well as specific classes of other
1701 network properties such as reliability, redundancy, and security.
1703 These networks may also contain very large numbers of devices, for
1704 example for factories, "big data" acquisition, and the IoT. Given
1705 the large numbers of devices installed, and the potential
1706 pervasiveness of the IoT, this is a huge and very cost-sensitive
1707 market. For example, a 1% cost reduction in some areas could save
1708    $100B.
1710 5.1.1. Network Convergence using 6TiSCH
1712 Some wireless network technologies support real-time QoS, and are
1713 thus useful for these kinds of networks, but others do not. For
1714 example WiFi is pervasive but does not provide guaranteed timing or
1715 delivery of packets, and thus is not useful in this context.
1717 In this use case we focus on one specific wireless network technology
1718 which does provide the required deterministic QoS, which is "IPv6
1719 over the TSCH mode of IEEE 802.15.4e" (6TiSCH, where TSCH stands for
1720 "Time-Slotted Channel Hopping", see [I-D.ietf-6tisch-architecture],
1721 [IEEE802154], [IEEE802154e], and [RFC7554]).
1723 There are other deterministic wireless busses and networks available
1724    today, however they are incompatible with each other, and
1725 incompatible with IP traffic (for example [ISA100], [WirelessHART]).
1727    Thus the primary goal of this use case is to apply 6TiSCH as a
1728 converged IP- and standards-based wireless network for industrial
1729 applications, i.e. to replace multiple proprietary and/or
1730 incompatible wireless networking and wireless network management
1731 standards.
1733 5.1.2. Common Protocol Development for 6TiSCH
1735 Today there are a number of protocols required by 6TiSCH which are
1736 still in development, and a second intent of this use case is to
1737 highlight the ways in which these "missing" protocols share goals in
1738 common with DetNet. Thus it is possible that some of the protocol
1739 technology developed for DetNet will also be applicable to 6TiSCH.
1741 These protocol goals are identified here, along with their
1742 relationship to DetNet. It is likely that ultimately the resulting
1743 protocols will not be identical, but will share design principles
1744    which contribute to the efficiency of enabling both DetNet and 6TiSCH.
1746    One such commonality is that, although at different time scales, in
1747    both TSN [IEEE802.1TSNTG] and TSCH a packet crossing the network from
1748    node to node follows a precise schedule, like a train that leaves
1749    intermediate stations at precise times along its path.  This kind of
1750 operation reduces collisions, saves energy, and enables engineering
1751 the network for deterministic properties.
1753 Another commonality is remote monitoring and scheduling management of
1754 a TSCH network by a Path Computation Element (PCE) and Network
1755 Management Entity (NME). The PCE/NME manage timeslots and device
1756 resources in a manner that minimizes the interaction with and the
1757 load placed on resource-constrained devices. For example, a tiny IoT
1758 device may have just enough buffers to store one or a few IPv6
1759 packets, and will have limited bandwidth between peers such that it
1760 can maintain only a small amount of peer information, and will not be
1761 able to store many packets waiting to be forwarded. It is
1762 advantageous then for it to only be required to carry out the
1763 specific behavior assigned to it by the PCE/NME (as opposed to
1764 maintaining its own IP stack, for example).
1766 6TiSCH depends on [PCE] and [I-D.finn-detnet-architecture], and we
1767 expect that DetNet will maintain consistency with [IEEE802.1TSNTG].
1769 5.2. Wireless Industrial Today
1771 Today industrial wireless is accomplished using multiple
1772 deterministic wireless networks which are incompatible with each
1773 other and with IP traffic.
1775 6TiSCH is not yet fully specified, so it cannot be used in today's
1776 applications.
1778 5.3. Wireless Industrial Future
1780 5.3.1. Unified Wireless Network and Management
1782 We expect DetNet and 6TiSCH together to enable converged transport of
1783 deterministic and best-effort traffic flows between real-time
1784 industrial devices and wide area networks via IP routing. A high
1785 level view of a basic such network is shown in Figure 4.
1787 ---+-------- ............ ------------
1788 | External Network |
1789 | +-----+
1790 +-----+ | NME |
1791 | | LLN Border | |
1792 | | router +-----+
1793 +-----+
1794 o o o
1795 o o o o
1796 o o LLN o o o
1797 o o o o
1798 o
1800 Figure 4: Basic 6TiSCH Network
1802 Figure 5 shows a backbone router federating multiple synchronized
1803 6TiSCH subnets into a single subnet connected to the external
1804 network.
1806 ---+-------- ............ ------------
1807 | External Network |
1808 | +-----+
1809 | +-----+ | NME |
1810 +-----+ | +-----+ | |
1811 | | Router | | PCE | +-----+
1812 | | +--| |
1813 +-----+ +-----+
1814 | |
1815 | Subnet Backbone |
1816 +--------------------+------------------+
1817 | | |
1818 +-----+ +-----+ +-----+
1819 | | Backbone | | Backbone | | Backbone
1820 o | | router | | router | | router
1821 +-----+ +-----+ +-----+
1822 o o o o o
1823 o o o o o o o o o o o
1824 o o o LLN o o o o
1825 o o o o o o o o o o o o
1827 Figure 5: Extended 6TiSCH Network
1829 The backbone router must ensure end-to-end deterministic behavior
1830 between the LLN and the backbone. We would like to see this
1831 accomplished in conformance with the work done in
1832 [I-D.finn-detnet-architecture] with respect to Layer-3 aspects of
1833 deterministic networks that span multiple Layer-2 domains.
1835 The PCE must compute a deterministic path end-to-end across the TSCH
1836 network and IEEE802.1 TSN Ethernet backbone, and DetNet protocols are
1837 expected to enable end-to-end deterministic forwarding.
1839 +-----+
1840 | IoT |
1841 | G/W |
1842 +-----+
1843 ^ <---- Elimination
1844 | |
1845 Track branch | |
1846 +-------+ +--------+ Subnet Backbone
1847 | |
1848 +--|--+ +--|--+
1849 | | | Backbone | | | Backbone
1850 o | | | router | | | router
1851 +--/--+ +--|--+
1852 o / o o---o----/ o
1853 o o---o--/ o o o o o
1854 o \ / o o LLN o
1855 o v <---- Replication
1856 o
1858 Figure 6: 6TiSCH Network with PRE
1860 5.3.1.1. PCE and 6TiSCH ARQ Retries
1862 6TiSCH uses the IEEE802.15.4 Automatic Repeat-reQuest (ARQ) mechanism
1863 to provide higher reliability of packet delivery. ARQ is related to
1864 packet replication and elimination because there are two independent
1865    paths for packets to arrive at the destination, and if an expected
1866    packet does not arrive on one path the receiver checks for it on
1867 the second path.
1869 Although to date this mechanism is only used by wireless networks,
1870 this may be a technique that would be appropriate for DetNet and so
1871 aspects of the enabling protocol could be co-developed.
1873 For example, in Figure 6, a Track is laid out from a field device in
1874    a 6TiSCH network to an IoT gateway that is located on an IEEE802.1 TSN
1875 backbone.
1877 The Replication function in the field device sends a copy of each
1878 packet over two different branches, and the PCE schedules each hop of
1879 both branches so that the two copies arrive in due time at the
1880 gateway. In case of a loss on one branch, hopefully the other copy
1881 of the packet still arrives within the allocated time. If two copies
1882 make it to the IoT gateway, the Elimination function in the gateway
1883 ignores the extra packet and presents only one copy to upper layers.
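The Elimination function described above can be sketched as a per-flow duplicate filter.  The draft does not specify an encoding; detecting duplicates via a sequence number carried with each copy is an assumption of this sketch.

```python
# Sketch of a PRE-style Elimination function: keep the first copy of
# each packet (identified here by an assumed sequence number) and
# drop the duplicate arriving later on the other branch.

class Eliminator:
    def __init__(self, history=64):
        self.history = history
        self.seen = set()
        self.order = []  # FIFO of recently accepted sequence numbers

    def accept(self, seq):
        """True for the first copy, False for any later duplicate."""
        if seq in self.seen:
            return False
        self.seen.add(seq)
        self.order.append(seq)
        if len(self.order) > self.history:  # bound state on a tiny device
            self.seen.discard(self.order.pop(0))
        return True
```

The bounded history reflects the constraint noted elsewhere in this section that LLN devices can store only a small amount of per-peer state.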
1885 At each 6TiSCH hop along the Track, the PCE may schedule more than
1886 one timeSlot for a packet, so as to support Layer-2 retries (ARQ).
1888 In current deployments, a TSCH Track does not necessarily support PRE
1889 but is systematically multi-path. This means that a Track is
1890 scheduled so as to ensure that each hop has at least two forwarding
1891 solutions, and the forwarding decision is to try the preferred one
1892 and use the other in case of Layer-2 transmission failure as detected
1893 by ARQ.
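The per-hop forwarding decision just described (try the preferred forwarding solution, fall back to the other on Layer-2 failure detected by ARQ) can be sketched as follows; the send callables are hypothetical stand-ins for Layer-2 transmissions that return True when acknowledged.

```python
# Sketch of the multi-path Track forwarding decision with ARQ:
# retry the preferred next hop, then try the alternate branch.

def forward(packet, send_preferred, send_alternate, retries=1):
    """Return True if some branch acknowledged the packet."""
    for _ in range(retries + 1):
        if send_preferred(packet):   # ACK received on preferred hop
            return True
    return send_alternate(packet)    # Layer-2 failure: use other branch
```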
1895 5.3.2. Schedule Management by a PCE
1897 A common feature of 6TiSCH and DetNet is the action of a PCE to
1898 configure paths through the network. Specifically, what is needed is
1899 a protocol and data model that the PCE will use to get/set the
1900 relevant configuration from/to the devices, as well as perform
1901 operations on the devices. We expect that this protocol will be
1902 developed by DetNet with consideration for its reuse by 6TiSCH. The
1903 remainder of this section provides a bit more context from the 6TiSCH
1904 side.
1906 5.3.2.1. PCE Commands and 6TiSCH CoAP Requests
1908    A 6TiSCH device is not expected to place the request for bandwidth
1909    between itself and another device in the network.  Rather, an
1910    operation control system invoked through a human interface specifies
1911    the required traffic specification (in terms of latency and
1912    reliability) and the end nodes.  Based on this information, the PCE
1913    must compute a path between the end nodes and provision the network
1913 compute a path between the end nodes and provision the network with
1914 per-flow state that describes the per-hop operation for a given
1915 packet, the corresponding timeslots, and the flow identification that
1916 enables recognizing that a certain packet belongs to a certain path,
1917 etc.
1919 For a static configuration that serves a certain purpose for a long
1920 period of time, it is expected that a node will be provisioned in one
1921 shot with a full schedule, which incorporates the aggregation of its
1922    behavior for multiple paths.  6TiSCH expects that the programming of
1923    the schedule will be done over CoAP as discussed in
1924 [I-D.ietf-6tisch-coap].
1926 6TiSCH expects that the PCE commands will be issued directly as CoAP
1927 requests or be mapped back and forth into CoAP by a gateway function
1928 at the edge of the 6TiSCH network. For instance, it is possible that
1929 a mapping entity on the backbone transforms a non-CoAP protocol such
1930 as PCEP into the RESTful interfaces that the 6TiSCH devices support.
1931 This architecture will be refined to comply with DetNet
1932 [I-D.finn-detnet-architecture] when the work is formalized. Related
1933 information about 6TiSCH can be found at
1934 [I-D.ietf-6tisch-6top-interface] and RPL [RFC6550].
1936 If it appears that a path through the network does not perform as
1937 expected, a protocol may be used to update the state in the devices,
1938    but in 6TiSCH that flow was not designed and no protocol was
1939    selected; it is expected that DetNet will determine the appropriate
1940    end-to-end protocols to be used in that case.
1942 A "slotFrame" is the base object that the PCE needs to manipulate to
1943 program a schedule into an LLN node ([I-D.ietf-6tisch-architecture]).
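A slotFrame can be pictured as a matrix of cells, each pinning a transmission to a timeslot and channel offset.  The model below follows the TSCH matrix concept from the 6TiSCH architecture, but the field names and structure are illustrative assumptions, not the 6TiSCH data model.

```python
# Hypothetical model of the slotFrame object a PCE manipulates to
# program a schedule into an LLN node.

from dataclasses import dataclass, field

@dataclass
class Cell:
    slot_offset: int      # position within the slotframe cycle
    channel_offset: int   # column in the channel-hopping matrix
    neighbor: str         # MAC address of the peer device
    is_tx: bool           # transmit (True) or receive (False) cell

@dataclass
class SlotFrame:
    length: int                       # number of timeslots per cycle
    cells: list = field(default_factory=list)

    def add_cell(self, cell: Cell):
        if cell.slot_offset >= self.length:
            raise ValueError("slot offset outside slotframe")
        self.cells.append(cell)
```

Under this picture, provisioning a node "in one shot" amounts to the PCE writing one such structure, aggregating the node's cells for all of its paths.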
1945 The PCE should be able to read energy data from devices, and compute
1946 paths that will implement policies on how energy in devices is
1947 consumed, for instance to ensure that the spent energy does not
1948    exceed the available energy over a period of time.
1950 6TiSCH devices can discover their neighbors over the radio using a
1951 mechanism such as beacons, but even though the neighbor information
1952 is available in the 6TiSCH interface data model, 6TiSCH does not
1953 describe a protocol to proactively push the neighborhood information
1954    to a PCE.  DetNet should define this protocol, and it should
1955 operate over CoAP. The protocol should be able to carry multiple
1956 metrics, in particular the same metrics as used for RPL operations
1957    [RFC6551].
1959 5.3.2.2. 6TiSCH IP Interface
1961 "6top" ([I-D.wang-6tisch-6top-sublayer]) is a logical link control
1962 sitting between the IP layer and the TSCH MAC layer which provides
1963 the link abstraction that is required for IP operations. The 6top
1964 data model and management interfaces are further discussed in
1965 [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].
1967 An IP packet that is sent along a 6TiSCH path uses the Differentiated
1968 Services Per-Hop-Behavior Group called Deterministic Forwarding, as
1969 described in [I-D.svshah-tsvwg-deterministic-forwarding].
1971 5.3.3. 6TiSCH Security Considerations
1973 On top of the classical requirements for protection of control
1974 signaling, it must be noted that 6TiSCH networks operate on limited
1975 resources that can be depleted rapidly in a DoS attack on the system,
1976 for instance by placing a rogue device in the network, or by
1977 obtaining management control and setting up unexpected additional
1978 paths.
1980 5.4. Wireless Industrial Asks
1982 6TiSCH depends on DetNet to define:
1984 o Configuration (state) and operations for deterministic paths
1986 o End-to-end protocols for deterministic forwarding (tagging, IP)
1988 o Protocol for packet replication and elimination
1990 o Protocol for packet automatic retries (ARQ) (specific to wireless)
1992 6. Cellular Radio Use Cases
1994 6.1. Use Case Description
1996 This use case describes the application of deterministic networking
1997 in the context of cellular telecom transport networks. Important
1998 elements include time synchronization, clock distribution, and ways
1999 of establishing time-sensitive streams for both Layer-2 and Layer-3
2000 user plane traffic.
2002 6.1.1. Network Architecture
2004 Figure 7 illustrates a typical 3GPP-defined cellular network
2005 architecture, which includes "Fronthaul" and "Midhaul" network
2006 segments. The "Fronthaul" is the network connecting base stations
2007 (baseband processing units) to the remote radio heads (antennas).
2008 The "Midhaul" is the network inter-connecting base stations (or small
2009 cell sites).
2011 In Figure 7, "eNB" ("E-UTRAN Node B") is the hardware that is
2012 connected to the mobile phone network and communicates directly
2013 with mobile handsets ([TS36300]).
2015 Y (remote radio heads (antennas))
2016 \
2017 Y__ \.--. .--. +------+
2018 \_( `. +---+ _(Back`. | 3GPP |
2019 Y------( Front )----|eNB|----( Haul )----| core |
2020 ( ` .Haul ) +---+ ( ` . ) ) | netw |
2021 /`--(___.-' \ `--(___.-' +------+
2022 Y_/ / \.--. \
2023 Y_/ _( Mid`. \
2024 ( Haul ) \
2025 ( ` . ) ) \
2026 `--(___.-'\_____+---+ (small cell sites)
2027 \ |SCe|__Y
2028 +---+ +---+
2029 Y__|eNB|__Y
2030 +---+
2031 Y_/ \_Y ("local" radios)
2033 Figure 7: Generic 3GPP-based Cellular Network Architecture
2035 The available processing time for Fronthaul networking overhead is
2036 limited to the available time after the baseband processing of the
2037 radio frame has completed. For example, in Long Term Evolution (LTE)
2038 radio, processing of a radio frame is allocated 3ms, but typically
2039 the processing completes much earlier (<400us), allowing the
2040 remaining time to be used by the Fronthaul network. This ultimately
2041 determines the distance the remote radio heads can be located from
2042 the base stations (200us equals roughly 40 km of optical fiber-based
2043 transport; thus the round trip time is 2*200us = 400us).
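The reach figure above can be checked with a back-of-envelope sketch. The ~5 us/km one-way propagation delay in optical fiber is an assumed typical value (refractive index ~1.5), not a figure from this document:

```python
# Back-of-envelope check of the fiber-reach figure above.  The
# 5 us/km one-way propagation delay is an assumed typical value
# for optical fiber, not taken from the draft.
PROPAGATION_US_PER_KM = 5.0

def max_rrh_distance_km(one_way_budget_us):
    """One-way fiber reach that fits within the given delay budget."""
    return one_way_budget_us / PROPAGATION_US_PER_KM

# A 200 us one-way budget (400 us round trip) allows roughly 40 km.
print(max_rrh_distance_km(200))  # 40.0
```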
2045 The remainder of the "maximum delay budget" is consumed by all nodes
2046 and buffering between the remote radio head and the baseband
2047 processing, plus the distance-incurred delay.
2049 The baseband processing time and the available "delay budget" for the
2050 fronthaul is likely to change in the forthcoming "5G" due to reduced
2051 radio round trip times and other architectural and service
2052 requirements [NGMN].
2054 6.1.2. Time Synchronization Requirements
2056 Fronthaul time synchronization requirements are given by [TS25104],
2057 [TS36104], [TS36211], and [TS36133]. These can be summarized for the
2058 current 3GPP LTE-based networks as:
2060 Delay Accuracy:
2061 +-8ns (i.e. +-1/32 Tc, where Tc is the UMTS Chip time of 1/3.84
2062 MHz) resulting in a round trip accuracy of +-16ns. The value is
2063 this low to meet the 3GPP Timing Alignment Error (TAE) measurement
2064 requirements.
2066 Packet Delay Variation:
2067 Packet Delay Variation (PDV aka Jitter aka Timing Alignment Error)
2068 is problematic to Fronthaul networks and must be minimized. If
2069 the transport network cannot guarantee low enough PDV then
2070 additional buffering has to be introduced at the edges of the
2071 network to buffer out the jitter. Buffering is not desirable as
2072 it reduces the total available delay budget.
2074 * For multiple input multiple output (MIMO) or TX diversity
2075 transmissions, at each carrier frequency, TAE shall not exceed
2076 65 ns (i.e. 1/4 Tc).
2078 * For intra-band contiguous carrier aggregation, with or without
2079 MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2
2080 Tc).
2082 * For intra-band non-contiguous carrier aggregation, with or
2083 without MIMO or TX diversity, TAE shall not exceed 260 ns (i.e.
2084 one Tc).
2086 * For inter-band carrier aggregation, with or without MIMO or TX
2087 diversity, TAE shall not exceed 260 ns.
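The TAE limits listed above all derive from the UMTS chip time Tc = 1/3.84 MHz; a small sketch recomputing them (the rounding to whole-nanosecond values mirrors the figures quoted above):

```python
# Recompute the TAE limits above from the UMTS chip time
# Tc = 1/3.84 MHz; the draft quotes them rounded to whole ns.
TC_NS = 1e9 / 3.84e6           # ~260.4 ns

print(round(TC_NS / 32))   # 8   -> +-8 ns delay accuracy
print(round(TC_NS / 4))    # 65  -> MIMO / TX diversity limit
print(round(TC_NS / 2))    # 130 -> intra-band contiguous CA
print(round(TC_NS))        # 260 -> intra-band non-contiguous CA
```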
2089 Transport link contribution to radio frequency error:
2090 +-2 PPB. This value is considered to be "available" for the
2091 Fronthaul link out of the total 50 PPB budget reserved for the
2092 radio interface. Note: the reason that the transport link
2093 contributes to radio frequency error is as follows. The current
2094 way of doing Fronthaul is to drive the remote radio head directly
2095 from the radio unit. The remote radio head is essentially a
2096 passive device (without buffering, etc.). The transport drives
2097 the antenna directly by feeding it with samples, and everything
2098 the transport adds is introduced to the radio as-is. So if the
2099 transport causes additional frequency error, it shows up
2100 immediately on the radio as well.
2102 The above-listed time synchronization requirements are difficult to
2103 meet with point-to-point connected networks, and even more difficult
2104 when the network includes multiple hops. It is expected that
2105 networks must include buffering at the ends of the connections, as
2106 imposed by the jitter requirements, since trying to meet the jitter
2107 requirements in every intermediate node is likely to be too costly.
2108 However, every measure to reduce jitter and delay on the path makes
2109 it easier to meet the end-to-end requirements.
2111 In order to meet the timing requirements both senders and receivers
2112 must remain time synchronized, demanding very accurate clock
2113 distribution, for example support for IEEE 1588 transparent clocks in
2114 every intermediate node.
2116 In cellular networks from the LTE radio era onward, phase
2117 synchronization is needed in addition to frequency synchronization
2118 ([TS36300], [TS23401]).
2120 6.1.3. Time-Sensitive Stream Requirements
2122 In addition to the time synchronization requirements listed in
2123 Section 6.1.2, Fronthaul networks assume practically
2124 error-free transport. The maximum bit error rate (BER) has been
2125 defined to be 10^-12. When packetized, this implies a packet
2126 error rate (PER) of 2.4*10^-9 (assuming ~300-byte packets).
2127 Retransmitting lost packets and/or using forward error correction
2128 (FEC) to circumvent bit errors is practically impossible due to the
2129 additional delay incurred. Using redundant streams for better
2130 guarantees for delivery is also practically impossible in many cases
2131 due to high bandwidth requirements of Fronthaul networks. For
2132 instance, current uncompressed CPRI bandwidth expansion ratio is
2133 roughly 20:1 compared to the IP layer user payload it carries.
2134 Protection switching is also a candidate but current technologies for
2135 the path switch are too slow. We do not currently know of a better
2136 solution for this issue.
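The PER figure quoted above follows from the BER under an independent-bit-error assumption; a minimal sketch:

```python
# Derive the quoted PER from the BER, assuming independent bit
# errors: a ~300-byte packet is lost if any of its 2400 bits is
# corrupted.
BER = 1e-12
BITS_PER_PACKET = 300 * 8

per = 1 - (1 - BER) ** BITS_PER_PACKET   # ~= BITS_PER_PACKET * BER
print(per)   # ~2.4e-9
```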
2138 Fronthaul links are assumed to be symmetric, and all Fronthaul
2139 streams (i.e. those carrying radio data) have equal priority and
2140 cannot delay or pre-empt each other. This implies that the network
2141 must guarantee that each time-sensitive flow meets its schedule.
2143 6.1.4. Security Considerations
2145 Establishing time-sensitive streams in the network entails reserving
2146 networking resources for long periods of time. It is important that
2147 these reservation requests be authenticated to prevent malicious
2148 reservation attempts from hostile nodes (or accidental
2149 misconfiguration). This is particularly important in the case where
2150 the reservation requests span administrative domains. Furthermore,
2151 the reservation information itself should be digitally signed to
2152 reduce the risk of a legitimate node pushing a stale or hostile
2153 configuration into another networking node.
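Purely as an illustration of the authentication and signing requirements above (the field names, shared-key scheme, and 60-second freshness window below are assumptions made for this sketch, not anything defined by DetNet):

```python
# Illustrative only: authenticate a stream-reservation request and
# reject stale replays.  Field names, the shared key, and the
# freshness window are assumptions made for this sketch.
import hmac, hashlib, json, time

KEY = b"example-provisioning-key"   # placeholder; real networks need key mgmt

def sign_reservation(body, now=None):
    """Timestamp the reservation and compute a MAC over its canonical form."""
    body = dict(body, ts=int(now if now is not None else time.time()))
    blob = json.dumps(body, sort_keys=True).encode()
    return body, hmac.new(KEY, blob, hashlib.sha256).hexdigest()

def verify_reservation(body, tag, max_age_s=60, now=None):
    """Accept only fresh, untampered reservation requests."""
    now = now if now is not None else time.time()
    blob = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(KEY, blob, hashlib.sha256).hexdigest()
    return (now - body["ts"] <= max_age_s            # not stale
            and hmac.compare_digest(tag, expected))  # not tampered with

body, tag = sign_reservation({"flow": "fronthaul-1", "max_latency_us": 65})
print(verify_reservation(body, tag))   # True
```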
2155 6.2. Cellular Radio Networks Today
2157 Today's Fronthaul networks typically consist of:
2159 o Dedicated point-to-point fiber connections
2161 o Proprietary protocols and framings
2163 o Custom equipment and no real networking
2165 Today's Midhaul and Backhaul networks typically consist of:
2167 o Mostly normal IP networks, MPLS-TP, etc.
2169 o Clock distribution and sync using 1588 and SyncE
2171 Telecommunication networks in the cellular domain are already heading
2172 towards transport networks where precise time synchronization support
2173 is one of the basic building blocks. While the transport networks
2174 themselves have practically transitioned to all-IP packet based
2175 networks to meet the bandwidth and cost requirements, highly accurate
2176 clock distribution has become a challenge.
2178 Transport networks in the cellular domain are typically based on
2179 Time Division Multiplexing (TDM) and provide frequency
2180 synchronization capabilities as a part of the transport media.
2181 Alternatively other technologies such as Global Positioning System
2182 (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].
2184 Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985]
2185 for legacy transport support) have become popular tools to build and
2186 manage new all-IP Radio Access Networks (RAN)
2187 [I-D.kh-spring-ip-ran-use-case]. Although various timing and
2188 synchronization optimizations have already been proposed and
2189 implemented, including 1588 PTP enhancements
2190 [I-D.ietf-tictoc-1588overmpls][I-D.mirsky-mpls-residence-time],
2191 these solutions are not necessarily sufficient for the forthcoming
2192 RAN architectures, nor do they guarantee the more stringent time-
2193 synchronization requirements [CPRI]. There are also existing
2194 solutions for TDM over IP [RFC5087] [RFC4553] or Ethernet transports
[RFC5086].
2196 6.3. Cellular Radio Networks Future
2198 We would like to see the following in future Cellular Radio networks:
2200 o Unified standards-based transport protocols and standard
2201 networking equipment that can make use of underlying deterministic
2202 link-layer services
2204 o Unified and standards-based network management systems and
2205 protocols in all parts of the network (including Fronthaul)
2207 New radio access network deployment models and architectures may
2208 require time sensitive networking services with strict requirements
2209 on other parts of the network that previously were not considered to
2210 be packetized at all. Time and synchronization support is
2211 already topical for Backhaul and Midhaul packet networks [MEF], and
2212 is becoming a real issue for Fronthaul networks. Specifically in the
2213 Fronthaul networks the timing and synchronization requirements can be
2214 extreme for packet based technologies, for example, on the order of
2215 sub +-20 ns packet delay variation (PDV) and frequency accuracy of
2216 +-0.002 PPM [Fronthaul].
2218 The actual transport protocols and/or solutions to establish required
2219 transport "circuits" (pinned-down paths) for Fronthaul traffic are
2220 still undefined. Those are likely to include (but are not limited
2221 to) solutions directly over Ethernet, over IP, and MPLS/PseudoWire
2222 transport.
2224 Even the current time-sensitive networking features may not be
2225 sufficient for Fronthaul traffic. Therefore, having specific
2226 profiles that take the requirements of Fronthaul into account is
2227 desirable [IEEE8021CM].
2229 The most relevant existing work for time-sensitive
2230 networking has been done for Ethernet [TSNTG], which specifies the
2231 use of the IEEE 1588 Precision Time Protocol (PTP) [IEEE1588] in the
2232 context of IEEE 802.1D and IEEE 802.1Q. While IEEE 802.1AS
2233 [IEEE8021AS] specifies a Layer-2 time synchronization service, other
2234 specifications, such as IEEE 1722 [IEEE1722], specify Ethernet-based
2235 Layer-2 transport for time-sensitive streams. New promising work
2236 seeks to enable the transport of time-sensitive fronthaul streams in
2237 Ethernet bridged networks [IEEE8021CM]. Similarly to IEEE 1722,
2238 there is an ongoing standardization effort to define a Layer-2
2239 transport encapsulation format for transporting radio over Ethernet
2240 (RoE) in the IEEE 1904.3 Task Force [IEEE19043].
2242 All-IP RANs and various "haul" networks would benefit from time
2243 synchronization and time-sensitive transport services. Although
2244 Ethernet appears to be the unifying technology for the transport
2245 there is still a disconnect in providing Layer-3 services. The
2246 protocol stack typically has a number of layers below the Ethernet
2247 Layer-2 that is visible to the Layer-3 IP transport. It is not
2248 uncommon that on top of the lowest-layer (optical) transport there is
2249 a first layer of Ethernet, followed by one or more layers of MPLS,
2250 PseudoWires and/or other tunneling protocols, finally carrying the
2251 Ethernet layer visible to the user plane IP traffic. While there are
2252 existing technologies, especially in the MPLS/PWE space, to establish
2253 circuits through routed and switched networks, there is no way to
2254 signal the time synchronization and time-sensitive stream
2255 requirements/reservations for Layer-3 flows such that the entire
2256 transport stack, including the Ethernet layers that need to be
2257 configured, is addressed.
2259 Furthermore, not all "user plane" traffic will be IP. Therefore, the
2260 same solution must also address the use cases where the user plane
2261 traffic is yet another layer of Ethernet frames. There is existing
2262 work describing the problem statement
2263 [I-D.finn-detnet-problem-statement] and the architecture
2264 [I-D.finn-detnet-architecture] for deterministic networking (DetNet)
2265 that targets solutions for time-sensitive (IP/transport) streams with
2266 deterministic properties over Ethernet-based switched networks.
2268 6.4. Cellular Radio Networks Asks
2270 A standard data plane transport specification which is:
2272 o Unified among all *hauls
2274 o Deployed in a highly deterministic network environment
2276 Standard data flow information models that are:
2278 o Aware of the time sensitivity and constraints of the target
2279 networking environment
2281 o Aware of underlying deterministic networking services (e.g. on the
2282 Ethernet layer)
2284 Mapping the Fronthaul requirements to IETF DetNet
2285 [I-D.finn-detnet-architecture] Section 3 "Providing the DetNet
2286 Quality of Service", the relevant features are:
2288 o Zero congestion loss.
2290 o Pinned-down paths.
2292 7. Cellular Coordinated Multipoint Processing (CoMP)
2294 7.1. Use Case Description
2296 In cellular wireless communication systems, Inter-Site Coordinated
2297 Multipoint Processing (CoMP, see [CoMP]) is a technique implemented
2298 within a cell site which improves system efficiency and user quality
2299 of experience by significantly improving throughput in the cell-edge
2300 region (i.e. at the edges of that cell site's radio coverage area).
2301 CoMP techniques depend on deterministic high-reliability
2302 communication between cell sites; however, such connections today are
2303 IP-based and in current mobile networks cannot meet the QoS
2304 requirements, so CoMP is an emerging technology which can benefit
2305 from DetNet.
2307 Here we consider the JT (Joint Transmit) application for CoMP, which
2308 provides the highest performance gain (compared to other
2309 applications).
2311 7.1.1. CoMP Architecture
2313 +--------------------------+
2314 | CoMP |
2315 +--+--------------------+--+
2316 | |
2317 +----------+ +------------+
2318 | Uplink | | Downlink |
2319 +-----+----+ +--------+---+
2320 | |
2321 ------------------- -----------------------
2322 | | | | | |
2323 +---------+ +----+ +-----+ +------------+ +-----+ +-----+
2324 | Joint | | CS | | DPS | | Joint | | CS/ | | DPS |
2325 |Reception| | | | | |Transmission| | CB | | |
2326 +---------+ +----+ +-----+ +------------+ +-----+ +-----+
2327 | |
2328 |----------- |-------------
2329 | | | |
2330 +------------+ +---------+ +----------+ +------------+
2331 | Joint | | Soft | | Coherent | | Non- |
2332 |Equalization| |Combining| | JT | | Coherent JT|
2333 +------------+ +---------+ +----------+ +------------+
2335 Figure 8: Framework of CoMP Technology
2337 As shown in Figure 8, CoMP reception and transmission is a framework
2338 in which multiple geographically distributed antenna nodes cooperate
2339 to improve the performance of the users served in the common
2340 cooperation area. The design principle of CoMP is to extend the
2341 current single-cell to multi-UE (User Equipment) transmission to a
2342 multi-cell-to-multi-UE transmission by base station cooperation.
2344 7.1.2. Delay Sensitivity in CoMP
2346 In contrast to the single-cell scenario, CoMP has delay-sensitive
2347 performance parameters, which are "backhaul latency" and "CSI
2348 (Channel State Information) reporting and accuracy". The essential
2349 feature of CoMP is signaling between eNBs, so the backhaul latency is
2350 the dominating limitation of the CoMP performance. Generally, JT can
2351 benefit from coordinated scheduling (either distributed or
2352 centralized) of different cells if the signaling delay between eNBs
2353 is within 4-10ms. This delay requirement is both rigid and absolute
2354 because any uncertainty in delay will degrade the performance
2355 significantly.
2357 7.2. CoMP Today
2359 Due to the strict sensitivity to latency and synchronization, CoMP
2360 between eNBs has not been deployed yet. The current interface path
2361 between eNBs cannot meet the delay bound because it is usually
2362 IP-based and passes through multiple network hops (this interface is
2363 called "X2", or "eX2" for "enhanced X2"). Today the lack of an
2364 absolute delay guarantee on X2/eX2 traffic is the main obstacle to JT
2365 and multi-eNB coordination.
2367 There is still a lack of a Layer-3 (IP) transport protocol and
2368 signaling capable of low-latency services; current techniques such as
2369 MPLS and PWE focus on establishing circuits using pre-routed paths,
2370 but there is no such signaling for the reservation of time-sensitive
2371 streams.
2373 7.3. CoMP Future
2375 7.3.1. Mobile Industry Overall Goals
2377 [METIS] documents the fundamental challenges as well as overall
2378 technical goals of the 5G mobile and wireless system as the starting
2379 point. These future systems should support (at similar cost and
2380 energy consumption levels as today's system):
2382 o 1000 times higher mobile data volume per area
2384 o 10 times to 100 times higher typical user data rate
2386 o 10 times to 100 times higher number of connected devices
2388 o 10 times longer battery life for low power devices
2390 o 5 times reduced End-to-End (E2E) latency
2391 The current LTE networking system has an E2E latency of less than
2392 20ms [LTE-Latency], which implies around 5ms E2E latency for 5G
2393 networks. Fulfilling these latency demands at similar cost will be
2394 challenging because the system also requires 100x bandwidth and 100x
2395 connected devices; simply adding redundant bandwidth provisioning can
2396 no longer be an efficient solution.
2398 In addition to bandwidth provisioning, reserved critical flows should
2399 not be affected by other flows regardless of network load.
2400 Deterministic networking techniques at both Layer-2 and Layer-3 using
2401 IETF protocol solutions are promising candidates for these scenarios.
2403 7.3.2. CoMP Infrastructure Goals
2405 Inter-site CoMP is one of the key requirements for 5G and is also a
2406 near-term goal for the current 4.5G network architecture. Assuming
2407 the network architecture remains unchanged (i.e. no Fronthaul network
2408 and data flows between eNBs via X2/eX2), we would like to see the
2409 following in the near future:
2411 o Unified protocols and delay-guaranteed forwarding network
2412 equipment that is capable of delivering deterministic latency
2413 services.
2415 o Unified management and protocols which take delay and timing into
2416 account.
2418 o Unified deterministic latency data model and signaling for
2419 resource reservation.
2421 7.4. CoMP Asks
2423 To fully utilize the power of CoMP, the following are required:
2425 o Very tight absolute delay bound (100-500us) within 7-10 hops.
2427 o Standardized data plane with highly deterministic networking
2428 capability.
2430 o Standardized control plane to unify backhaul network elements with
2431 time-sensitive stream reservation signaling.
2433 In addition, a standardized deterministic latency data flow model
2434 that includes:
2436 o Network-aware constraints on the networking environment
2437 o Time-aware description of flow characteristics and network
2438 resources, which may not need to be bandwidth based
2440 o Application-aware description of deterministic latency services.
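To put the first ask in perspective, dividing the end-to-end bound evenly over the hop count (an even split is our simplifying assumption) gives the per-hop budget each node would have to honor:

```python
# Even-split sketch (our assumption) of the per-hop delay budget
# implied by the 100-500 us bound over 7-10 hops quoted above.
def per_hop_budget_us(e2e_bound_us, hops):
    """Per-hop share of the end-to-end delay bound, split evenly."""
    return e2e_bound_us / hops

print(per_hop_budget_us(100, 10))   # 10.0 us/hop at the tight end
print(per_hop_budget_us(500, 7))    # ~71.4 us/hop at the loose end
```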
2442 8. Industrial M2M
2444 8.1. Use Case Description
2446 Industrial Automation in general refers to automation of
2447 manufacturing, quality control and material processing. In this
2448 "machine to machine" (M2M) use case we consider machine units in a
2449 plant floor which periodically exchange data with upstream or
2450 downstream machine modules and/or a supervisory controller within a
2451 local area network.
2453 The actors of M2M communication are Programmable Logic Controllers
2454 (PLCs). Communication between PLCs, and between PLCs and the
2455 supervisory PLC (S-PLC), is achieved via critical control/data
2456 streams (Figure 9).
2458 S (Sensor)
2459 \ +-----+
2460 PLC__ \.--. .--. ---| MES |
2461 \_( `. _( `./ +-----+
2462 A------( Local )-------------( L2 )
2463 ( Net ) ( Net ) +-------+
2464 /`--(___.-' `--(___.-' ----| S-PLC |
2465 S_/ / PLC .--. / +-------+
2466 A_/ \_( `.
2467 (Actuator) ( Local )
2468 ( Net )
2469 /`--(___.-'\
2470 / \ A
2471 S A
2473 Figure 9: Current Generic Industrial M2M Network Architecture
2475 This use case focuses on PLC-related communications; communication
2476 with Manufacturing Execution Systems (MESs) is not addressed.
2478 This use case covers only critical control/data streams; non-critical
2479 traffic between industrial automation applications (such as
2480 communication of state, configuration, set-up, and database
2481 communication) is adequately served by currently available
2482 prioritizing techniques. Such traffic can use up to 80% of the total
2483 bandwidth required. There is also a subset of non-time-critical
2484 traffic that must be reliable even though it is not time sensitive.
2486 In this use case the primary need for deterministic networking is to
2487 provide end-to-end delivery of M2M messages within specific timing
2488 constraints, for example in closed loop automation control. Today
2489 this level of determinism is provided by proprietary networking
2490 technologies. In addition, standard networking technologies are used
2491 to connect the local network to remote industrial automation sites,
2492 e.g. over an enterprise or metro network which also carries other
2493 types of traffic. Therefore, flows that should be forwarded with
2494 deterministic guarantees need to be sustained regardless of the
2495 amount of other flows in those networks.
2497 8.2. Industrial M2M Communication Today
2499 Today, proprietary networks fulfill the needed timing and
2500 availability for M2M networks.
2502 The network topologies used today by industrial automation are
2503 similar to those used by telecom networks: Daisy Chain, Ring, Hub and
2504 Spoke, and Comb (a subset of Daisy Chain).
2506 PLC-related control/data streams are transmitted periodically and
2507 carry either a pre-configured payload or a payload configured during
2508 runtime.
2510 Some industrial applications require time synchronization at the end
2511 nodes. For such time-coordinated PLCs, accuracy of 1 microsecond is
2512 required. Even in the case of "non-time-coordinated" PLCs time sync
2513 may be needed e.g. for timestamping of sensor data.
2515 Industrial network scenarios require advanced security solutions.
2516 Many of the current industrial production networks are physically
2517 separated. Preventing critical flows from being leaked outside a
2518 domain is handled today by filtering policies that are typically
2519 enforced in firewalls.
2521 8.2.1. Transport Parameters
2523 The Cycle Time defines the frequency of message(s) between industrial
2524 actors. The Cycle Time is application dependent, in the range of 1ms
2525 - 100ms for critical control/data streams.
2527 Because industrial applications assume deterministic transport for
2528 critical control/data streams (instead of defining latency and delay
2529 variation parameters), it is sufficient to fulfill an upper bound on
2530 latency (maximum latency). The underlying networking
2531 infrastructure must ensure a maximum end-to-end delivery time of
2532 messages in the range of 100 microseconds to 50 milliseconds,
2533 depending on the control loop application.
2535 The bandwidth requirements of control/data streams are usually
2536 calculated directly from the bytes-per-cycle parameter of the control
2537 loop. For PLC-to-PLC communication one can expect 2 - 32 streams
2538 with packet size in the range of 100 - 700 bytes. For S-PLC to PLCs
2539 the number of streams is higher - up to 256 streams. Usually no more
2540 than 20% of available bandwidth is used for critical control/data
2541 streams. In today's networks 1Gbps links are commonly used.
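As a rough check of the figures above (a sketch; the one-packet-per-stream-per-cycle assumption is ours):

```python
# Worst-case critical-stream bandwidth from the figures above,
# assuming one packet per stream per cycle (our assumption).
def stream_bandwidth_bps(streams, packet_bytes, cycle_time_s):
    """Aggregate bit rate of periodic control/data streams."""
    return streams * packet_bytes * 8 / cycle_time_s

# 32 PLC-to-PLC streams of 700-byte packets at the fastest 1 ms cycle:
worst_case = stream_bandwidth_bps(32, 700, 1e-3)
print(worst_case / 1e6)   # 179.2 Mbit/s -- under 20% of a 1 Gbit/s link
```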
2543 Most PLC control loops are rather tolerant of packet loss; however,
2544 critical control/data streams accept no more than one packet loss per
2545 consecutive communication cycle (i.e. if a packet gets lost in cycle
2546 "n", then the next cycle ("n+1") must be lossless). After two or
2547 more consecutive packet losses the network may be considered to be
2548 "down" by the Application.
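The loss criterion above can be stated as a small check (a sketch; the boolean per-cycle reception log is an assumed representation):

```python
# Sketch of the loss criterion above: one lost cycle is tolerated,
# but two or more consecutive lost cycles mean the network is "down".
# `received` holds one boolean per communication cycle (assumed input).
def network_down(received):
    consecutive_losses = 0
    for ok in received:
        consecutive_losses = 0 if ok else consecutive_losses + 1
        if consecutive_losses >= 2:
            return True
    return False

print(network_down([True, False, True, False, True]))  # False: isolated losses
print(network_down([True, False, False, True]))        # True: cycles n, n+1 lost
```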
2550 As network downtime may impact the whole production system, the
2551 required network availability is rather high (99.999%).
2553 Based on the above parameters we expect that some form of redundancy
2554 will be required for M2M communications, however any individual
2555 solution depends on several parameters including cycle time, delivery
2556 time, etc.
2558 8.2.2. Stream Creation and Destruction
2560 In an industrial environment, critical control/data streams are
2561 created rather infrequently, on the order of ~10 times per day / week
2562 / month. Most of these critical control/data streams get created at
2563 machine startup, however flexibility is also needed during runtime,
2564 for example when adding or removing a machine. Going forward as
2565 production systems become more flexible, we expect a significant
2566 increase in the rate at which streams are created, changed and
2567 destroyed.
2569 8.3. Industrial M2M Future
2571 We would like to see a converged IP-standards-based network with
2572 deterministic properties that can satisfy the timing, security and
2573 reliability constraints described above. Today's proprietary
2574 networks could then be interfaced to such a network via gateways or,
2575 in the case of new installations, devices could be connected directly
2576 to the converged network.
2578 8.4. Industrial M2M Asks
2580 o Converged IP-based network
2582 o Deterministic behavior (bounded latency and jitter)
2584 o High availability (presumably through redundancy) (99.999 %)
2586 o Low message delivery time (100us - 50ms)
2588 o Low packet loss (burstless, 0.1-1 %)
2590 o Precise time synchronization accuracy (1us)
2592 o Security (e.g. prevent critical flows from being leaked between
2593 physically separated networks)
2595 9. Internet-based Applications
2597 9.1. Use Case Description
2599 There are many applications that communicate across the open Internet
2600 that could benefit from guaranteed delivery and bounded latency. The
2601 following are some representative examples.
2603 9.1.1. Media Content Delivery
2605 Media content delivery continues to be an important use of the
2606 Internet, yet users often experience poor quality audio and video due
2607 to the delay and jitter inherent in today's Internet.
2609 9.1.2. Online Gaming
2611 Online gaming is a significant part of the gaming market, however
2612 latency can degrade the end user experience. For example "First
2613 Person Shooter" (FPS) games are highly delay-sensitive.
2615 9.1.3. Virtual Reality
2617 Virtual reality (VR) has many commercial applications including real
2618 estate presentations, remote medical procedures, and so on. Low
2619 latency is critical to interacting with the virtual world because
2620 perceptual delays can cause motion sickness.
2622 9.2. Internet-Based Applications Today
2624 Internet service today is by definition "best effort", with no
2625 guarantees on delivery or bandwidth.
2627 9.3. Internet-Based Applications Future
2629 We imagine an Internet over which we will be able to play video
2630 without glitches and play games without lag.
2632 For online gaming, the maximum round-trip delay can be 100ms, and
2633 stricter for FPS gaming, for which it can be 10-50ms. Transport
2634 delay is the dominant part, with a 5-20ms budget.
2636 For VR, a 1-10ms maximum delay is needed, and the total network
2637 budget is 1-5ms for remote VR.
2639 Flow identification can be used for gaming and VR, i.e. it can
2640 recognize a critical flow and provide appropriate latency bounds.
2642 9.4. Internet-Based Applications Asks
2644 o Unified control and management protocols to handle time-critical
2645 data flow
2647 o Application-aware flow filtering mechanism to recognize the timing
2648 critical flow without doing 5-tuple matching
2650 o Unified control plane to provide low latency service on Layer-3
2651 without changing the data plane
2653 o OAM system and protocols which can help to provide E2E-delay
2654 sensitive service provisioning
2656 10. Use Case Common Elements
2658 Looking at the use cases collectively, the following common desires
2659 for the DetNet-based networks of the future emerge:
2661 o Open standards-based network (replace various proprietary
2662 networks, reduce cost, create multi-vendor market)
2664 o Centrally administered (though such administration may be
2665 distributed for scale and resiliency)
2667 o Integrates L2 (bridged) and L3 (routed) environments (independent
2668 of the Link layer, e.g. can be used with Ethernet, 6TiSCH, etc.)
2670 o Carries both deterministic and best-effort traffic (guaranteed
2671 end-to-end delivery of deterministic flows, deterministic flows
2672 isolated from each other and from best-effort traffic congestion,
2673 unused deterministic BW available to best-effort traffic)
2675 o Ability to add or remove systems from the network with minimal,
2676 bounded service interruption (applications include replacement of
2677 failed devices as well as plug and play)
2679 o Uses standardized data flow information models capable of
2680 expressing deterministic properties (models express device
2681 capabilities, flow properties. Protocols for pushing models from
2682 controller to devices, devices to controller)
2684 o Scalable size (long distances (many km) and short distances
2685 (within a single machine), many hops (radio repeaters, microwave
2686 links, fiber links...) and short hops (single machine))
2688 o Scalable timing parameters and accuracy (bounded latency,
2689 guaranteed worst case maximum, minimum. Low latency, e.g. control
2690 loops may be less than 1ms, but larger for wide area networks)
2692 o High availability (99.9999 percent up time requested, but may be
2693 up to twelve 9s)
2695 o Reliability, redundancy (lives at stake)
2697 o Security (from failures, attackers, misbehaving devices -
2698 sensitive to both packet content and arrival time)
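The flow-model and traffic-mixing elements above can be sketched in miniature. The following Python fragment is purely illustrative and not drawn from any DetNet specification: the field names (max_latency_us, bandwidth_kbps, redundant_paths) and the admit() check are hypothetical, showing how a flow information model could express deterministic properties and how bandwidth not reserved for deterministic flows remains available to best-effort traffic.

```python
from dataclasses import dataclass

@dataclass
class DetFlowSpec:
    """Hypothetical deterministic-flow descriptor (illustrative only)."""
    flow_id: str
    max_latency_us: int   # bounded end-to-end latency, in microseconds
    bandwidth_kbps: int   # bandwidth reserved for this flow
    redundant_paths: int  # disjoint paths (>= 2 for seamless redundancy)

def admit(flows, new_flow, link_capacity_kbps):
    """Admit a deterministic flow only if total reservations fit the link.

    Capacity left after all deterministic reservations is the share
    available to best-effort traffic.
    """
    reserved = sum(f.bandwidth_kbps for f in flows)
    if reserved + new_flow.bandwidth_kbps > link_capacity_kbps:
        return False
    flows.append(new_flow)
    return True

flows = []
admit(flows, DetFlowSpec("loop-1", max_latency_us=900,
                         bandwidth_kbps=2000, redundant_paths=2), 10_000)
best_effort_share = 10_000 - sum(f.bandwidth_kbps for f in flows)
```

A real controller would of course also check per-hop queue schedules against max_latency_us and program the redundant paths; the sketch only captures the bookkeeping that the list items describe.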
2700 11. Acknowledgments
2702 11.1. Pro Audio
2704 This section was derived from draft-gunther-detnet-proaudio-req-01.
2706 The editors would like to acknowledge the help of the following
2707 individuals and the companies they represent:
2709 Jeff Koftinoff, Meyer Sound
2711 Jouni Korhonen, Associate Technical Director, Broadcom
2713 Pascal Thubert, CTAO, Cisco
2715 Kieran Tyrrell, Sienda New Media Technologies GmbH
2717 11.2. Utility Telecom
2719 This section was derived from draft-wetterwald-detnet-utilities-reqs-
2720 02.
2722 Faramarz Maghsoodlou, Ph.D., IoT Connected Industries and Energy
2723 Practice, Cisco
2725 Pascal Thubert, CTAO, Cisco
2727 11.3. Building Automation Systems
2729 This section was derived from draft-bas-usecase-detnet-00.
2731 11.4. Wireless for Industrial
2733 This section was derived from draft-thubert-6tisch-4detnet-01.
2735 This specification derives from the 6TiSCH architecture, which is the
2736 result of multiple interactions, in particular during the 6TiSCH
2737 (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at
2738 the IETF.
2740 The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier
2741 Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael
2742 Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon,
2743 Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey,
2744 Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria
2745 Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation
2746 and various contributions.
2748 11.5. Cellular Radio
2750 This section was derived from draft-korhonen-detnet-telreq-00.
2752 11.6. Industrial M2M
2754 The authors would like to thank Feng Chen and Marcel Kiessling for
2755 their comments and suggestions.
2757 11.7. Internet Applications and CoMP
2759 This section was derived from draft-zha-detnet-use-case-00.
2761 This document has benefited from reviews, suggestions, comments and
2762 proposed text provided by the following members, listed in
2763 alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oliver
2764 Huang.
2766 12. Informative References
2768 [ACE] IETF, "Authentication and Authorization for Constrained
2769 Environments", .
2772 [bacnetip]
2773 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP",
2774 January 1999.
2776 [CCAMP] IETF, "Common Control and Measurement Plane",
2777 .
2779 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND
2780 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_
2781 and_Enhancement_v2.0, March 2015,
2782 .
2785 [CONTENT_PROTECTION]
2786 Olsen, D., "1722a Content Protection", 2012,
2787 .
2790 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI);
2791 Interface Specification", CPRI Specification V6.1, July
2792 2014, .
2795 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification,
2796 Version 1.2", 2012, .
2798 [DICE] IETF, "DTLS In Constrained Environments",
2799 .
2801 [EA12] Evans, P. and M. Annunziata, "Industrial Internet: Pushing
2802 the Boundaries of Minds and Machines", November 2012.
2804 [ESPN_DC2]
2805 Daley, D., "ESPN's DC2 Scales AVB Large", 2014,
2806 .
2809 [flnet] Japan Electrical Manufacturers' Association, "JEMA 1479 -
2810 English Edition", September 2012.
2812 [Fronthaul]
2813 Chen, D. and T. Mustala, "Ethernet Fronthaul
2814 Considerations", IEEE 1904.3, February 2015,
2815 .
2818 [HART] www.hartcomm.org, "Highway Addressable Remote Transducer,
2819 a group of specifications for industrial process and
2820 control devices administered by the HART Foundation".
2822 [I-D.finn-detnet-architecture]
2823 Finn, N., Thubert, P., and M. Teener, "Deterministic
2824 Networking Architecture", draft-finn-detnet-
2825 architecture-03 (work in progress), March 2016.
2827 [I-D.finn-detnet-problem-statement]
2828 Finn, N. and P. Thubert, "Deterministic Networking Problem
2829 Statement", draft-finn-detnet-problem-statement-04 (work
2830 in progress), October 2015.
2832 [I-D.ietf-6tisch-6top-interface]
2833 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
2834 (6top) Interface", draft-ietf-6tisch-6top-interface-04
2835 (work in progress), July 2015.
2837 [I-D.ietf-6tisch-architecture]
2838 Thubert, P., "An Architecture for IPv6 over the TSCH mode
2839 of IEEE 802.15.4", draft-ietf-6tisch-architecture-09 (work
2840 in progress), November 2015.
2842 [I-D.ietf-6tisch-coap]
2843 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and
2844 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work
2845 in progress), March 2015.
2847 [I-D.ietf-6tisch-terminology]
2848 Palattella, M., Thubert, P., Watteyne, T., and Q. Wang,
2849 "Terminology in IPv6 over the TSCH mode of IEEE
2850 802.15.4e", draft-ietf-6tisch-terminology-06 (work in
2851 progress), November 2015.
2853 [I-D.ietf-ipv6-multilink-subnets]
2854 Thaler, D. and C. Huitema, "Multi-link Subnet Support in
2855 IPv6", draft-ietf-ipv6-multilink-subnets-00 (work in
2856 progress), July 2002.
2858 [I-D.ietf-roll-rpl-industrial-applicability]
2859 Phinney, T., Thubert, P., and R. Assimiti, "RPL
2860 applicability in industrial networks", draft-ietf-roll-
2861 rpl-industrial-applicability-02 (work in progress),
2862 October 2013.
2864 [I-D.ietf-tictoc-1588overmpls]
2865 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L.
2866 Montini, "Transporting Timing messages over MPLS
2867 Networks", draft-ietf-tictoc-1588overmpls-07 (work in
2868 progress), October 2015.
2870 [I-D.kh-spring-ip-ran-use-case]
2871 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing
2872 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02
2873 (work in progress), November 2014.
2875 [I-D.mirsky-mpls-residence-time]
2876 Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S.,
2877 and S. Vainshtein, "Residence Time Measurement in MPLS
2878 network", draft-mirsky-mpls-residence-time-07 (work in
2879 progress), July 2015.
2881 [I-D.svshah-tsvwg-deterministic-forwarding]
2882 Shah, S. and P. Thubert, "Deterministic Forwarding PHB",
2883 draft-svshah-tsvwg-deterministic-forwarding-04 (work in
2884 progress), August 2015.
2886 [I-D.thubert-6lowpan-backbone-router]
2887 Thubert, P., "6LoWPAN Backbone Router", draft-thubert-
2888 6lowpan-backbone-router-03 (work in progress), February
2889 2013.
2891 [I-D.wang-6tisch-6top-sublayer]
2892 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
2893 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in
2894 progress), November 2015.
2896 [IEC61850-90-12]
2897 TC57 WG10, IEC., "IEC 61850-90-12 TR: Communication
2898 networks and systems for power utility automation - Part
2899 90-12: Wide area network engineering guidelines", 2015.
2901 [IEC62439-3:2012]
2902 TC65, IEC., "IEC 62439-3: Industrial communication
2903 networks - High availability automation networks - Part 3:
2904 Parallel Redundancy Protocol (PRP) and High-availability
2905 Seamless Redundancy (HSR)", 2012.
2907 [IEEE1588]
2908 IEEE, "IEEE Standard for a Precision Clock Synchronization
2909 Protocol for Networked Measurement and Control Systems",
2910 IEEE Std 1588-2008, 2008,
2911 .
2914 [IEEE1722]
2915 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport
2916 Protocol for Time Sensitive Applications in a Bridged
2917 Local Area Network", IEEE Std 1722-2011, 2011,
2918 .
2921 [IEEE19043]
2922 IEEE Standards Association, "IEEE 1904.3 TF", IEEE 1904.3,
2923 2015, .
2925 [IEEE802.1TSNTG]
2926 IEEE Standards Association, "IEEE 802.1 Time-Sensitive
2927 Networks Task Group", March 2013,
2928 .
2930 [IEEE802154]
2931 IEEE standard for Information Technology, "IEEE std.
2932 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC)
2933 and Physical Layer (PHY) Specifications for Low-Rate
2934 Wireless Personal Area Networks".
2936 [IEEE802154e]
2937 IEEE standard for Information Technology, "IEEE standard
2938 for Information Technology, IEEE std. 802.15.4, Part.
2939 15.4: Wireless Medium Access Control (MAC) and Physical
2940 Layer (PHY) Specifications for Low-Rate Wireless Personal
2941 Area Networks, June 2011 as amended by IEEE std.
2942 802.15.4e, Part. 15.4: Low-Rate Wireless Personal Area
2943 Networks (LR-WPANs) Amendment 1: MAC sublayer", April
2944 2012.
2946 [IEEE8021AS]
2947 IEEE, "Timing and Synchronization (IEEE 802.1AS-2011)",
2948 IEEE 802.1AS-2011, 2011,
2949 .
2952 [IEEE8021CM]
2953 Farkas, J., "Time-Sensitive Networking for Fronthaul",
2954 Unapproved PAR, PAR for a New IEEE Standard;
2955 IEEE P802.1CM, April 2015,
2956 .
2959 [IEEE8021TSN]
2960 IEEE 802.1, "The charter of the TG is to provide the
2961 specifications that will allow time-synchronized low
2962 latency streaming services through 802 networks.", 2016,
2963 .
2965 [IETFDetNet]
2966 IETF, "Charter for IETF DetNet Working Group", 2015,
2967 .
2969 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation",
2970 .
2972 [ISA100.11a]
2973 ISA/ANSI, "Wireless Systems for Industrial Automation:
2974 Process Control and Related Applications - ISA100.11a-2011
2975 - IEC 62734", 2011, .
2978 [ISO7240-16]
2979 ISO, "ISO 7240-16:2007 Fire detection and alarm systems --
2980 Part 16: Sound system control and indicating equipment",
2981 2007, .
2984 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006.
2986 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0",
2987 1994.
2989 [LTE-Latency]
2990 Johnston, S., "LTE Latency: How does it compare to other
2991 technologies", March 2014,
2992 .
2995 [MEF] MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells",
2996 MEF 22.1.1, July 2014,
2997 .
3000 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and
3001 wireless system", ICT-317669-METIS/D1.1 ICT-
3002 317669-METIS/D1.1, April 2013, .
3005 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL
3006 SPECIFICATION V1.1b", December 2006.
3008 [net5G] Ericsson, "5G Radio Access, Challenges for 2020 and
3009 Beyond", Ericsson white paper wp-5g, June 2013,
3010 .
3012 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0,
3013 February 2015, .
3016 [PCE] IETF, "Path Computation Element",
3017 .
3019 [profibus]
3020 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001.
3022 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
3023 Requirement Levels", BCP 14, RFC 2119,
3024 DOI 10.17487/RFC2119, March 1997,
3025 .
3027 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6
3028 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460,
3029 December 1998, .
3031 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
3032 "Definition of the Differentiated Services Field (DS
3033 Field) in the IPv4 and IPv6 Headers", RFC 2474,
3034 DOI 10.17487/RFC2474, December 1998,
3035 .
3037 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
3038 Label Switching Architecture", RFC 3031,
3039 DOI 10.17487/RFC3031, January 2001,
3040 .
3042 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
3043 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
3044 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
3045 .
3047 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
3048 Metric for IP Performance Metrics (IPPM)", RFC 3393,
3049 DOI 10.17487/RFC3393, November 2002,
3050 .
3052 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between
3053 Information Models and Data Models", RFC 3444,
3054 DOI 10.17487/RFC3444, January 2003,
3055 .
3057 [RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)",
3058 RFC 3972, DOI 10.17487/RFC3972, March 2005,
3059 .
3061 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation
3062 Edge-to-Edge (PWE3) Architecture", RFC 3985,
3063 DOI 10.17487/RFC3985, March 2005,
3064 .
3066 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing
3067 Architecture", RFC 4291, DOI 10.17487/RFC4291, February
3068 2006, .
3070 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure-
3071 Agnostic Time Division Multiplexing (TDM) over Packet
3072 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006,
3073 .
3075 [RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903,
3076 DOI 10.17487/RFC4903, June 2007,
3077 .
3079 [RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6
3080 over Low-Power Wireless Personal Area Networks (6LoWPANs):
3081 Overview, Assumptions, Problem Statement, and Goals",
3082 RFC 4919, DOI 10.17487/RFC4919, August 2007,
3083 .
3085 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and
3086 P. Pate, "Structure-Aware Time Division Multiplexed (TDM)
3087 Circuit Emulation Service over Packet Switched Network
3088 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007,
3089 .
3091 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi,
3092 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087,
3093 DOI 10.17487/RFC5087, December 2007,
3094 .
3096 [RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6
3097 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282,
3098 DOI 10.17487/RFC6282, September 2011,
3099 .
3101 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J.,
3102 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur,
3103 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for
3104 Low-Power and Lossy Networks", RFC 6550,
3105 DOI 10.17487/RFC6550, March 2012,
3106 .
3108 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N.,
3109 and D. Barthel, "Routing Metrics Used for Path Calculation
3110 in Low-Power and Lossy Networks", RFC 6551,
3111 DOI 10.17487/RFC6551, March 2012,
3112 .
3114 [RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C.
3115 Bormann, "Neighbor Discovery Optimization for IPv6 over
3116 Low-Power Wireless Personal Area Networks (6LoWPANs)",
3117 RFC 6775, DOI 10.17487/RFC6775, November 2012,
3118 .
3120 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using
3121 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the
3122 Internet of Things (IoT): Problem Statement", RFC 7554,
3123 DOI 10.17487/RFC7554, May 2015,
3124 .
3126 [SRP_LATENCY]
3127 Gunther, C., "Specifying SRP Latency", 2014,
3128 .
3131 [STUDIO_IP]
3132 Mace, G., "IP Networked Studio Infrastructure for
3133 Synchronized & Real-Time Multimedia Transmissions", 2007,
3134 .
3137 [SyncE] ITU-T, "G.8261 : Timing and synchronization aspects in
3138 packet networks", Recommendation G.8261, August 2013,
3139 .
3141 [TEAS] IETF, "Traffic Engineering Architecture and Signaling",
3142 .
3144 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements
3145 for Evolved Universal Terrestrial Radio Access Network
3146 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013.
3148 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception
3149 (FDD)", 3GPP TS 25.104 3.14.0, March 2007.
3151 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access
3152 (E-UTRA); Base Station (BS) radio transmission and
3153 reception", 3GPP TS 36.104 10.11.0, July 2013.
3155 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access
3156 (E-UTRA); Requirements for support of radio resource
3157 management", 3GPP TS 36.133 12.7.0, April 2015.
3159 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access
3160 (E-UTRA); Physical channels and modulation", 3GPP
3161 TS 36.211 10.7.0, March 2013.
3163 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA)
3164 and Evolved Universal Terrestrial Radio Access Network
3165 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300
3166 10.11.0, September 2013.
3168 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3169 Networks Task Group", 2013,
3170 .
3172 [UHD-video]
3173 Holub, P., "Ultra-High Definition Videos and Their
3174 Applications over the Network", The 7th International
3175 Symposium on VICTORIES Project PetrHolub_presentation,
3176 October 2014, .
3179 [WirelessHART]
3180 www.hartcomm.org, "Industrial Communication Networks -
3181 Wireless Communication Network and Communication Profiles
3182 - WirelessHART - IEC 62591", 2010.
3184 Authors' Addresses
3185 Ethan Grossman (editor)
3186 Dolby Laboratories, Inc.
3187 1275 Market Street
3188 San Francisco, CA 94103
3189 USA
3191 Phone: +1 415 645 4726
3192 Email: ethan.grossman@dolby.com
3193 URI: http://www.dolby.com
3195 Craig Gunther
3196 Harman International
3197 10653 South River Front Parkway
3198 South Jordan, UT 84095
3199 USA
3201 Phone: +1 801 568-7675
3202 Email: craig.gunther@harman.com
3203 URI: http://www.harman.com
3205 Pascal Thubert
3206 Cisco Systems, Inc
3207 Building D
3208 45 Allee des Ormes - BP1200
3209 MOUGINS - Sophia Antipolis 06254
3210 FRANCE
3212 Phone: +33 497 23 26 34
3213 Email: pthubert@cisco.com
3215 Patrick Wetterwald
3216 Cisco Systems
3217 45 Allees des Ormes
3218 Mougins 06250
3219 FRANCE
3221 Phone: +33 4 97 23 26 36
3222 Email: pwetterw@cisco.com
3223 Jean Raymond
3224 Hydro-Quebec
3225 1500 University
3226 Montreal H3A3S7
3227 Canada
3229 Phone: +1 514 840 3000
3230 Email: raymond.jean@hydro.qc.ca
3232 Jouni Korhonen
3233 Broadcom Corporation
3234 3151 Zanker Road
3235 San Jose, CA 95134
3236 USA
3238 Email: jouni.nospam@gmail.com
3240 Yu Kaneko
3241 Toshiba
3242 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi
3243 Kanagawa, Japan
3245 Email: yu1.kaneko@toshiba.co.jp
3247 Subir Das
3248 Applied Communication Sciences
3249 150 Mount Airy Road, Basking Ridge
3250 New Jersey, 07920, USA
3252 Email: sdas@appcomsci.com
3254 Yiyong Zha
3255 Huawei Technologies
3257 Email: zhayiyong@huawei.com
3259 Balazs Varga
3260 Ericsson
3261 Konyves Kalman krt. 11/B
3262 Budapest 1097
3263 Hungary
3265 Email: balazs.a.varga@ericsson.com
3266 Janos Farkas
3267 Ericsson
3268 Konyves Kalman krt. 11/B
3269 Budapest 1097
3270 Hungary
3272 Email: janos.farkas@ericsson.com
3274 Franz-Josef Goetz
3275 Siemens
3276 Gleiwitzerstr. 555
3277 Nurnberg 90475
3278 Germany
3280 Email: franz-josef.goetz@siemens.com
3282 Juergen Schmitt
3283 Siemens
3284 Gleiwitzerstr. 555
3285 Nurnberg 90475
3286 Germany
3288 Email: juergen.jues.schmitt@siemens.com