idnits 2.17.1
draft-ietf-detnet-use-cases-04.txt:
Checking boilerplate required by RFC 5378 and the IETF Trust (see
https://trustee.ietf.org/license-info):
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/checklist :
----------------------------------------------------------------------------
No issues found here.
Miscellaneous warnings:
----------------------------------------------------------------------------
== The copyright year in the IETF Trust and authors Copyright Line does not
match the current year
== The document seems to lack the recommended RFC 2119 boilerplate, even if
it appears to use RFC 2119 keywords.
(The document does seem to have the reference to RFC 2119 which the
ID-Checklist requires).
-- The document date (February 22, 2016) is 2984 days in the past. Is this
intentional?
Checking references for intended status: Informational
----------------------------------------------------------------------------
== Unused Reference: 'ACE' is defined on line 3218, but no explicit
reference was found in the text
== Unused Reference: 'DICE' is defined on line 3248, but no explicit
reference was found in the text
== Unused Reference: 'EA12' is defined on line 3251, but no explicit
reference was found in the text
== Unused Reference: 'HART' is defined on line 3268, but no explicit
reference was found in the text
== Unused Reference: 'I-D.thubert-6lowpan-backbone-router' is defined on
line 3336, but no explicit reference was found in the text
== Unused Reference: 'IEC61850-90-12' is defined on line 3346, but no
explicit reference was found in the text
== Unused Reference: 'IEEE8021TSN' is defined on line 3409, but no explicit
reference was found in the text
== Unused Reference: 'IETFDetNet' is defined on line 3415, but no explicit
reference was found in the text
== Unused Reference: 'ISA100' is defined on line 3419, but no explicit
reference was found in the text
== Unused Reference: 'RFC2119' is defined on line 3472, but no explicit
reference was found in the text
== Unused Reference: 'RFC2460' is defined on line 3477, but no explicit
reference was found in the text
== Unused Reference: 'RFC2474' is defined on line 3481, but no explicit
reference was found in the text
== Unused Reference: 'RFC3209' is defined on line 3492, but no explicit
reference was found in the text
== Unused Reference: 'RFC3393' is defined on line 3497, but no explicit
reference was found in the text
== Unused Reference: 'RFC4903' is defined on line 3525, but no explicit
reference was found in the text
== Unused Reference: 'RFC4919' is defined on line 3529, but no explicit
reference was found in the text
== Unused Reference: 'RFC6282' is defined on line 3546, but no explicit
reference was found in the text
== Unused Reference: 'RFC6775' is defined on line 3564, but no explicit
reference was found in the text
== Unused Reference: 'TEAS' is defined on line 3591, but no explicit
reference was found in the text
== Unused Reference: 'UHD-video' is defined on line 3622, but no explicit
reference was found in the text
== Unused Reference: 'WirelessHART' is defined on line 3629, but no
explicit reference was found in the text
== Outdated reference: A later version (-08) exists of
draft-finn-detnet-architecture-02
== Outdated reference: A later version (-05) exists of
draft-finn-detnet-problem-statement-04
== Outdated reference: A later version (-30) exists of
draft-ietf-6tisch-architecture-09
== Outdated reference: A later version (-10) exists of
draft-ietf-6tisch-terminology-06
-- Obsolete informational reference (is this intentional?): RFC 2460
(Obsoleted by RFC 8200)
Summary: 0 errors (**), 0 flaws (~~), 27 warnings (==), 2 comments (--).
Run idnits with the --verbose option for more detailed information about
the items above.
--------------------------------------------------------------------------------
2 Internet Engineering Task Force E. Grossman, Ed.
3 Internet-Draft DOLBY
4 Intended status: Informational C. Gunther
5 Expires: August 25, 2016 HARMAN
6 P. Thubert
7 P. Wetterwald
8 CISCO
9 J. Raymond
10 HYDRO-QUEBEC
11 J. Korhonen
12 BROADCOM
13 Y. Kaneko
14 Toshiba
15 S. Das
16 Applied Communication Sciences
17 Y. Zha
18 HUAWEI
19 B. Varga
20 J. Farkas
21 Ericsson
22 F. Goetz
23 J. Schmitt
24 Siemens
25 February 22, 2016
27 Deterministic Networking Use Cases
28 draft-ietf-detnet-use-cases-04
30 Abstract
32 This draft documents requirements in several diverse industries to
33 establish multi-hop paths for characterized flows with deterministic
34    properties.  In this context, deterministic implies that streams
35    providing guaranteed bandwidth and latency can be established from
36    either a Layer 2 or Layer 3 (IP) interface, and can co-exist on an
37    IP network with best-effort traffic.
39 Additional requirements include optional redundant paths, very high
40 reliability paths, time synchronization, and clock distribution.
41 Industries considered include wireless for industrial applications,
42 professional audio, electrical utilities, building automation
43 systems, radio/mobile access networks, automotive, and gaming.
45    For each case, this document will identify the application,
46    identify representative solutions used today, and identify what
47    new uses an IETF DetNet solution may enable.
49 Status of This Memo
51 This Internet-Draft is submitted in full conformance with the
52 provisions of BCP 78 and BCP 79.
54 Internet-Drafts are working documents of the Internet Engineering
55 Task Force (IETF). Note that other groups may also distribute
56 working documents as Internet-Drafts. The list of current Internet-
57 Drafts is at http://datatracker.ietf.org/drafts/current/.
59 Internet-Drafts are draft documents valid for a maximum of six months
60 and may be updated, replaced, or obsoleted by other documents at any
61 time. It is inappropriate to use Internet-Drafts as reference
62 material or to cite them other than as "work in progress."
64 This Internet-Draft will expire on August 25, 2016.
66 Copyright Notice
68 Copyright (c) 2016 IETF Trust and the persons identified as the
69 document authors. All rights reserved.
71 This document is subject to BCP 78 and the IETF Trust's Legal
72 Provisions Relating to IETF Documents
73 (http://trustee.ietf.org/license-info) in effect on the date of
74 publication of this document. Please review these documents
75 carefully, as they describe your rights and restrictions with respect
76 to this document. Code Components extracted from this document must
77 include Simplified BSD License text as described in Section 4.e of
78 the Trust Legal Provisions and are provided without warranty as
79 described in the Simplified BSD License.
81 Table of Contents
83 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 5
84 2. Pro Audio Use Cases . . . . . . . . . . . . . . . . . . . . . 5
85 2.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 5
86 2.2. Fundamental Stream Requirements . . . . . . . . . . . . . 6
87 2.2.1. Guaranteed Bandwidth . . . . . . . . . . . . . . . . 7
88 2.2.2. Bounded and Consistent Latency . . . . . . . . . . . 7
89 2.2.2.1. Optimizations . . . . . . . . . . . . . . . . . . 8
90 2.3. Additional Stream Requirements . . . . . . . . . . . . . 9
91 2.3.1. Deterministic Time to Establish Streaming . . . . . . 9
92 2.3.2. Use of Unused Reservations by Best-Effort Traffic . . 9
93 2.3.3. Layer 3 Interconnecting Layer 2 Islands . . . . . . . 10
94 2.3.4. Secure Transmission . . . . . . . . . . . . . . . . . 10
95 2.3.5. Redundant Paths . . . . . . . . . . . . . . . . . . . 10
96 2.3.6. Link Aggregation . . . . . . . . . . . . . . . . . . 11
97 2.3.7. Traffic Segregation . . . . . . . . . . . . . . . . . 11
98 2.3.7.1. Packet Forwarding Rules, VLANs and Subnets . . . 11
99 2.3.7.2. Multicast Addressing (IPv4 and IPv6) . . . . . . 11
100 2.4. Integration of Reserved Streams into IT Networks . . . . 12
101 2.5. Security Considerations . . . . . . . . . . . . . . . . . 12
102 2.5.1. Denial of Service . . . . . . . . . . . . . . . . . . 12
103 2.5.2. Control Protocols . . . . . . . . . . . . . . . . . . 12
104 2.6. A State-of-the-Art Broadcast Installation Hits Technology
105 Limits . . . . . . . . . . . . . . . . . . . . . . . . . 13
106 3. Utility Telecom Use Cases . . . . . . . . . . . . . . . . . . 13
107 3.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 13
108 3.2. Telecommunications Trends and General telecommunications
109 Requirements . . . . . . . . . . . . . . . . . . . . . . 14
110 3.2.1. General Telecommunications Requirements . . . . . . . 14
111 3.2.1.1. Migration to Packet-Switched Network . . . . . . 15
112 3.2.2. Applications, Use cases and traffic patterns . . . . 16
113 3.2.2.1. Transmission use cases . . . . . . . . . . . . . 16
114 3.2.2.2. Distribution use case . . . . . . . . . . . . . . 26
115 3.2.2.3. Generation use case . . . . . . . . . . . . . . . 29
116 3.2.3. Specific Network topologies of Smart Grid
117 Applications . . . . . . . . . . . . . . . . . . . . 30
118 3.2.4. Precision Time Protocol . . . . . . . . . . . . . . . 31
119 3.3. IANA Considerations . . . . . . . . . . . . . . . . . . . 32
120 3.4. Security Considerations . . . . . . . . . . . . . . . . . 32
121 3.4.1. Current Practices and Their Limitations . . . . . . . 32
122 3.4.2. Security Trends in Utility Networks . . . . . . . . . 34
123 4. Building Automation Systems . . . . . . . . . . . . . . . . . 35
124 4.1. Use Case Description . . . . . . . . . . . . . . . . . . 35
125 4.2. Building Automation Systems Today . . . . . . . . . . . . 36
126 4.2.1. BAS Architecture . . . . . . . . . . . . . . . . . . 36
127 4.2.2. BAS Deployment Model . . . . . . . . . . . . . . . . 37
128 4.2.3. Use Cases for Field Networks . . . . . . . . . . . . 39
129 4.2.3.1. Environmental Monitoring . . . . . . . . . . . . 39
130 4.2.3.2. Fire Detection . . . . . . . . . . . . . . . . . 39
131 4.2.3.3. Feedback Control . . . . . . . . . . . . . . . . 40
132 4.2.4. Security Considerations . . . . . . . . . . . . . . . 40
133 4.3. BAS Future . . . . . . . . . . . . . . . . . . . . . . . 40
134 4.4. BAS Asks . . . . . . . . . . . . . . . . . . . . . . . . 41
135 5. Wireless for Industrial Use Cases . . . . . . . . . . . . . . 41
136 5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 41
137 5.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 42
138 5.3. 6TiSCH Overview . . . . . . . . . . . . . . . . . . . . . 43
139 5.3.1. TSCH and 6top . . . . . . . . . . . . . . . . . . . . 46
140 5.3.2. SlotFrames and Priorities . . . . . . . . . . . . . . 46
141 5.3.3. Schedule Management by a PCE . . . . . . . . . . . . 46
142 5.3.4. Track Forwarding . . . . . . . . . . . . . . . . . . 47
143 5.3.4.1. Transport Mode . . . . . . . . . . . . . . . . . 49
144 5.3.4.2. Tunnel Mode . . . . . . . . . . . . . . . . . . . 50
145 5.3.4.3. Tunnel Metadata . . . . . . . . . . . . . . . . . 51
146 5.4. Operations of Interest for DetNet and PCE . . . . . . . . 51
147 5.4.1. Packet Marking and Handling . . . . . . . . . . . . . 52
148 5.4.1.1. Tagging Packets for Flow Identification . . . . . 52
149 5.4.1.2. Replication, Retries and Elimination . . . . . . 52
150 5.4.1.3. Differentiated Services Per-Hop-Behavior . . . . 53
151 5.4.2. Topology and capabilities . . . . . . . . . . . . . . 53
152 5.5. Security Considerations . . . . . . . . . . . . . . . . . 54
153 6. Cellular Radio Use Cases . . . . . . . . . . . . . . . . . . 54
154 6.1. Use Case Description . . . . . . . . . . . . . . . . . . 54
155 6.1.1. Network Architecture . . . . . . . . . . . . . . . . 54
156 6.1.2. Time Synchronization Requirements . . . . . . . . . . 55
157 6.1.3. Time-Sensitive Stream Requirements . . . . . . . . . 57
158 6.1.4. Security Considerations . . . . . . . . . . . . . . . 57
159 6.2. Cellular Radio Networks Today . . . . . . . . . . . . . . 58
160 6.3. Cellular Radio Networks Future . . . . . . . . . . . . . 58
161 6.4. Cellular Radio Networks Asks . . . . . . . . . . . . . . 60
162 7. Cellular Coordinated Multipoint Processing (CoMP) . . . . . . 60
163 7.1. Use Case Description . . . . . . . . . . . . . . . . . . 60
164 7.1.1. CoMP Architecture . . . . . . . . . . . . . . . . . . 61
165 7.1.2. Delay Sensitivity in CoMP . . . . . . . . . . . . . . 62
166 7.2. CoMP Today . . . . . . . . . . . . . . . . . . . . . . . 62
167 7.3. CoMP Future . . . . . . . . . . . . . . . . . . . . . . . 62
168 7.3.1. Mobile Industry Overall Goals . . . . . . . . . . . . 62
169 7.3.2. CoMP Infrastructure Goals . . . . . . . . . . . . . . 63
170 7.4. CoMP Asks . . . . . . . . . . . . . . . . . . . . . . . . 63
171 8. Industrial M2M . . . . . . . . . . . . . . . . . . . . . . . 64
172 8.1. Use Case Description . . . . . . . . . . . . . . . . . . 64
173 8.2. Industrial M2M Communication Today . . . . . . . . . . . 65
174 8.2.1. Transport Parameters . . . . . . . . . . . . . . . . 65
175 8.2.2. Stream Creation and Destruction . . . . . . . . . . . 66
176 8.3. Industrial M2M Future . . . . . . . . . . . . . . . . . . 66
177 8.4. Industrial M2M Asks . . . . . . . . . . . . . . . . . . . 67
178 9. Internet-based Applications . . . . . . . . . . . . . . . . . 67
179 9.1. Use Case Description . . . . . . . . . . . . . . . . . . 67
180 9.1.1. Media Content Delivery . . . . . . . . . . . . . . . 67
181 9.1.2. Online Gaming . . . . . . . . . . . . . . . . . . . . 67
182 9.1.3. Virtual Reality . . . . . . . . . . . . . . . . . . . 67
183 9.2. Internet-Based Applications Today . . . . . . . . . . . . 68
184 9.3. Internet-Based Applications Future . . . . . . . . . . . 68
185 9.4. Internet-Based Applications Asks . . . . . . . . . . . . 68
186 10. Use Case Common Elements . . . . . . . . . . . . . . . . . . 68
187 11. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 69
188 11.1. Pro Audio . . . . . . . . . . . . . . . . . . . . . . . 69
189 11.2. Utility Telecom . . . . . . . . . . . . . . . . . . . . 70
190 11.3. Building Automation Systems . . . . . . . . . . . . . . 70
191 11.4. Wireless for Industrial . . . . . . . . . . . . . . . . 70
192 11.5. Cellular Radio . . . . . . . . . . . . . . . . . . . . . 70
193 11.6. Industrial M2M . . . . . . . . . . . . . . . . . . . . . 70
194 11.7. Other . . . . . . . . . . . . . . . . . . . . . . . . . 70
195 12. Informative References . . . . . . . . . . . . . . . . . . . 71
196 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 79
198 1. Introduction
200 This draft presents use cases from diverse industries which have in
201 common a need for deterministic streams, but which also differ
202 notably in their network topologies and specific desired behavior.
203 Together, they provide broad industry context for DetNet and a
204 yardstick against which proposed DetNet designs can be measured (to
205   what extent does a proposed design satisfy these various use cases?).
207   For DetNet, use cases explicitly do not define requirements; the
208 DetNet WG will consider the use cases, decide which elements are in
209 scope for DetNet, and the results will be incorporated into future
210 drafts. Similarly, the DetNet use case draft explicitly does not
211 suggest any specific design, architecture or protocols, which will be
212 topics of future drafts.
214 We present for each use case the answers to the following questions:
216 o What is the use case?
218 o How is it addressed today?
220 o How would you like it to be addressed in the future?
222 o What do you want the IETF to deliver?
224 The level of detail in each use case should be sufficient to express
225 the relevant elements of the use case, but not more.
227 At the end we consider the use cases collectively, and examine the
228 most significant goals they have in common.
230 2. Pro Audio Use Cases
232 2.1. Introduction
234 The professional audio and video industry includes music and film
235 content creation, broadcast, cinema, and live exposition as well as
236 public address, media and emergency systems at large venues
237 (airports, stadiums, churches, theme parks). These industries have
238 already gone through the transition of audio and video signals from
239   analog to digital; however, the interconnect systems remain primarily
240 point-to-point with a single (or small number of) signals per link,
241 interconnected with purpose-built hardware.
243   These industries are now attempting to transition to packet-based
244 infrastructure for distributing audio and video in order to reduce
245 cost, increase routing flexibility, and integrate with existing IT
246 infrastructure.
248 However, there are several requirements for making a network the
249 primary infrastructure for audio and video which are not met by
250   today's networks, and these are our concern in this draft.
252 The principal requirement is that pro audio and video applications
253 become able to establish streams that provide guaranteed (bounded)
254 bandwidth and latency from the Layer 3 (IP) interface. Such streams
255   can be created today within standards-based layer 2 islands; however,
256 these are not sufficient to enable effective distribution over wider
257 areas (for example broadcast events that span wide geographical
258 areas).
260 Some proprietary systems have been created which enable deterministic
261   streams at layer 3; however, they are engineered networks: they
262   require careful configuration to operate, often require that the
263   system be over-designed, and implicitly assume that all devices on
264   the network voluntarily play by the rules of that network.  To enable
265 these industries to successfully transition to an interoperable
266 multi-vendor packet-based infrastructure requires effective open
267 standards, and we believe that establishing relevant IETF standards
268 is a crucial factor.
270   It would be highly desirable if such streams could be routed over
271   the open Internet; however, even intermediate solutions with more
272   limited scope (such as enterprise networks) can provide a substantial
273   improvement over today's networks, and a solution that only provides
274   for the enterprise network scenario is an acceptable first step.
276   We also present more fine-grained requirements of the audio and video
277 industries such as safety and security, redundant paths, devices with
278 limited computing resources on the network, and that reserved stream
279 bandwidth is available for use by other best-effort traffic when that
280 stream is not currently in use.
282 2.2. Fundamental Stream Requirements
284 The fundamental stream properties are guaranteed bandwidth and
285 deterministic latency as described in this section. Additional
286 stream requirements are described in a subsequent section.
288 2.2.1. Guaranteed Bandwidth
290 Transmitting audio and video streams is unlike common file transfer
291 activities because guaranteed delivery cannot be achieved by re-
292 trying the transmission; by the time the missing or corrupt packet
293 has been identified it is too late to execute a re-try operation and
294   stream playback is interrupted, which is unacceptable in, for example,
295   a live concert.  In some contexts large amounts of buffering can be
296 used to provide enough delay to allow time for one or more retries,
297 however this is not an effective solution when live interaction is
298 involved, and is not considered an acceptable general solution for
299 pro audio and video. (Have you ever tried speaking into a microphone
300 through a sound system that has an echo coming back at you? It makes
301 it almost impossible to speak clearly).
303 Providing a way to reserve a specific amount of bandwidth for a given
304 stream is a key requirement.
306 2.2.2. Bounded and Consistent Latency
308 Latency in this context means the amount of time that passes between
309 when a signal is sent over a stream and when it is received, for
310 example the amount of time delay between when you speak into a
311 microphone and when your voice emerges from the speaker. Any delay
312 longer than about 10-15 milliseconds is noticeable by most live
313 performers, and greater latency makes the system unusable because it
314 prevents them from playing in time with the other players (see slide
315 6 of [SRP_LATENCY]).
317   The 15 ms latency bound is made even more challenging because
318   network-based music production with live electric instruments
319   often uses multiple stages of signal processing connected in
320   series (for example, a guitar feeding a chain of digital effects
321   processors).  In such cases the latencies add, so the sum of the
322   individual stage latencies must remain less than 15 ms.
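As an illustration only (not part of any proposed DetNet design), the additive latency budget for series-connected stages can be sketched as follows; the function names and example values are hypothetical:

```python
# Hypothetical sketch: the end-to-end latency of stages connected in
# series is the sum of the per-stage latencies, and the total must
# stay under the ~15 ms bound noted above.
LATENCY_BOUND_MS = 15.0

def total_latency_ms(stage_latencies_ms):
    """Sum the latencies of signal-processing stages in series."""
    return sum(stage_latencies_ms)

def within_bound(stage_latencies_ms, bound_ms=LATENCY_BOUND_MS):
    """True if the chained stages still meet the latency budget."""
    return total_latency_ms(stage_latencies_ms) <= bound_ms

# Guitar -> two effects processors -> mixer, e.g. 4 + 3 + 2 + 5 ms:
stages = [4.0, 3.0, 2.0, 5.0]
print(total_latency_ms(stages), within_bound(stages))  # 14.0 True
```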
325 In some situations it is acceptable at the local location for content
326 from the live remote site to be delayed to allow for a statistically
327 acceptable amount of latency in order to reduce jitter. However,
328 once the content begins playing in the local location any audio
329 artifacts caused by the local network are unacceptable, especially in
330 those situations where a live local performer is mixed into the feed
331 from the remote location.
333 In addition to being bounded to within some predictable and
334   acceptable amount of time (which may be more or less than 15
335   milliseconds depending on the application), the latency also has to be
336 consistent. For example when playing a film consisting of a video
337 stream and audio stream over a network, those two streams must be
338 synchronized so that the voice and the picture match up. A common
339 tolerance for audio/video sync is one NTSC video frame (about 33ms)
340 and to maintain the audience perception of correct lip sync the
341 latency needs to be consistent within some reasonable tolerance, for
342 example 10%.
344 A common architecture for synchronizing multiple streams that have
345 different paths through the network (and thus potentially different
346 latencies) is to enable measurement of the latency of each path, and
347 have the data sinks (for example speakers) buffer (delay) all packets
348 on all but the slowest path. Each packet of each stream is assigned
349 a presentation time which is based on the longest required delay.
350 This implies that all sinks must maintain a common time reference of
351 sufficient accuracy, which can be achieved by any of various
352 techniques.
354 This type of architecture is commonly implemented using a central
355 controller that determines path delays and arbitrates buffering
356 delays.
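The buffering scheme described above can be sketched as follows; this is a minimal illustration with hypothetical helper names, not a specification of any controller protocol:

```python
# Hypothetical sketch: every sink delays playback to match the slowest
# path, so each packet's presentation time is derived from the longest
# path delay and all sinks play it at the same instant.
def presentation_delay_ms(path_delays_ms):
    """The common playback delay is set by the slowest path."""
    return max(path_delays_ms)

def sink_buffer_ms(own_path_delay_ms, path_delays_ms):
    """Extra buffering a given sink needs beyond its own path delay."""
    return presentation_delay_ms(path_delays_ms) - own_path_delay_ms

# Example path delays as measured by a central controller:
paths = {"speaker_a": 2.0, "speaker_b": 5.0, "recorder": 9.0}
for sink, delay in paths.items():
    print(sink, sink_buffer_ms(delay, paths.values()))
```

A sink on the slowest path needs no extra buffering; sinks on faster paths hold packets until the shared presentation time arrives.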
358 2.2.2.1. Optimizations
360 The controller might also perform optimizations based on the
361 individual path delays, for example sinks that are closer to the
362 source can inform the controller that they can accept greater latency
363 since they will be buffering packets to match presentation times of
364 farther away sinks. The controller might then move a stream
365 reservation on a short path to a longer path in order to free up
366 bandwidth for other critical streams on that short path. See slides
367 3-5 of [SRP_LATENCY].
369 Additional optimization can be achieved in cases where sinks have
370 differing latency requirements, for example in a live outdoor concert
371 the speaker sinks have stricter latency requirements than the
372 recording hardware sinks. See slide 7 of [SRP_LATENCY].
374 Device cost can be reduced in a system with guaranteed reservations
375 with a small bounded latency due to the reduced requirements for
376 buffering (i.e. memory) on sink devices. For example, a theme park
377 might broadcast a live event across the globe via a layer 3 protocol;
378 in such cases the size of the buffers required is proportional to the
379 latency bounds and jitter caused by delivery, which depends on the
380   worst-case segment of the end-to-end network path.  For example, on
381   today's open internet the latency is typically unacceptable for audio
382 and video streaming without many seconds of buffering. In such
383 scenarios a single gateway device at the local network that receives
384 the feed from the remote site would provide the expensive buffering
385 required to mask the latency and jitter issues associated with long
386 distance delivery. Sink devices in the local location would have no
387 additional buffering requirements, and thus no additional costs,
388 beyond those required for delivery of local content. The sink device
389 would be receiving the identical packets as those sent by the source
390 and would be unaware that there were any latency or jitter issues
391 along the path.
393 2.3. Additional Stream Requirements
395 The requirements in this section are more specific yet are common to
396 multiple audio and video industry applications.
398 2.3.1. Deterministic Time to Establish Streaming
400 Some audio systems installed in public environments (airports,
401 hospitals) have unique requirements with regards to health, safety
402 and fire concerns. One such requirement is a maximum of 3 seconds
403 for a system to respond to an emergency detection and begin sending
404 appropriate warning signals and alarms without human intervention.
405 For this requirement to be met, the system must support a bounded and
406 acceptable time from a notification signal to specific stream
407 establishment. For further details see [ISO7240-16].
409 Similar requirements apply when the system is restarted after a power
410 cycle, cable re-connection, or system reconfiguration.
412 In many cases such re-establishment of streaming state must be
413 achieved by the peer devices themselves, i.e. without a central
414 controller (since such a controller may only be present during
415 initial network configuration).
417 Video systems introduce related requirements, for example when
418 transitioning from one camera feed to another. Such systems
419 currently use purpose-built hardware to switch feeds smoothly,
420 however there is a current initiative in the broadcast industry to
421 switch to a packet-based infrastructure (see [STUDIO_IP] and the ESPN
422 DC2 use case described below).
424 2.3.2. Use of Unused Reservations by Best-Effort Traffic
426 In cases where stream bandwidth is reserved but not currently used
427 (or is under-utilized) that bandwidth must be available to best-
428 effort (i.e. non-time-sensitive) traffic. For example a single
429 stream may be nailed up (reserved) for specific media content that
430 needs to be presented at different times of the day, ensuring timely
431 delivery of that content, yet in between those times the full
432 bandwidth of the network can be utilized for best-effort tasks such
433 as file transfers.
435   This also addresses a concern of IT network administrators who are
436   considering adding reserved-bandwidth traffic to their networks:
437   that users will reserve large amounts of bandwidth and never
438   release it, even when they are not using it, until no bandwidth
439   is left.
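The accounting implied by this requirement can be sketched as follows; this is a hypothetical illustration of the idea, not a proposed admission-control mechanism:

```python
# Hypothetical sketch: bandwidth reserved for a stream that is
# currently idle is counted back into the pool available to
# best-effort traffic, rather than sitting unused.
def best_effort_capacity_mbps(link_mbps, reservations):
    """reservations: list of (reserved_mbps, currently_active) pairs.
    Only reservations actively carrying traffic are withheld from
    the best-effort pool."""
    in_use = sum(r for r, active in reservations if active)
    return link_mbps - in_use

# 1 Gb/s link; 400 Mb/s reserved but idle, 200 Mb/s reserved and active:
print(best_effort_capacity_mbps(1000, [(400, False), (200, True)]))  # 800
```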
441 2.3.3. Layer 3 Interconnecting Layer 2 Islands
443 As an intermediate step (short of providing guaranteed bandwidth
444 across the open internet) it would be valuable to provide a way to
445 connect multiple Layer 2 networks. For example layer 2 techniques
446 could be used to create a LAN for a single broadcast studio, and
447 several such studios could be interconnected via layer 3 links.
449 2.3.4. Secure Transmission
451 Digital Rights Management (DRM) is very important to the audio and
452 video industries. Any time protected content is introduced into a
453 network there are DRM concerns that must be maintained (see
454 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of
455 network technology, however there are cases when a secure link
456 supporting authentication and encryption is required by content
457 owners to carry their audio or video content when it is outside their
458 own secure environment (for example see [DCI]).
460 As an example, two techniques are Digital Transmission Content
461 Protection (DTCP) and High-Bandwidth Digital Content Protection
462 (HDCP). HDCP content is not approved for retransmission within any
463 other type of DRM, while DTCP may be retransmitted under HDCP.
464 Therefore if the source of a stream is outside of the network and it
465 uses HDCP protection it is only allowed to be placed on the network
466 with that same HDCP protection.
468 2.3.5. Redundant Paths
470 On-air and other live media streams must be backed up with redundant
471 links that seamlessly act to deliver the content when the primary
472 link fails for any reason. In point-to-point systems this is
473 provided by an additional point-to-point link; the analogous
474 requirement in a packet-based system is to provide an alternate path
475 through the network such that no individual link can bring down the
476 system.
478 2.3.6. Link Aggregation
480 For transmitting streams that require more bandwidth than a single
481 link in the target network can support, link aggregation is a
482 technique for combining (aggregating) the bandwidth available on
483 multiple physical links to create a single logical link of the
484 required bandwidth. However, if aggregation is to be used, the
485 network controller (or equivalent) must be able to determine the
486 maximum latency of any path through the aggregate link (see Bounded
487 and Consistent Latency section above).
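As a simple illustration (hypothetical function and values), the properties the controller must track for an aggregate link can be sketched as:

```python
# Hypothetical sketch: an aggregate of physical links offers the sum
# of the member bandwidths, but latency planning must use the worst
# (maximum) latency of any member link, per the section above.
def aggregate_properties(links):
    """links: list of (bandwidth_mbps, latency_ms) tuples.
    Returns (total_bandwidth_mbps, worst_case_latency_ms)."""
    bandwidth = sum(bw for bw, _ in links)
    worst_latency = max(lat for _, lat in links)
    return bandwidth, worst_latency

# Two 1 Gb/s members with different latencies:
print(aggregate_properties([(1000, 0.4), (1000, 0.7)]))  # (2000, 0.7)
```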
489 2.3.7. Traffic Segregation
491 Sink devices may be low cost devices with limited processing power.
492 In order to not overwhelm the CPUs in these devices it is important
493 to limit the amount of traffic that these devices must process.
495 As an example, consider the use of individual seat speakers in a
496 cinema. These speakers are typically required to be cost reduced
497 since the quantities in a single theater can reach hundreds of seats.
498 Discovery protocols alone in a one thousand seat theater can generate
499 enough broadcast traffic to overwhelm a low powered CPU. Thus an
500 installation like this will benefit greatly from some type of traffic
501 segregation that can define groups of seats to reduce traffic within
502 each group. All seats in the theater must still be able to
503 communicate with a central controller.
505 There are many techniques that can be used to support this
506 requirement including (but not limited to) the following examples.
508 2.3.7.1. Packet Forwarding Rules, VLANs and Subnets
510 Packet forwarding rules can be used to eliminate some extraneous
511 streaming traffic from reaching potentially low powered sink devices,
512   however there may be other types of broadcast traffic that should be
513   eliminated using other means, for example VLANs or IP subnets.
515 2.3.7.2. Multicast Addressing (IPv4 and IPv6)
517 Multicast addressing is commonly used to keep bandwidth utilization
518 of shared links to a minimum.
520 Because of the MAC Address forwarding nature of Layer 2 bridges it is
521 important that a multicast MAC address is only associated with one
522 stream. This will prevent reservations from forwarding packets from
523 one stream down a path that has no interested sinks simply because
524 there is another stream on that same path that shares the same
525 multicast MAC address.
527   Since each multicast MAC address can represent 32 different IPv4
528   multicast addresses, there must be a process in place to make sure
529   this does not occur.  Requiring the use of IPv6 addresses can achieve
530   this; however, due to the continued prevalence of IPv4, solutions
531   that are effective for IPv4 installations are also required.
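The 32:1 overlap follows from the standard mapping of an IPv4 multicast group to an Ethernet MAC address: the fixed prefix 01:00:5e plus only the low-order 23 bits of the group address. A small sketch makes the collision concrete:

```python
# Standard IPv4 multicast-to-MAC mapping: OUI 01:00:5e plus the low
# 23 bits of the group address.  Of the 28 variable bits in a
# multicast address, 5 are discarded, so 32 groups share each MAC.
import ipaddress

def ipv4_multicast_mac(group):
    addr = int(ipaddress.IPv4Address(group))
    low23 = addr & 0x7FFFFF            # only 23 bits survive the mapping
    octets = [0x01, 0x00, 0x5E,
              (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)

# Two distinct groups that collide on the same multicast MAC address:
print(ipv4_multicast_mac("224.1.1.1"))   # 01:00:5e:01:01:01
print(ipv4_multicast_mac("225.1.1.1"))   # 01:00:5e:01:01:01
```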
533 2.4. Integration of Reserved Streams into IT Networks
535   A commonly cited goal of moving to a packet-based media
536   infrastructure is that costs can be reduced by using off-the-shelf,
537 commodity network hardware. In addition, economy of scale can be
538 realized by combining media infrastructure with IT infrastructure.
539 In keeping with these goals, stream reservation technology should be
540 compatible with existing protocols, and not compromise use of the
541 network for best effort (non-time-sensitive) traffic.
543 2.5. Security Considerations
545 Many industries that are moving from the point-to-point world to the
546 digital network world have little understanding of the pitfalls that
547 they can create for themselves with improperly implemented network
548 infrastructure. DetNet should consider ways to provide security
549 against DoS attacks in solutions directed at these markets. Some
550 considerations are given here as examples of ways that we can help
551 new users avoid common pitfalls.
553 2.5.1. Denial of Service
555 One security pitfall that this author is aware of involves the use of
556 technology that allows a presenter to throw the content from their
557 tablet or smart phone onto the A/V system that is then viewed by all
558 those in attendance. The facility introducing this technology was
559 quite excited to allow such modern flexibility to those who came to
560 speak. What they had not realized was that, because no security was
561 put in place around this technology, it left a hole in the system
562 that allowed other attendees to "throw" their own content onto the
563 A/V system.
565 2.5.2. Control Protocols
567 Professional audio systems can include amplifiers that are capable of
568 generating hundreds or thousands of watts of audio power which, if
569 used incorrectly, can cause hearing damage to those in the vicinity.
570 Apart from the usual care required by the systems operators to
571 prevent such incidents, the network traffic that controls these
572 devices must be secured (as with any sensitive application traffic).
573 In addition, it would be desirable if the configuration protocols
574 that are used to create the network paths used by the professional
575 audio traffic could be designed to protect devices that are not meant
576 to receive high-amplitude content from having such potentially
577 damaging signals routed to them.
579 2.6. A State-of-the-Art Broadcast Installation Hits Technology Limits
581 ESPN recently constructed a state-of-the-art 194,000 sq ft, $125
582 million broadcast studio called DC2. The DC2 network is capable of
583 handling 46 Tbps of throughput with 60,000 simultaneous signals.
584 Inside the facility are 1,100 miles of fiber feeding four audio
585 control rooms. (See details at [ESPN_DC2].)
587 In designing DC2 they replaced as much point-to-point technology as
588 they possibly could with packet-based technology. They constructed
589 seven individual studios using Layer 2 LANs (using IEEE 802.1 AVB)
590 that were entirely effective at routing audio within the LANs, and
591 they were very happy with the results; however, to interconnect
592 these Layer 2 LAN islands they ended up using dedicated links,
593 because no standards-based routing solution was available.
595 This is the kind of motivation for developing these standards:
596 customers are ready and able to use them.
598 3. Utility Telecom Use Cases
600 3.1. Overview
602 [I-D.finn-detnet-problem-statement] defines the characteristics of a
603 deterministic flow as a data communication flow with a bounded
604 latency, extraordinarily low frame loss, and a very narrow jitter.
605 This document intends to define the utility requirements for
606 deterministic networking.
608 Utility Telecom Networks
610 The business and technology trends that are sweeping the utility
611 industry will drastically transform the utility business from the way
612 it has been for many decades. At the core of many of these changes
613 is a drive to modernize the electrical grid with an integrated
614 telecommunications infrastructure. However, interoperability
615 concerns, legacy networks, disparate tools, and stringent security
616 requirements all add complexity to the grid transformation. Given
617 the range and diversity of the requirements that should be addressed
618 by the next generation telecommunications infrastructure, utilities
619 need to adopt a holistic architectural approach to integrate the
620 electrical grid with digital telecommunications across the entire
621 power delivery chain.
623 Many utilities still rely on complex environments formed of multiple
624 application-specific, proprietary networks. Information is siloed
625 between operational areas. This prevents utility operations from
626 realizing the operational efficiency benefits, visibility, and
627 functional integration of operational information across grid
628 applications and data networks. The key to modernizing grid
629 telecommunications is to provide a common, adaptable, multi-service
630 network infrastructure for the entire utility organization. Such a
631 network serves as the platform for current capabilities while
632 enabling future expansion of the network to accommodate new
633 applications and services.
635 To meet this diverse set of requirements, both today and in the
636 future, the next generation utility telecommunications network will
637 be based on an open-standards-based IP architecture. An end-to-end IP
638 architecture takes advantage of nearly three decades of IP technology
639 development, facilitating interoperability across disparate networks
640 and devices, as has already been demonstrated in many mission-
641 critical and highly secure networks.
643 The IEC (International Electrotechnical Commission) and various
644 National Committees have mandated a specific ad hoc group (AHG8) to
645 define the migration strategy to IPv6 for all the IEC TC57 power
646 automation standards. IPv6 is seen as the obvious future
647 telecommunications technology for the Smart Grid. The ad hoc group
648 disclosed its conclusions to the IEC coordination group at the end
649 of 2014.
651 It is imperative that utilities participate in standards development
652 bodies to influence the development of future solutions and to
653 benefit from shared experiences of other utilities and vendors.
655 3.2. Telecommunications Trends and General Telecommunications
656 Requirements
658 These general telecommunications requirements are over and above the
659 specific requirements of the use cases that have been addressed so
660 far. These include both current and future telecommunications
661 related requirements that should be factored into the network
662 architecture and design.
664 3.2.1. General Telecommunications Requirements
666 o IP Connectivity everywhere
668 o Monitoring services everywhere and from different remote centers
670 o Move services to a virtual data center
671 o Unify access to applications / information from the corporate
672 network
674 o Unify services
676 o Unified Communications Solutions
678 o Mix of fiber and microwave technologies - obsolescence of SONET/
679 SDH or TDM
681 o Standardize grid telecommunications protocols to open standards to
682 ensure interoperability
684 o Reliable Telecommunications for Transmission and Distribution
685 Substations
687 o IEEE 1588 time synchronization Client / Server Capabilities
689 o Integration of Multicast Design
691 o QoS Requirements Mapping
693 o Enable Future Network Expansion
695 o Substation Network Resilience
697 o Fast Convergence Design
699 o Scalable Headend Design
701 o Define Service Level Agreements (SLA) and Enable SLA Monitoring
703 o Integration of 3G/4G Technologies and future technologies
705 o Ethernet Connectivity for Station Bus Architecture
707 o Ethernet Connectivity for Process Bus Architecture
709 o Protection, teleprotection and PMU (Phasor Measurement Unit) on IP
711 3.2.1.1. Migration to Packet-Switched Network
713 Throughout the world, utilities are increasingly planning for a
714 future based on smart grid applications requiring advanced
715 telecommunications systems. Many of these applications utilize
716 packet connectivity for communicating information and control signals
717 across the utility's Wide Area Network (WAN), made possible by
718 technologies such as multiprotocol label switching (MPLS). The data
719 that traverses the utility WAN includes:
721 o Grid monitoring, control, and protection data
723 o Non-control grid data (e.g. asset data for condition-based
724 monitoring)
726 o Physical safety and security data (e.g. voice and video)
728 o Remote worker access to corporate applications (voice, maps,
729 schematics, etc.)
731 o Field area network backhaul for smart metering, and distribution
732 grid management
734 o Enterprise traffic (email, collaboration tools, business
735 applications)
737 WANs support this wide variety of traffic to and from substations,
738 the transmission and distribution grid, generation sites, between
739 control centers, and between work locations and data centers. To
740 maintain this rapidly expanding set of applications, many utilities
741 are taking steps to evolve present time-division multiplexing (TDM)
742 based and frame relay infrastructures to packet systems. Packet-
743 based networks are designed to provide greater functionality and
744 higher levels of service for applications, while continuing to
745 deliver reliability and deterministic (real-time) traffic support.
747 3.2.2. Applications, Use cases and traffic patterns
749 Among the numerous applications and use cases that a utility deploys
750 today, many rely on high availability and deterministic behaviour of
751 the telecommunications networks. Protection use cases and generation
752 control are the most demanding and cannot rely on a best-effort
753 approach.
755 3.2.2.1. Transmission use cases
757 Protection means not only the protection of the human operator but
758 also the protection of the electrical equipment and the preservation
759 of the stability and frequency of the grid. If a fault occurs on
760 the transmission or distribution of electricity, serious harm could
761 come to human operators as well as to very costly electrical
762 equipment, and the grid could be perturbed, leading to blackouts.
763 The time and reliability requirements are therefore very stringent,
764 to avoid dramatic impacts to the electrical infrastructure.
766 3.2.2.1.1. Tele Protection
768 The key criteria for measuring Teleprotection performance are command
769 transmission time, dependability and security. These criteria are
770 defined by the IEC standard 60834 as follows:
772 o Transmission time (Speed): The time between the moment where state
773 changes at the transmitter input and the moment of the
774 corresponding change at the receiver output, including propagation
775 delay. Overall operating time for a Teleprotection system
776 includes the time for initiating the command at the transmitting
777 end, the propagation delay over the network (including equipment)
778 and the selection and decision time at the receiving end,
779 including any additional delay due to a noisy environment.
781 o Dependability: The ability to issue and receive valid commands in
782 the presence of interference and/or noise, by minimizing the
783 probability of missing command (PMC). Dependability targets are
784 typically set for a specific bit error rate (BER) level.
786 o Security: The ability to prevent false tripping due to a noisy
787 environment, by minimizing the probability of unwanted commands
788 (PUC). Security targets are also set for a specific bit error
789 rate (BER) level.
791 Additional key elements that may impact Teleprotection performance
792 include bandwidth rate of the Teleprotection system and its
793 resiliency or failure recovery capacity. Transmission time,
794 bandwidth utilization and resiliency are directly linked to the
795 telecommunications equipment and the connections that are used to
796 transfer the commands between relays.
798 3.2.2.1.1.1. Latency Budget Consideration
800 Delay requirements for utility networks may vary depending upon a
801 number of parameters, such as the specific protection equipment
802 used. Most power line equipment can tolerate short circuits or
803 faults for up to approximately five power cycles before sustaining
804 irreversible damage or affecting other segments in the network. This
805 translates to total fault clearance time of 100ms. As a safety
806 precaution, however, actual operation time of protection systems is
807 limited to 70-80 percent of this period, including fault recognition
808 time, command transmission time and line breaker switching time.
809 Some system components, such as large electromechanical switches,
810 require a particularly long time to operate and take up the majority of
811 the total clearance time, leaving only a 10ms window for the
812 telecommunications part of the protection scheme, independent of the
813 distance to travel. Given the sensitivity of the issue, new networks
814 impose requirements that are even more stringent: IEC standard 61850
815 limits the transfer time for protection messages to 1/4 - 1/2 cycle
816 or 4 - 8ms (for 60Hz lines) for the most critical messages.
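The arithmetic behind the figures above can be sketched directly (the five-cycle tolerance is computed here for a 50 Hz line, and the IEC 61850 transfer-time limits for a 60 Hz line, as in the text):

```python
# Back-of-the-envelope check of the latency budget described above.
def cycles_to_ms(cycles: float, hz: float) -> float:
    """Convert a number of power cycles to milliseconds at a given line frequency."""
    return cycles * 1000.0 / hz

fault_tolerance_ms = cycles_to_ms(5, 50)          # ~100 ms total fault clearance
operating_budget_ms = (0.7 * fault_tolerance_ms,  # 70-80 percent safety margin
                       0.8 * fault_tolerance_ms)
iec61850_ms = (cycles_to_ms(0.25, 60), cycles_to_ms(0.5, 60))

print(fault_tolerance_ms)   # 100.0
print(operating_budget_ms)  # (70.0, 80.0)
print(iec61850_ms)          # ~(4.17, 8.33) ms for the most critical messages
```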
818 3.2.2.1.1.2. Asymmetric delay
820 In addition to minimal transmission delay, a differential protection
821 telecommunications channel must be synchronous, i.e., experiencing
822 symmetrical channel delay in transmit and receive paths. This
823 requires special attention in jitter-prone packet networks. While
824 optimally Teleprotection systems should support zero asymmetric
825 delay, typical legacy relays can tolerate discrepancies of up to
826 750us.
828 The main tools available for lowering delay variation below this
829 threshold are:
831 o A jitter buffer at the multiplexers on each end of the line can be
832 used to offset delay variation by queuing sent and received
833 packets. The length of the queues must balance the need to
834 regulate the rate of transmission with the need to limit overall
835 delay, as larger buffers result in increased latency. This is the
836 traditional TDM way to fulfill this requirement.
838 o Traffic management tools ensure that the Teleprotection signals
839 receive the highest transmission priority and to minimize the
840 jitter added along the path. This is one way to meet the
841 requirement in IP networks.
843 o Standard packet-based synchronization technologies, such as IEEE
844 1588-2008 Precision Time Protocol (PTP) and Synchronous Ethernet
845 (Sync-E), can help maintain stable networks by keeping a highly
846 accurate clock source on the different network devices involved.
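The asymmetry tolerance quoted above (up to 750 us for typical legacy relays) amounts to a simple check on the discrepancy between the transmit-path and receive-path delays. A minimal sketch, with illustrative measurement values:

```python
# Check channel delay symmetry against the legacy-relay tolerance quoted
# in the text (750 us). The delay figures below are illustrative.
def asymmetry_ok(tx_delay_us: float, rx_delay_us: float,
                 tolerance_us: float = 750.0) -> bool:
    """True if the transmit/receive delay discrepancy is tolerable."""
    return abs(tx_delay_us - rx_delay_us) <= tolerance_us

print(asymmetry_ok(4200.0, 4600.0))   # True: 400 us asymmetry
print(asymmetry_ok(4200.0, 5100.0))   # False: 900 us exceeds 750 us
```

The jitter buffers, traffic management, and synchronization tools listed above are different ways of keeping this discrepancy inside the tolerance.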
848 3.2.2.1.1.2.1. Other traffic characteristics
850 o Redundancy: The existence in a system of more than one means of
851 accomplishing a given function.
853 o Recovery time: The duration of time within which a business
854 process must be restored after any type of disruption in order to
855 avoid unacceptable consequences associated with a break in
856 business continuity.
858 o Performance management: In networking, a management function
859 defined for controlling and analyzing different parameters/metrics,
860 such as throughput and error rate.
862 o Packet loss: One or more packets of data travelling across a
863 network fail to reach their destination.
865 3.2.2.1.1.2.2. Teleprotection network requirements
867 The following table captures the main network requirements (based
868 on the IEC 61850 standard):
870 +-----------------------------+-------------------------------------+
871 | Teleprotection Requirement | Attribute |
872 +-----------------------------+-------------------------------------+
873 | One way maximum delay | 4-10 ms |
874 | Asymmetric delay required   | Yes                                 |
875 | Maximum jitter | less than 250 us (750 us for legacy |
876 | | IED) |
877 | Topology | Point to point, point to Multi- |
878 | | point |
879 | Availability | 99.9999 |
880 | precise timing required | Yes |
881 | Recovery time on node | less than 50ms - hitless |
882 | failure | |
883 | performance management | Yes, Mandatory |
884 | Redundancy | Yes |
885 | Packet loss | 0.1% to 1% |
886 +-----------------------------+-------------------------------------+
888 Table 1: Teleprotection network requirements
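The Table 1 figures lend themselves to a simple conformance check of a measured path. The dictionary keys and the measured values below are an illustrative sketch, not part of IEC 61850 or any DetNet API; the loss and jitter limits use the upper ends of the ranges in the table:

```python
# Validate a measured path against the Table 1 teleprotection figures.
TELEPROTECTION_REQ = {
    "one_way_delay_ms": 10.0,   # upper end of the 4-10 ms range
    "jitter_us": 250.0,         # 750 us for legacy IEDs
    "recovery_ms": 50.0,        # recovery time on node failure
    "packet_loss_pct": 1.0,     # 0.1 % to 1 % depending on deployment
}

def meets_requirements(measured: dict, req: dict = TELEPROTECTION_REQ):
    """Return the list of requirement keys that the measured path violates."""
    return [k for k, limit in req.items()
            if measured.get(k, float("inf")) > limit]

path = {"one_way_delay_ms": 6.0, "jitter_us": 180.0,
        "recovery_ms": 35.0, "packet_loss_pct": 0.05}
print(meets_requirements(path))   # [] -> path satisfies Table 1
```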
890 3.2.2.1.2. Inter-Trip Protection scheme
892 Inter-tripping is the controlled tripping of a circuit breaker to
893 complete the isolation of a circuit or piece of apparatus in concert
894 with the tripping of other circuit breakers. The main use of such
895 schemes is to ensure that protection at both ends of a faulted
896 circuit will operate to isolate the equipment concerned. Inter-
897 tripping schemes use signaling to convey a trip command to remote
898 circuit breakers to isolate circuits.
900 +--------------------------------+----------------------------------+
901 | Inter-Trip protection | Attribute |
902 | Requirement | |
903 +--------------------------------+----------------------------------+
904 | One way maximum delay | 5 ms |
905 | Asymmetric delay required      | No                               |
906 | Maximum jitter | Not critical |
907 | Topology | Point to point, point to Multi- |
908 | | point |
909 | Bandwidth | 64 Kbps |
910 | Availability | 99.9999 |
911 | precise timing required | Yes |
912 | Recovery time on node failure | less than 50ms - hitless |
913 | performance management | Yes, Mandatory |
914 | Redundancy | Yes |
915 | Packet loss | 0.1% |
916 +--------------------------------+----------------------------------+
918 Table 2: Inter-Trip protection network requirements
920 3.2.2.1.3. Current Differential Protection Scheme
922 Current differential protection is commonly used for line protection,
923 and is typical for protecting parallel circuits. A main advantage
924 for differential protection is that, compared to overcurrent
925 protection, it allows only the faulted circuit to be de-energized in
926 case of a fault. At both ends of the line, the current is measured
927 by the differential relays and, based on Kirchhoff's law, both relays
928 will trip the circuit breaker if the current going into the line does
929 not equal the current going out of the line. This type of protection
930 scheme assumes some form of communications being present between the
931 relays at both ends of the line, to allow both relays to compare
932 measured current values. A fault in line 1 will cause overcurrent to
933 be flowing in both lines, but because the current in line 2 is a
934 through-flowing current, it is measured as equal at both
935 ends of the line, therefore the differential relays on line 2 will
936 not trip line 2. Line 1 will be tripped, as the relays will not
937 measure the same currents at both ends of the line. Line
938 differential protection schemes assume a very low telecommunications
939 delay between both relays, often as low as 5ms. Moreover, as those
940 systems are often not time-synchronized, they also assume symmetric
941 telecommunications paths with constant delay, which allows comparing
942 current measurement values taken at the exact same time.
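The comparison each relay performs can be sketched in a few lines; the trip threshold is illustrative, and a real relay would also apply restraint characteristics:

```python
# Sketch of the current differential (Kirchhoff) comparison described
# above: trip when the current entering the line does not equal, within
# a margin, the current leaving it. The threshold is illustrative.
def differential_trip(i_in_amps: float, i_out_amps: float,
                      threshold_amps: float = 5.0) -> bool:
    """Trip if the in/out current difference exceeds the margin."""
    return abs(i_in_amps - i_out_amps) > threshold_amps

print(differential_trip(400.0, 399.0))   # False: through-flowing current
print(differential_trip(400.0, 150.0))   # True: fault inside the zone
```

Because the two current samples come from opposite ends of the line, this comparison is only valid if both measurements were taken at the same instant, which is why symmetric, constant-delay paths matter here.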
944 +----------------------------------+--------------------------------+
945 | Current Differential protection | Attribute |
946 | Requirement | |
947 +----------------------------------+--------------------------------+
948 | One way maximum delay | 5 ms |
949 | Asymmetric delay Required        | Yes                            |
950 | Maximum jitter | less than 250 us (750us for |
951 | | legacy IED) |
952 | Topology | Point to point, point to |
953 | | Multi-point |
954 | Bandwidth | 64 Kbps |
955 | Availability | 99.9999 |
956 | precise timing required | Yes |
957 | Recovery time on node failure | less than 50ms - hitless |
958 | performance management | Yes, Mandatory |
959 | Redundancy | Yes |
960 | Packet loss | 0.1% |
961 +----------------------------------+--------------------------------+
963 Table 3: Current Differential Protection requirements
965 3.2.2.1.4. Distance Protection Scheme
967 The distance (impedance relay) protection scheme is based on voltage and
968 current measurements. A fault on a circuit will generally create a
969 sag in the voltage level. If the ratio of voltage to current
970 measured at the protection relay terminals, which equates to an
971 impedance element, falls within a set threshold the circuit breaker
972 will operate. The operating characteristics of this protection are
973 based on the line characteristics. This means that when a fault
974 appears on the line, the impedance setting in the relay is compared
975 to the apparent impedance of the line from the relay terminals to the
976 fault. If the relay setting is determined to be below the apparent
977 impedance it is determined that the fault is within the zone of
978 protection. When the transmission line length is under a minimum
979 length, distance protection becomes more difficult to coordinate. In
980 these instances the best choice of protection is current differential
981 protection.
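The impedance comparison above reduces to computing V/I at the relay terminals and testing it against the zone setting. A minimal sketch with illustrative values:

```python
# Sketch of the distance (impedance) protection decision described above:
# trip when the apparent impedance V/I falls below the zone setting.
# All numeric values are illustrative.
def distance_trip(voltage_v: float, current_a: float,
                  zone_setting_ohms: float) -> bool:
    """Trip when the apparent impedance is inside the protected zone."""
    apparent_z = voltage_v / current_a
    return apparent_z < zone_setting_ohms

# A voltage sag combined with high fault current lowers the apparent impedance:
print(distance_trip(110_000.0, 400.0, 100.0))   # False: ~275 ohm, no fault
print(distance_trip(60_000.0, 2_500.0, 100.0))  # True: 24 ohm, within the zone
```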
983 +-------------------------------+-----------------------------------+
984 | Distance protection | Attribute |
985 | Requirement | |
986 +-------------------------------+-----------------------------------+
987 | One way maximum delay | 5 ms |
988 | Asymmetric delay Required     | No                                |
989 | Maximum jitter | Not critical |
990 | Topology | Point to point, point to Multi- |
991 | | point |
992 | Bandwidth | 64 Kbps |
993 | Availability | 99.9999 |
994 | precise timing required | Yes |
995 | Recovery time on node failure | less than 50ms - hitless |
996 | performance management | Yes, Mandatory |
997 | Redundancy | Yes |
998 | Packet loss | 0.1% |
999 +-------------------------------+-----------------------------------+
1001 Table 4: Distance Protection requirements
1003 3.2.2.1.5. Inter-Substation Protection Signaling
1005 This use case describes the exchange of Sampled Value and/or GOOSE
1006 (Generic Object Oriented Substation Events) message between
1007 Intelligent Electronic Devices (IED) in two substations for
1008 protection and tripping coordination. The two IEDs are in a master-
1009 slave mode.
1011 The Current Transformer or Voltage Transformer (CT/VT) in one
1012 substation sends the sampled analog voltage or current value to the
1013 Merging Unit (MU) over hard wire. The merging unit sends the time-
1014 synchronized 61850-9-2 sampled values to the slave IED. The slave
1015 IED forwards the information to the Master IED in the other
1016 substation. The master IED makes the determination (for example
1017 based on sampled value differentials) to send a trip command to the
1018 originating IED. Once the slave IED/Relay receives the GOOSE trip
1019 for breaker tripping, it opens the breaker. It then sends a
1020 confirmation message back to the master. All data exchanges between
1021 IEDs are either through Sampled Value and/or GOOSE messages.
1023 +----------------------------------+--------------------------------+
1024 | Inter-Substation protection | Attribute |
1025 | Requirement | |
1026 +----------------------------------+--------------------------------+
1027 | One way maximum delay | 5 ms |
1028 | Asymmetric delay Required        | No                             |
1029 | Maximum jitter | Not critical |
1030 | Topology | Point to point, point to |
1031 | | Multi-point |
1032 | Bandwidth | 64 Kbps |
1033 | Availability | 99.9999 |
1034 | precise timing required | Yes |
1035 | Recovery time on node failure | less than 50ms - hitless |
1036 | performance management | Yes, Mandatory |
1037 | Redundancy | Yes |
1038 | Packet loss | 1% |
1039 +----------------------------------+--------------------------------+
1041 Table 5: Inter-Substation Protection requirements
1043 3.2.2.1.6. Intra-Substation Process Bus Communications
1045 This use case describes the data flow from the CT/VT to the IEDs in
1046 the substation via the merging unit (MU). The CT/VT in the
1047 substation send the sampled value (analog voltage or current) to the
1048 Merging Unit (MU) over hard wire. The merging unit sends the time-
1049 synchronized 61850-9-2 sampled values to the IEDs in the substation
1050 in GOOSE message format. The GPS Master Clock can send 1PPS or
1051 IRIG-B format to MU through serial port, or IEEE 1588 protocol via
1052 network. Process bus communication using 61850 simplifies
1053 connectivity within the substation, removes the requirement for
1054 multiple serial connections, and eliminates the slow serial bus
1055 architectures that are typically used. This also ensures increased
1056 flexibility and increased speed with the use of multicast messaging
1057 between multiple devices.
1059 +----------------------------------+--------------------------------+
1060 | Intra-Substation protection | Attribute |
1061 | Requirement | |
1062 +----------------------------------+--------------------------------+
1063 | One way maximum delay | 5 ms |
1064 | Asymmetric delay Required        | No                             |
1065 | Maximum jitter | Not critical |
1066 | Topology | Point to point, point to |
1067 | | Multi-point |
1068 | Bandwidth | 64 Kbps |
1069 | Availability | 99.9999 |
1070 | precise timing required | Yes |
1071 | Recovery time on Node failure | less than 50ms - hitless |
1072 | performance management | Yes, Mandatory |
1073 | Redundancy | Yes - No |
1074 | Packet loss | 0.1% |
1075 +----------------------------------+--------------------------------+
1077 Table 6: Intra-Substation Protection requirements
1079 3.2.2.1.7. Wide Area Monitoring and Control Systems
1081 The application of synchrophasor measurement data from Phasor
1082 Measurement Units (PMU) to Wide Area Monitoring and Control Systems
1083 promises to provide important new capabilities for improving system
1084 stability. Access to PMU data enables more timely situational
1085 awareness over larger portions of the grid than what has been
1086 possible historically with normal SCADA (Supervisory Control and Data
1087 Acquisition) data. Handling the volume and real-time nature of
1088 synchrophasor data presents unique challenges for existing
1089 application architectures. Wide Area Management System (WAMS) makes
1090 it possible for the condition of the bulk power system to be observed
1091 and understood in real-time so that protective, preventative, or
1092 corrective action can be taken. Because of the very high sampling
1093 rate of measurements and the strict requirement for time
1094 synchronization of the samples, WAMS has stringent telecommunications
1095 requirements in an IP network that are captured in the following
1096 table:
1098 +----------------------+--------------------------------------------+
1099 | WAMS Requirement | Attribute |
1100 +----------------------+--------------------------------------------+
1101 | One way maximum | 50 ms |
1102 | delay | |
1103 | Asymmetric delay    | No                                         |
1104 | Required | |
1105 | Maximum jitter | Not critical |
1106 | Topology | Point to point, point to Multi-point, |
1107 | | Multi-point to Multi-point |
1108 | Bandwidth | 100 Kbps |
1109 | Availability | 99.9999 |
1110 | precise timing | Yes |
1111 | required | |
1112 | Recovery time on | less than 50ms - hitless |
1113 | Node failure | |
1114 | performance | Yes, Mandatory |
1115 | management | |
1116 | Redundancy | Yes |
1117 | Packet loss | 1% |
1118 +----------------------+--------------------------------------------+
1120 Table 7: WAMS Special Communication Requirements
1122 3.2.2.1.8. IEC 61850 WAN engineering guidelines requirement
1123 classification
1125 The IEC (International Electrotechnical Commission) has recently
1126 published a Technical Report which offers guidelines on how to define
1127 and deploy Wide Area Networks for the interconnection of electric
1128 substations, generation plants and SCADA operation centers. IEC
1129 61850-90-12 classifies WAN communication requirements into four
1130 classes. The following table summarizes these
1131 requirements:
1133 +----------------+------------+------------+------------+-----------+
1134 | WAN | Class WA | Class WB | Class WC | Class WD |
1135 | Requirement | | | | |
1136 +----------------+------------+------------+------------+-----------+
1137 | Application | EHV (Extra | HV (High | MV (Medium | General |
1138 | field | High | Voltage) | Voltage) | purpose |
1139 | | Voltage) | | | |
1140 | Latency | 5 ms | 10 ms | 100 ms | > 100 ms |
1141 | Jitter | 10 us | 100 us | 1 ms | 10 ms |
1142 | Latency | 100 us | 1 ms | 10 ms | 100 ms |
1143 | Asymmetry      |            |            |            |           |
1144 | Time Accuracy | 1 us | 10 us | 100 us | 10 to 100 |
1145 | | | | | ms |
1146 | Bit Error rate | 10^-7 to   | 10^-5 to   | 10^-3      |           |
1147 |                | 10^-6      | 10^-4      |            |           |
1148 | Unavailability | 10^-7 to   | 10^-5 to   | 10^-3      |           |
1149 |                | 10^-6      | 10^-4      |            |           |
1150 | Recovery delay | Zero | 50 ms | 5 s | 50 s |
1151 | Cyber security | extremely | High | Medium | Medium |
1152 | | high | | | |
1153 +----------------+------------+------------+------------+-----------+
1155 Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC
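The Table 8 classes can be used to place a measured WAN path into the most demanding class it still satisfies. The sketch below checks only the latency and jitter rows; a real classification would test every row, and the function name is illustrative:

```python
# Map a measured WAN path onto the 61850-90-12 classes of Table 8,
# using only the latency and jitter rows for illustration.
CLASSES = [  # (name, max one-way latency ms, max jitter ms)
    ("WA", 5.0, 0.010),            # EHV applications
    ("WB", 10.0, 0.100),           # HV
    ("WC", 100.0, 1.0),            # MV
    ("WD", float("inf"), 10.0),    # general purpose
]

def classify(latency_ms: float, jitter_ms: float) -> str:
    """Return the most demanding class whose limits the path meets."""
    for name, max_lat, max_jit in CLASSES:
        if latency_ms <= max_lat and jitter_ms <= max_jit:
            return name
    return "unclassified"

print(classify(4.0, 0.008))    # WA: suitable for EHV applications
print(classify(60.0, 0.5))     # WC
```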
1157 3.2.2.2. Distribution use case
1159 3.2.2.2.1. Fault Location Isolation and Service Restoration (FLISR)
1161 As the name implies, Fault Location, Isolation, and Service
1162 Restoration (FLISR) refers to the ability to automatically locate the
1163 fault, isolate the fault, and restore service in the distribution
1164 network. It is a self-healing feature whose purpose is to minimize
1165 the impact of faults by switching portions of the load on the
1166 affected circuit to other circuits. It reduces the number of
1167 customers that experience a sustained power outage by reconfiguring
1168 distribution circuits. This will likely be the first widespread
1169 application of distributed intelligence in the grid. Secondary
1170 substations can be connected to multiple primary substations.
1171 Normally, static power switch statuses (open/closed) in the network
1172 dictate the power flow to secondary substations. Reconfiguring the
1173 network in the event of a fault is typically done manually on site
1174 by operating switchgear to energize/de-energize alternate paths.
1175 Automating the operation of substation switchgear allows the utility
1176 to have a more dynamic network where the flow of power can be altered
1177 under fault conditions but also during times of peak load. It allows
1178 the utility to shift peak loads around the network or, more
1179 precisely, to alter the configuration of the network to move loads
1180 between different primary substations. The FLISR capability can be
1181 enabled in two modes:
1183 o Managed centrally from DMS (Distribution Management System), or
1185 o Executed locally through distributed control via intelligent
1186 switches and fault sensors.
1188 There are three distinct sub-functions that are performed:
1190 1. Fault Location Identification
1192 This sub-function is initiated by SCADA inputs, such as lockouts,
1193 fault indications/location, and also by input from the Outage
1194 Management System (OMS), and in the future by inputs from fault-
1195 predicting devices. It determines the specific protective device,
1196 which has cleared the sustained fault, identifies the de-energized
1197 sections, and estimates the probable location of the actual or the
1198 expected fault. It distinguishes faults cleared by controllable
1199 protective devices from those cleared by fuses, and identifies
1200 momentary outages and inrush/cold load pick-up currents. This step
1201 is also referred to as Fault Detection Classification and Location
1202 (FDCL). This step helps to expedite the restoration of faulted
1203 sections through fast fault location identification and improved
1204 diagnostic information available for crew dispatch. It also
1205 provides visualization of fault information used to design and
1206 implement a switching plan to isolate the fault.
1208 2. Fault Type Determination
1210 I. Indicates faults cleared by controllable protective devices by
1211 distinguishing between:
1213 a. Faults cleared by fuses
1215 b. Momentary outages
1217 c. Inrush/cold load current
1219 II. Determines the faulted sections based on SCADA fault indications
1220 and protection lockout signals
1222 III. Increases the accuracy of the fault location estimation based
1223 on SCADA fault current measurements and real-time fault analysis
1225 3. Fault Isolation and Service Restoration
1226 Once the location and type of the fault have been pinpointed, the
1227 systems will attempt to isolate the fault and restore the non-faulted
1228 section of the network. This can have three modes of operation:
1230 I. Closed-loop mode : This is initiated by the Fault location sub-
1231 function. It generates a switching order (i.e., sequence of
1232 switching) for the remotely controlled switching devices to isolate
1233 the faulted section, and restore service to the non-faulted sections.
1234 The switching order is automatically executed via SCADA.
1236 II. Advisory mode : This is initiated by the Fault location sub-
1237 function. It generates a switching order for remotely and manually
1238 controlled switching devices to isolate the faulted section, and
1239 restore service to the non-faulted sections. The switching order is
1240 presented to the operator for approval and execution.
1242 III. Study mode : the operator initiates this function. It analyzes
1243 a saved case modified by the operator, and generates a switching
1244 order under the operating conditions specified by the operator.
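As an illustrative sketch only (not part of any FLISR standard), the three restoration modes above can be modeled as a simple dispatcher; the `Mode` and `restore` names, and the switching-order representation, are hypothetical:

```python
from enum import Enum

class Mode(Enum):
    CLOSED_LOOP = "closed-loop"  # switching order executed automatically via SCADA
    ADVISORY = "advisory"        # switching order presented to operator for approval
    STUDY = "study"              # operator-initiated analysis of a saved case

def restore(mode, switching_order, operator_approves=None):
    """Return the disposition of a switching order for a given FLISR mode.

    switching_order: ordered list of switch operations, e.g.
    [("open", "SW-12"), ("close", "SW-7")].
    """
    if mode is Mode.CLOSED_LOOP:
        return {"executed": switching_order}          # automatic SCADA execution
    if mode is Mode.ADVISORY:
        if operator_approves:
            return {"executed": switching_order}
        return {"pending_approval": switching_order}  # awaiting operator decision
    # Study mode only produces a plan; nothing is executed.
    return {"proposed": switching_order}
```

The point of the sketch is that only closed-loop mode acts without a human in the loop, which is why it carries the tightest communication requirements.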
1246 With the increasing volume of data that are collected through fault
1247 sensors, utilities will use Big Data query and analysis tools to
1248 study outage information to anticipate and prevent outages by
1249 detecting failure patterns and their correlation with asset age,
1250 type, load profiles, time of day, weather conditions, and other
1251 conditions to discover conditions that lead to faults and take the
1252 necessary preventive and corrective measures.
1254 +----------------------+--------------------------------------------+
1255 | FLISR Requirement | Attribute |
1256 +----------------------+--------------------------------------------+
1257 | One way maximum | 80 ms |
1258 | delay | |
1259 | Asymmetric delay     | No                                         |
1260 | Required | |
1261 | Maximum jitter | 40 ms |
1262 | Topology | Point to point, point to Multi-point, |
1263 | | Multi-point to Multi-point |
1264 | Bandwidth | 64 Kbps |
1265 | Availability | 99.9999 |
1266 | precise timing | Yes |
1267 | required | |
1268 | Recovery time on | Depends on customer impact |
1269 | Node failure | |
1270 | performance | Yes, Mandatory |
1271 | management | |
1272 | Redundancy | Yes |
1273 | Packet loss | 0.1% |
1274 +----------------------+--------------------------------------------+
1276 Table 9: FLISR Communication Requirements
1278 3.2.2.3. Generation use case
1280 3.2.2.3.1. Frequency Control / Automatic Generation Control (AGC)
1282 The system frequency should be maintained within a very narrow band.
1283 Deviations from the acceptable frequency range are detected and
1284 forwarded to the Load Frequency Control (LFC) system so that required
1285 up or down generation increase / decrease pulses can be sent to the
1286 power plants for frequency regulation. The trend in system frequency
1287 is a measure of mismatch between demand and generation, and is a
1288 necessary parameter for load control in interconnected systems.
1290 Automatic generation control (AGC) is a system for adjusting the
1291 power output of generators at different power plants, in response to
1292 changes in the load. Since a power grid requires that generation and
1293 load closely balance moment by moment, frequent adjustments to the
1294 output of generators are necessary. The balance can be judged by
1295 measuring the system frequency; if it is increasing, more power is
1296 being generated than used, and all machines in the system are
1297 accelerating. If the system frequency is decreasing, more demand is
1298 on the system than the instantaneous generation can provide, and all
1299 generators are slowing down.
1301 Where the grid has tie lines to adjacent control areas, automatic
1302 generation control helps maintain the power interchanges over the tie
1303 lines at the scheduled levels. The AGC takes into account various
1304 parameters including the most economical units to adjust, the
1305 coordination of thermal, hydroelectric, and other generation types,
1306 and even constraints related to the stability of the system and
1307 capacity of interconnections to other power grids.
1309 For the purposes of AGC, static frequency measurements are taken
1310 and averaging methods are used to obtain a more precise measure of
1311 system frequency in steady-state conditions.
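A minimal sketch of this raise/lower decision, assuming averaged static frequency samples and a simple proportional gain (the function name, the 50 Hz nominal, and the `gain_mw_per_hz` value are illustrative, not taken from any standard):

```python
def agc_adjustment(freq_samples_hz, nominal_hz=50.0, gain_mw_per_hz=200.0):
    """Derive an AGC raise/lower request from averaged frequency samples.

    A positive result means generation should be raised (average
    frequency below nominal, i.e. demand exceeds generation); a
    negative result means generation should be lowered.
    """
    avg = sum(freq_samples_hz) / len(freq_samples_hz)
    deviation = nominal_hz - avg        # Hz below nominal -> raise generation
    return gain_mw_per_hz * deviation   # MW adjustment request sent to plants
```

Real AGC additionally weighs economic dispatch, generation mix, and tie-line schedules, as described above; the sketch captures only the frequency-balance judgment.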
1313 During disturbances, more real-time dynamic measurements of system
1314 frequency are taken using PMUs, especially when different areas of
1315 the system exhibit different frequencies. But that is outside the
1316 scope of this use case.
1318 +---------------------------------------------------+---------------+
1319 | FCAG (Frequency Control Automatic Generation) | Attribute |
1320 | Requirement | |
1321 +---------------------------------------------------+---------------+
1322 | One way maximum delay | 500 ms |
1323 | Asymmetric delay Required                         | No            |
1324 | Maximum jitter | Not critical |
1325 | Topology | Point to |
1326 | | point |
1327 | Bandwidth | 20 Kbps |
1328 | Availability | 99.999 |
1329 | precise timing required | Yes |
1330 | Recovery time on Node failure | N/A |
1331 | performance management | Yes, |
1332 | | Mandatory |
1333 | Redundancy | Yes |
1334 | Packet loss | 1% |
1335 +---------------------------------------------------+---------------+
1337 Table 10: FCAG Communication Requirements
1339 3.2.3. Specific Network topologies of Smart Grid Applications
1341 Utilities often have very large private telecommunications networks
1342 covering an entire territory or country. The main purpose of such a
1343 network, until now, has been to support transmission network
1344 monitoring, control, and automation, remote control of generation
1345 sites, and providing FCAPS (Fault, Configuration, Accounting,
1346 Performance, Security) services from centralized network operation
1347 centers.
1349 Going forward, one network will support operation and maintenance of
1350 electrical networks (generation, transmission, and distribution),
1351 voice and data services for tens of thousands of employees and for
1352 exchange with neighboring interconnections, and administrative
1353 services. To meet those requirements, a utility may deploy several
1354 physical networks leveraging different technologies across the
1355 country: an optical network and a microwave network for instance.
1356 Each protection and automatism system between two points has two
1357 telecommunications circuits, one on each network. Path diversity
1358 between two substations is key. Regardless of the event type
1359 (hurricane, ice storm, etc.), one path shall stay available so the
1360 SPS can still operate.
1362 In the optical network, signals are transmitted over more than tens
1363 of thousands of circuits using fiber optic links, microwave and
1364 telephone cables. This network is the nervous system of the
1365 utility's power transmission operations. The optical network
1366 represents tens of thousands of km of cable deployed along the power
1367 lines.
1369 Due to the vast distances between transmission substations (for
1370 example as far as 280 km apart), the fiber signal must be amplified
1371 so that it can span such distances.
1373 3.2.4. Precision Time Protocol
1375 Some utilities do not use GPS clocks in generation substations. One
1376 of the main reasons is that some of the generation plants are 30 to
1377 50 meters deep under ground and the GPS signal can be weak and
1378 unreliable. Instead, atomic clocks are used. Clocks are
1379 synchronized amongst each other. Rubidium clocks provide clock and
1380 1ms timestamps for IRIG-B. Some companies plan to transition to the
1381 Precision Time Protocol (IEEE 1588), distributing the synchronization
1382 signal over the IP/MPLS network.
1384 The Precision Time Protocol (PTP) is defined in IEEE standard 1588.
1385 PTP is applicable to distributed systems consisting of one or more
1386 nodes, communicating over a network. Nodes are modeled as containing
1387 a real-time clock that may be used by applications within the node
1388 for various purposes such as generating time-stamps for data or
1389 ordering events managed by the node. The protocol provides a
1390 mechanism for synchronizing the clocks of participating nodes to a
1391 high degree of accuracy and precision.
1393 PTP operates based on the following assumptions:
1395 It is assumed that the network eliminates cyclic forwarding of PTP
1396 messages within each communication path (e.g., by using a spanning
1397 tree protocol). PTP eliminates cyclic forwarding of PTP messages
1398 between communication paths.
1400 PTP is tolerant of an occasional missed message, duplicated
1401 message, or message that arrived out of order. However, PTP
1402 assumes that such impairments are relatively rare.
1404 PTP was designed assuming a multicast communication model. PTP
1405 also supports a unicast communication model as long as the
1406 behavior of the protocol is preserved.
1408 Like all message-based time transfer protocols, PTP time accuracy
1409 is degraded by asymmetry in the paths taken by event messages.
1410 Asymmetry is not detectable by PTP; however, if it is known, PTP
1411 can correct for it.
1413 A time-stamp event is generated at the time of transmission and
1414 reception of any event message. The time-stamp event occurs when the
1415 message's timestamp point crosses the boundary between the node and
1416 the network.
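The offset and mean path delay that PTP derives from the Sync/Delay_Req timestamp exchange of IEEE 1588 can be sketched as follows; the function name and the representation of the known asymmetry term are illustrative simplifications:

```python
def ptp_offset_and_delay(t1, t2, t3, t4, asymmetry=0.0):
    """Compute slave clock offset and mean path delay from PTP timestamps.

    t1: master sends Sync; t2: slave receives Sync;
    t3: slave sends Delay_Req; t4: master receives Delay_Req.
    asymmetry: known excess of the master-to-slave delay over the mean
    path delay (zero for symmetric paths). PTP cannot detect asymmetry
    itself, but corrects for it when the value is known.
    """
    master_to_slave = t2 - t1
    slave_to_master = t4 - t3
    mean_path_delay = (master_to_slave + slave_to_master) / 2.0
    offset = (master_to_slave - slave_to_master) / 2.0 - asymmetry
    return offset, mean_path_delay
```

With symmetric paths the asymmetry term vanishes and the offset is simply half the difference of the two one-way measurements; any unknown asymmetry shows up directly as a time error, which is why path symmetry matters so much for utility timing.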
1418 IEC 61850 will recommend the use of the IEEE PTP 1588 Utility Profile
1419 (as defined in IEC 62439-3 Annex B) which offers the support of
1420 redundant attachment of clocks to Parallel Redundancy Protocol (PRP)
1421 and High-availability Seamless Redundancy (HSR) networks.
1423 3.3. IANA Considerations
1425 This memo includes no request to IANA.
1427 3.4. Security Considerations
1429 3.4.1. Current Practices and Their Limitations
1431 Grid monitoring and control devices are already targets for cyber
1432 attacks and legacy telecommunications protocols have many intrinsic
1433 network related vulnerabilities. DNP3, Modbus, PROFIBUS/PROFINET,
1434 and other protocols are designed around a common paradigm of request
1435 and respond. Each protocol is designed for a master device such as
1436 an HMI (Human Machine Interface) system to send commands to
1437 subordinate slave devices to retrieve data (reading inputs) or
1438 control (writing to outputs). Because many of these protocols lack
1439 authentication, encryption, or other basic security measures, they
1440 are prone to network-based attacks, allowing a malicious actor to
1441 utilize the request-and-respond system as a mechanism for
1442 command-and-control-like functionality. Specific security concerns
1443 common to most industrial control protocols, including utility
1444 telecommunication protocols, include the following:
1446 o Network or transport errors (e.g. malformed packets or excessive
1447 latency) can cause protocol failure.
1449 o Protocol commands may be available that are capable of forcing
1450 slave devices into inoperable states, including powering-off
1451 devices, forcing them into a listen-only state, or disabling
1452 alarming.
1454 o Protocol commands may be available that are capable of restarting
1455 communications and otherwise interrupting processes.
1457 o Protocol commands may be available that are capable of clearing,
1458 erasing, or resetting diagnostic information such as counters and
1459 diagnostic registers.
1461 o Protocol commands may be available that are capable of requesting
1462 sensitive information about the controllers, their configurations,
1463 or other need-to-know information.
1465 o Most protocols are application layer protocols transported over
1466 TCP; therefore it is easy to transport commands over non-standard
1467 ports or inject commands into authorized traffic flows.
1469 o Protocol commands may be available that are capable of
1470 broadcasting messages to many devices at once (i.e. a potential
1471 DoS).
1473 o Protocol commands may be available to query the device network to
1474 obtain defined points and their values (i.e. a configuration
1475 scan).
1477 o Protocol commands may be available that will list all available
1478 function codes (i.e. a function scan).
1480 o Bump-in-the-wire (BITW) solutions: a hardware device is added to
1481 provide IPsec services between two routers that are not capable of
1482 IPsec functions. This special IPsec device intercepts outgoing
1483 datagrams, adds IPsec protection to them, and strips it off
1484 incoming datagrams. BITW can add IPsec to legacy hosts and can
1485 retrofit non-IPsec routers to provide security benefits. The
1486 disadvantages are complexity and cost.
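To illustrate why the lack of authentication in these request-and-respond protocols matters, the following sketch builds a well-formed Modbus/TCP "Read Holding Registers" request; Modbus is used here only as a familiar example, and the point is that no credential or integrity field appears anywhere in the frame:

```python
import struct

def modbus_read_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    The MBAP header carries no authentication or integrity field, so any
    host that can reach TCP port 502 of a slave device can issue
    well-formed commands.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)   # function, addr, quantity
    mbap = struct.pack(">HHHB",
                       transaction_id,
                       0x0000,          # protocol identifier (always 0 for Modbus)
                       len(pdu) + 1,    # remaining length: unit id + PDU
                       unit_id)
    return mbap + pdu
```

Because any well-formed frame is accepted, forging a read (or, with function code 0x06, a write) is a matter of a dozen bytes; mitigations therefore have to come from the network layer (IPsec, TLS wrappers, segmentation) rather than from the protocol itself.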
1488 These inherent vulnerabilities, along with increasing connectivity
1489 between IT and OT networks, make network-based attacks very feasible.
1490 Simple injection of malicious protocol commands provides control over
1491 the target process. Altering legitimate protocol traffic can also
1492 alter information about a process and disrupt the legitimate controls
1493 that are in place over that process. A man-in-the-middle attack
1494 could provide both control over a process and misrepresentation of
1495 data back to operator consoles.
1497 3.4.2. Security Trends in Utility Networks
1499 Although advanced telecommunications networks can assist in
1500 transforming the energy industry, playing a critical role in
1501 maintaining high levels of reliability, performance, and
1502 manageability, they also introduce the need for an integrated
1503 security infrastructure. Many of the technologies being deployed to
1504 support smart grid projects such as smart meters and sensors can
1505 increase the vulnerability of the grid to attack. Top security
1506 concerns for utilities migrating to an intelligent smart grid
1507 telecommunications platform center on the following trends:
1509 o Integration of distributed energy resources
1511 o Proliferation of digital devices to enable management, automation,
1512 protection, and control
1514 o Regulatory mandates to comply with standards for critical
1515 infrastructure protection
1517 o Migration to new systems for outage management, distribution
1518 automation, condition-based maintenance, load forecasting, and
1519 smart metering
1521 o Demand for new levels of customer service and energy management
1523 This development of a diverse set of networks to support the
1524 integration of microgrids, open-access energy competition, and the
1525 use of network-controlled devices is driving the need for a converged
1526 security infrastructure for all participants in the smart grid,
1527 including utilities, energy service providers, large commercial and
1528 industrial, as well as residential customers. Securing the assets of
1529 electric power delivery systems, from the control center to the
1530 substation, to the feeders and down to customer meters, requires an
1531 end-to-end security infrastructure that protects the myriad of
1532 telecommunications assets used to operate, monitor, and control power
1533 flow and measurement. Cyber security refers to all the security
1534 issues in automation and telecommunications that affect any functions
1535 related to the operation of the electric power systems.
1536 Specifically, it involves the concepts of:
1538 o Integrity : data cannot be altered undetectably
1540 o Authenticity : the telecommunications parties involved must be
1541 validated as genuine
1543 o Authorization : only requests and commands from the authorized
1544 users can be accepted by the system
1546 o Confidentiality : data must not be accessible to any
1547 unauthenticated users
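Of these concepts, integrity and data origin authenticity can be illustrated with a keyed message authentication code; this is only a sketch of the principle using HMAC-SHA256 (not a recommendation of a specific mechanism for grid protocols), and the message content is invented:

```python
import hmac
import hashlib

def tag(key, message):
    """Produce an HMAC tag over a command message.

    The tag lets the receiver detect any alteration of the message
    (integrity) and verify that the sender holds the shared key
    (data origin authenticity). It does NOT provide confidentiality:
    the message itself is still readable on the wire.
    """
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key, message, received_tag):
    # Constant-time comparison avoids leaking tag bytes via timing.
    return hmac.compare_digest(tag(key, message), received_tag)
```

Authorization and confidentiality require additional mechanisms (access control policy and encryption respectively), which is why the text above describes them as separate concepts.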
1549 When designing and deploying new smart grid devices and
1550 telecommunications systems, it's imperative to understand the various
1551 impacts of these new components under a variety of attack situations
1552 on the power grid. Consequences of a cyber attack on the grid
1553 telecommunications network can be catastrophic. This is why security
1554 for smart grid is not just an ad hoc feature or product, it's a
1555 complete framework integrating both physical and Cyber security
1556 requirements and covering the entire smart grid networks from
1557 generation to distribution. Security has therefore become one of the
1558 main foundations of the utility telecom network architecture and must
1559 be considered at every layer with a defense-in-depth approach.
1560 Migrating to IP based protocols is key to address these challenges
1561 for two reasons:
1563 1. IP enables a rich set of features and capabilities to enhance the
1564 security posture
1566 2. IP is based on open standards, which allows interoperability
1567 between different vendors and products, driving down the costs
1568 associated with implementing security solutions in OT networks.
1570 Securing OT (Operational Technology) telecommunications over packet-
1571 switched IP networks follows the same principles that are foundational
1572 for securing the IT infrastructure, i.e., consideration must be given
1573 to enforcing electronic access control for both person-to-machine and
1574 machine-to-machine communications, and providing the appropriate
1575 levels of data privacy, device and platform integrity, and threat
1576 detection and mitigation.
1578 4. Building Automation Systems
1580 4.1. Use Case Description
1582 A Building Automation System (BAS) manages equipment and sensors in a
1583 building for improving residents' comfort, reducing energy
1584 consumption, and responding to failures and emergencies. For
1585 example, the BAS measures the temperature of a room using sensors and
1586 then controls the HVAC (heating, ventilating, and air conditioning)
1587 to maintain a set temperature and minimize energy consumption.
1589 A BAS primarily performs the following functions:
1591 o Periodically measures states of devices, for example humidity and
1592 illuminance of rooms, open/close state of doors, fan speed, etc.
1594 o Stores the measured data.
1596 o Provides the measured data to BAS systems and operators.
1598 o Generates alarms for abnormal state of devices.
1600 o Controls devices (e.g. turn off room lights at 10:00 PM).
1602 4.2. Building Automation Systems Today
1604 4.2.1. BAS Architecture
1606 A typical BAS architecture of today is shown in Figure 1.
1608 +----------------------------+
1609 | |
1610 | BMS HMI |
1611 | | | |
1612 | +----------------------+ |
1613 | | Management Network | |
1614 | +----------------------+ |
1615 | | | |
1616 | LC LC |
1617 | | | |
1618 | +----------------------+ |
1619 | | Field Network | |
1620 | +----------------------+ |
1621 | | | | | |
1622 | Dev Dev Dev Dev |
1623 | |
1624 +----------------------------+
1626 BMS := Building Management Server
1627 HMI := Human Machine Interface
1628 LC := Local Controller
1630 Figure 1: BAS architecture
1632 There are typically two layers of network in a BAS. The upper one is
1633 called the Management Network and the lower one is called the Field
1634 Network. In management networks an IP-based communication protocol
1635 is used, while in field networks non-IP based communication protocols
1636 ("field protocols") are mainly used. Field networks have specific
1637 timing requirements, whereas management networks can be best-effort.
1639 A Human Machine Interface (HMI) is typically a desktop PC used by
1640 operators to monitor and display device states, send device control
1641 commands to Local Controllers (LCs), and configure building schedules
1642 (for example "turn off all room lights in the building at 10:00 PM").
1644 A Building Management Server (BMS) performs the following operations.
1646 o Collect and store device states from LCs at regular intervals.
1648 o Send control values to LCs according to a building schedule.
1650 o Send an alarm signal to operators if it detects abnormal devices
1651 states.
1653 The BMS and HMI communicate with LCs via IP-based "management
1654 protocols" (see standards [bacnetip], [knx]).
1656 A LC is typically a Programmable Logic Controller (PLC) which is
1657 connected to several tens or hundreds of devices using "field
1658 protocols". An LC performs the following kinds of operations:
1660 o Measure device states and provide the information to BMS or HMI.
1662 o Send control values to devices, unilaterally or as part of a
1663 feedback control loop.
1665 There are many field protocols used today; some are standards-based
1666 and others are proprietary (see standards [lontalk], [modbus],
1667 [profibus] and [flnet]). The result is that BASs have multiple MAC/
1668 PHY modules and interfaces. This makes BASs more expensive, slower
1669 to develop, and can result in "vendor lock-in" with multiple types of
1670 management applications.
1672 4.2.2. BAS Deployment Model
1674 An example BAS for medium or large buildings is shown in Figure 2.
1675 The physical layout spans multiple floors, and there is a monitoring
1676 room where the BAS management entities are located. Each floor will
1677 have one or more LCs depending upon the number of devices connected
1678 to the field network.
1680 +--------------------------------------------------+
1681 | Floor 3 |
1682 | +----LC~~~~+~~~~~+~~~~~+ |
1683 | | | | | |
1684 | | Dev Dev Dev |
1685 | | |
1686 |--- | ------------------------------------------|
1687 | | Floor 2 |
1688 | +----LC~~~~+~~~~~+~~~~~+ Field Network |
1689 | | | | | |
1690 | | Dev Dev Dev |
1691 | | |
1692 |--- | ------------------------------------------|
1693 | | Floor 1 |
1694 | +----LC~~~~+~~~~~+~~~~~+ +-----------------|
1695 | | | | | | Monitoring Room |
1696 | | Dev Dev Dev | |
1697 | | | BMS HMI |
1698 | | Management Network | | | |
1699 | +--------------------------------+-----+ |
1700 | | |
1701 +--------------------------------------------------+
1703 Figure 2: BAS Deployment model for Medium/Large Buildings
1705 Each LC is connected to the monitoring room via the Management
1706 network, and the management functions are performed within the
1707 building. In most cases, fast Ethernet (e.g. 100BASE-T) is used for
1708 the management network. Since the management network is non-
1709 realtime, use of Ethernet without quality of service is sufficient
1710 for today's deployment.
1712 In the field network a variety of physical interfaces such as RS232C
1713 and RS485 are used, which have specific timing requirements. Thus if
1714 a field network is to be replaced with an Ethernet or wireless
1715 network, such networks must support time-critical deterministic
1716 flows.
1718 In Figure 3, another deployment model is presented in which the
1719 management system is hosted remotely. This is becoming popular for
1720 small office and residential buildings in which a standalone
1721 monitoring system is not cost-effective.
1723 +---------------+
1724 | Remote Center |
1725 | |
1726 | BMS HMI |
1727 +------------------------------------+ | | | |
1728 | Floor 2 | | +---+---+ |
1729 | +----LC~~~~+~~~~~+ Field Network| | | |
1730 | | | | | | Router |
1731 | | Dev Dev | +-------|-------+
1732 | | | |
1733 |--- | ------------------------------| |
1734 | | Floor 1 | |
1735 | +----LC~~~~+~~~~~+ | |
1736 | | | | | |
1737 | | Dev Dev | |
1738 | | | |
1739 | | Management Network | WAN |
1740 | +------------------------Router-------------+
1741 | |
1742 +------------------------------------+
1744 Figure 3: Deployment model for Small Buildings
1746 Some interoperability is possible today in the Management Network,
1747 but not in today's field networks due to their non-IP-based design.
1749 4.2.3. Use Cases for Field Networks
1751 Below are use cases for Environmental Monitoring, Fire Detection, and
1752 Feedback Control, and their implications for field network
1753 performance.
1755 4.2.3.1. Environmental Monitoring
1757 The BMS polls each LC at a maximum measurement interval of 100ms (for
1758 example to draw a historical chart of 1 second granularity with a 10x
1759 sampling interval) and then performs the operations as specified by
1760 the operator. Each LC needs to measure each of its several hundred
1761 sensors once per measurement interval. Latency is not critical in
1762 this scenario as long as all sensor values are completed in the
1763 measurement interval. Availability is expected to be 99.999 %.
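A back-of-the-envelope check of this polling budget, assuming the LC reads its sensors sequentially (an illustrative simplification; the function name and per-read timing are hypothetical):

```python
def polling_feasible(sensor_count, read_time_ms, interval_ms=100.0):
    """Check whether an LC can read all of its sensors within one
    measurement interval. Latency is uncritical here as long as every
    sensor value is collected before the interval ends."""
    return sensor_count * read_time_ms <= interval_ms
```

For example, 300 sensors at 0.3 ms per read consume 90 ms and fit the 100 ms interval, while 0.5 ms per read would not; this is the kind of budgeting that drives field-network timing requirements.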
1765 4.2.3.2. Fire Detection
1767 On detection of a fire, the BMS must stop the HVAC, close the fire
1768 shutters, turn on the fire sprinklers, send an alarm, etc. There are
1769 typically ~10s of sensors per LC that the BMS needs to manage. In this
1770 scenario the measurement interval is 10-50ms, the communication delay
1771 is 10ms, and the availability must be 99.9999 %.
1773 4.2.3.3. Feedback Control
1775 BAS systems utilize feedback control in various ways; the most time-
1776 critical is control of DC motors, which require a short feedback
1777 interval (1-5ms) with low communication delay (10ms) and jitter
1778 (1ms). The feedback interval depends on the characteristics of the
1779 device and a target quality of control value. There are typically
1780 ~10s of such devices per LC.
1782 Communication delay is expected to be less than 10 ms, jitter less
1783 than 1 ms, while the availability must be 99.9999%.
1785 4.2.4. Security Considerations
1787 When BAS field networks were developed it was assumed that the field
1788 networks would always be physically isolated from external networks
1789 and therefore security was not a concern. In today's world many BASs
1790 are managed remotely and are thus connected to shared IP networks and
1791 so security is definitely a concern, yet security features are not
1792 available in the majority of BAS field network deployments.
1794 The management network, being an IP-based network, has the protocols
1795 available to enable network security, but in practice many BAS
1796 systems do not implement even the available security features such as
1797 device authentication or encryption for data in transit.
1799 4.3. BAS Future
1801 In the future we expect more fine-grained environmental monitoring
1802 and lower energy consumption, which will require more sensors and
1803 devices, thus requiring larger and more complex building networks.
1805 We expect building networks to be connected to or converged with
1806 other networks (Enterprise network, Home network, and Internet).
1808 Therefore better facilities for network management, control,
1809 reliability and security are critical in order to improve resident
1810 and operator convenience and comfort. For example, the ability to
1811 monitor and control building devices via the Internet would enable
1812 control of room lights or HVAC from a resident's desktop PC or phone
1813 application.
1815 4.4. BAS Asks
1817 The community would like to see an interoperable protocol
1818 specification that can satisfy the timing, security, availability and
1819 QoS constraints described above, such that the resulting converged
1820 network can replace the disparate field networks. Ideally this
1821 connectivity could extend to the open Internet.
1823 This would imply an architecture that can guarantee
1825 o Low communication delays (from <10ms to 100ms in a network of
1826 several hundred devices)
1828 o Low jitter (< 1 ms)
1830 o Tight feedback intervals (1ms - 10ms)
1832 o High network availability (up to 99.9999%)
1834 o Availability of network data in disaster scenarios
1836 o Authentication between management and field devices (both local
1837 and remote)
1839 o Integrity and data origin authentication of communication data
1840 between field and management devices
1842 o Confidentiality of data when communicated to a remote device
1844 5. Wireless for Industrial Use Cases
1846 (This section was derived from draft-thubert-6tisch-4detnet-01)
1848 5.1. Introduction
1850 The emergence of wireless technology has enabled a variety of new
1851 devices to get interconnected, at a very low marginal cost per
1852 device, at any distance ranging from Near Field to interplanetary,
1853 and in circumstances where wiring may not be practical, for instance
1854 on fast-moving or rotating devices.
1856 At the same time, a new breed of Time Sensitive Networks is being
1857 developed to enable traffic that is highly sensitive to jitter, quite
1858 sensitive to latency, and with a high degree of operational
1859 criticality so that loss should be minimized at all times. Such
1860 traffic is not limited to professional Audio/Video networks, but is
1861 also found in command and control operations such as industrial
1862 automation and vehicular sensors and actuators.
1864 At IEEE802.1, the Audio/Video Task Group [IEEE802.1TSNTG] was renamed
1865 Time Sensitive Networking (TSN) to address Deterministic Ethernet. The
1866 Medium access Control (MAC) of IEEE802.15.4 [IEEE802154] has evolved
1867 with the new TimeSlotted Channel Hopping (TSCH) [RFC7554] mode for
1868 deterministic industrial-type applications. TSCH was introduced with
1869 the IEEE802.15.4e [IEEE802154e] amendment and will be wrapped up in
1870 the next revision of the IEEE802.15.4 standard. For all practical
1871 purposes, this document is expected to be insensitive to the future
1872 versions of the IEEE802.15.4 standard, which is thus referenced
1873 undated.
1875 Though at a different time scale, both TSN and TSCH standards provide
1876 Deterministic capabilities to the point that a packet that pertains
1877 to a certain flow crosses the network from node to node following a
1878 very precise schedule, as a train that leaves intermediate stations
1879 at precise times along its path. With TSCH, time is formatted into
1880 timeSlots, and an individual cell is allocated to unicast or
1881 broadcast communication at the MAC level. The time-slotted operation
1882 reduces collisions, saves energy, and makes it possible to engineer
1883 the network more closely for deterministic properties. The channel
1884 hopping aspect is a simple and efficient technique to combat multi-
1885 path fading and co-channel interference (for example from Wi-Fi
1886 emitters).
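The channel-hopping rule of TSCH maps a scheduled cell to a physical channel by indexing a hopping sequence with the Absolute Slot Number (ASN) plus the cell's channelOffset; a simplified sketch (the ascending sequence below is illustrative, since deployments typically use a pseudo-random ordering):

```python
def tsch_frequency(asn, channel_offset, hopping_sequence):
    """Return the physical channel used by a cell at Absolute Slot
    Number `asn` with the given channelOffset: the TSCH hopping rule
    indexes the hopping sequence with (ASN + channelOffset)."""
    return hopping_sequence[(asn + channel_offset) % len(hopping_sequence)]

# Simplified hopping sequence: the 16 IEEE802.15.4 2.4 GHz channels
# (11..26) in ascending order.
seq = list(range(11, 27))
```

Because the ASN increments every timeslot, successive uses of the same cell land on different channels, which is what combats multi-path fading and co-channel interference.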
1888 The 6TiSCH Architecture [I-D.ietf-6tisch-architecture] defines a
1889 remote monitoring and scheduling management of a TSCH network by a
1890 Path Computation Element (PCE), which cooperates with an abstract
1891 Network Management Entity (NME) to manage timeSlots and device
1892 resources in a manner that minimizes the interaction with and the
1893 load placed on the constrained devices.
1895 This Architecture applies the concepts of Deterministic Networking on
1896 a TSCH network to enable the switching of timeSlots in a G-MPLS
1897 manner. This document details the dependencies that 6TiSCH has on
1898 PCE [PCE] and DetNet [I-D.finn-detnet-architecture] to provide the
1899 necessary capabilities that may be specific to such networks. In
1900 turn, DetNet is expected to integrate and maintain consistency with
1901 the work that has taken place and is continuing at IEEE802.1TSN and
1902 AVnu.
1904 5.2. Terminology
1906 Readers are expected to be familiar with all the terms and concepts
1907 that are discussed in "Multi-link Subnet Support in IPv6"
1908 [I-D.ietf-ipv6-multilink-subnets].
1910 The draft uses terminology defined or referenced in
1911 [I-D.ietf-6tisch-terminology] and
1912 [I-D.ietf-roll-rpl-industrial-applicability].
1914 The draft also conforms to the terms and models described in
1915 [RFC3444] and uses the vocabulary and the concepts defined in
1916 [RFC4291] for the IPv6 Architecture.
1918 5.3. 6TiSCH Overview
1920 The scope of the present work is a subnet that, in its basic
1921 configuration, is made of a TSCH [RFC7554] MAC Low-Power and Lossy
1922 Network (LLN).
1924 ---+-------- ............ ------------
1925 | External Network |
1926 | +-----+
1927 +-----+ | NME |
1928 | | LLN Border | |
1929 | | router +-----+
1930 +-----+
1931 o o o
1932 o o o o
1933 o o LLN o o o
1934 o o o o
1935 o
1937 Figure 4: Basic Configuration of a 6TiSCH Network
1939 In the extended configuration, a Backbone Router (6BBR) federates
1940 multiple 6TiSCH LLNs into a single subnet over a backbone. 6BBRs
1941 synchronize with one another over the backbone, so as to ensure that
1942 the multiple LLNs that form the IPv6 subnet stay tightly
1943 synchronized.
1945 ---+-------- ............ ------------
1946 | External Network |
1947 | +-----+
1948 | +-----+ | NME |
1949 +-----+ | +-----+ | |
1950 | | Router | | PCE | +-----+
1951 | | +--| |
1952 +-----+ +-----+
1953 | |
1954 | Subnet Backbone |
1955 +--------------------+------------------+
1956 | | |
1957 +-----+ +-----+ +-----+
1958 | | Backbone | | Backbone | | Backbone
1959 o | | router | | router | | router
1960 +-----+ +-----+ +-----+
1961 o o o o o
1962 o o o o o o o o o o o
1963 o o o LLN o o o o
1964 o o o o o o o o o o o o
1966 Figure 5: Extended Configuration of a 6TiSCH Network
1968 If the Backbone is Deterministic, then the Backbone Router ensures
1969 that the end-to-end deterministic behavior is maintained between the
1970 LLN and the backbone. This SHOULD be done in conformance to the
1971 DetNet Architecture [I-D.finn-detnet-architecture] which studies
1972 Layer-3 aspects of Deterministic Networks, and covers networks that
1973 span multiple Layer-2 domains. One particular requirement is that
1974 the PCE MUST be able to compute a deterministic path end-to-end
1975 across the TSCH network and an IEEE802.1 TSN Ethernet backbone, and
1976 DetNet MUST enable end-to-end deterministic forwarding.
1978 6TiSCH defines the concept of a Track, which is a complex form of a
1979 uni-directional Circuit ([I-D.ietf-6tisch-terminology]). As opposed
1980 to a simple circuit that is a sequence of nodes and links, a Track is
1981 shaped as a directed acyclic graph towards a destination to support
1982 multi-path forwarding and to route around failures. A Track may also
1983 branch off and rejoin, for the purpose of the so-called Packet
1984 Replication and Elimination (PRE), over non-congruent branches. PRE
1985 may be used to complement layer-2 Automatic Repeat reQuest (ARQ) to
1986 meet industrial expectations in Packet Delivery Ratio (PDR), in
1987 particular when the Track extends beyond the 6TiSCH network.
1989 +-----+
1990 | IoT |
1991 | G/W |
1992 +-----+
1993 ^ <---- Elimination
1994 | |
1995 Track branch | |
1996 +-------+ +--------+ Subnet Backbone
1997 | |
1998 +--|--+ +--|--+
1999 | | | Backbone | | | Backbone
2000 o | | | router | | | router
2001 +--/--+ +--|--+
2002 o / o o---o----/ o
2003 o o---o--/ o o o o o
2004 o \ / o o LLN o
2005 o v <---- Replication
2006 o
2008 Figure 6: End-to-End deterministic Track
2010 In the example above, a Track is laid out from a field device in a
2011 6TiSCH network to an IoT gateway that is located on an IEEE802.1 TSN
2012 backbone.
2014 The Replication function in the field device sends a copy of each
2015 packet over two different branches, and the PCE schedules each hop of
2016 both branches so that the two copies arrive in due time at the
2017 gateway. In case of a loss on one branch, the other copy of the
2018 packet may still make it in due time. If two copies make it to
2019 the IoT gateway, the Elimination function in the gateway ignores the
2020 extra packet and presents only one copy to upper layers.
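The Elimination function above can be sketched as follows. This is a hypothetical illustration; 6TiSCH does not mandate how the two copies of a packet are correlated, so the `packet_id` key is an assumption for the example.

```python
# Minimal sketch of the Elimination function at the IoT gateway:
# Replication sends a copy of each packet over two branches, and
# Elimination delivers the first copy that arrives while ignoring
# any duplicate that arrives later.

def make_eliminator():
    seen = set()
    def deliver(packet_id, payload):
        """Return payload for the first copy of a packet, None for duplicates."""
        if packet_id in seen:
            return None          # duplicate from the other branch: ignore
        seen.add(packet_id)
        return payload           # first copy: present to upper layers
    return deliver

deliver = make_eliminator()
```

In a real gateway, the `seen` state would be bounded (e.g. a sliding window of recent identifiers) rather than growing without limit.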
2022 At each 6TiSCH hop along the Track, the PCE may schedule more than
2023 one timeSlot for a packet, so as to support Layer-2 retries (ARQ).
2024 It is also possible that the field device only uses the second branch
2025 if sending over the first branch fails.
2027 In current deployments, a TSCH Track does not necessarily support PRE
2028 but is systematically multi-path. This means that a Track is
2029 scheduled so as to ensure that each hop has at least two forwarding
2030 solutions, and the forwarding decision is to try the preferred one
2031 and use the other in case of Layer-2 transmission failure as detected
2032 by ARQ.
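The per-hop decision described above, trying the preferred forwarding solution and falling back on Layer-2 transmission failure, can be sketched as follows. The helper and its `transmit` callback are hypothetical names for illustration.

```python
# Sketch of the per-hop forwarding decision on a multi-path Track:
# try the preferred next hop first, and use the alternate only if the
# Layer-2 transmission fails (no ARQ acknowledgment).

def forward(frame, preferred, alternate, transmit):
    """transmit(frame, hop) returns True if the hop acknowledged the frame."""
    if transmit(frame, preferred):
        return preferred
    if transmit(frame, alternate):
        return alternate
    return None  # both transmissions failed; the frame is lost
```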
2034 5.3.1. TSCH and 6top
2036 6top is a logical link control sitting between the IP layer and the
2037 TSCH MAC layer, which provides the link abstraction that is required
2038 for IP operations. The 6top operations are specified in
2039 [I-D.wang-6tisch-6top-sublayer].
2041 The 6top data model and management interfaces are further discussed
2042 in [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].
2044 The architecture defines "soft" cells and "hard" cells. "Hard" cells
2045 are owned and managed by a separate scheduling entity (e.g. a PCE)
2046 that specifies the slotOffset/channelOffset of the cells to be
2047 added/moved/deleted, in which case 6top can only act as instructed,
2048 and may not move hard cells in the TSCH schedule on its own.
2050 5.3.2. SlotFrames and Priorities
2052 A slotFrame is the base object that the PCE needs to manipulate to
2053 program a schedule into an LLN node. Elaboration on that concept can
2054 be found in section "SlotFrames and Priorities" of the 6TiSCH
2055 architecture [I-D.ietf-6tisch-architecture]. The architecture also
2056 details how the schedule is constructed and how transmission
2057 resources called cells can be allocated to particular transmissions
2058 so as to avoid collisions.
2060 5.3.3. Schedule Management by a PCE
2062 6TiSCH supports a mixed model of centralized routes and distributed
2063 routes. Centralized routes can for example be computed by an entity
2064 such as a PCE. Distributed routes are computed by RPL.
2066 Both methods may inject routes in the Routing Tables of the 6TiSCH
2067 routers. In either case, each route is associated with a 6TiSCH
2068 topology that can be an RPL Instance topology or a Track. The 6TiSCH
2069 topology is indexed by an Instance ID, in a format that reuses the
2070 RPLInstanceID as defined in RPL [RFC6550].
2072 Both RPL and PCE rely on shared sources such as policies to define
2073 Global and Local RPLInstanceIDs that can be used by either method.
2074 It is possible for centralized and distributed routing to share a
2075 same topology. Generally they will operate in different slotFrames,
2076 and centralized routes will be used for scheduled traffic and will
2077 have precedence over distributed routes in case of conflict between
2078 the slotFrames.
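The precedence rule above can be sketched as a small resolver. This is a hypothetical illustration of the stated policy (centralized, scheduled routes win over distributed routes in case of conflict), not an implementation defined by 6TiSCH.

```python
# Sketch of the route-precedence rule: when a centralized (PCE) route
# and a distributed (RPL) route conflict between slotFrames, the
# centralized route for scheduled traffic takes precedence.

def resolve(routes):
    """routes: list of (origin, route) tuples, origin 'pce' or 'rpl'."""
    pce = [r for origin, r in routes if origin == "pce"]
    if pce:
        return pce[0]                       # centralized route wins
    return routes[0][1] if routes else None # otherwise first distributed route
```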
2080 Section "Schedule Management Mechanisms" of the 6TiSCH architecture
2081 describes four paradigms to manage the TSCH schedule of the LLN nodes:
2083 Static Scheduling, neighbor-to-neighbor Scheduling, remote monitoring
2084 and scheduling management, and Hop-by-hop scheduling. The Track
2085 operation for DetNet corresponds to a remote monitoring and
2086 scheduling management by a PCE.
2088 The 6top interface document [I-D.ietf-6tisch-6top-interface]
2089 specifies the generic data model that can be used to monitor and
2090 manage resources of the 6top sublayer. Abstract methods are
2091 suggested for use by a management entity in the device. The data
2092 model also enables remote control operations on the 6top sublayer.
2094 [I-D.ietf-6tisch-coap] defines a mapping of the 6top set of
2095 commands, which is described in [I-D.ietf-6tisch-6top-interface], to
2096 CoAP resources. This allows an entity to interact with the 6top
2097 layer of a node that is multiple hops away in a RESTful fashion.
2099 [I-D.ietf-6tisch-coap] also defines a basic set of CoAP resources
2100 associated RESTful access methods (GET/PUT/POST/DELETE). The payload
2101 (body) of the CoAP messages is encoded using the CBOR format. The
2102 PCE commands are expected to be issued directly as CoAP requests or
2103 to be mapped back and forth into CoAP by a gateway function at the
2104 edge of the 6TiSCH network. For instance, it is possible that a
2105 mapping entity on the backbone transforms a non-CoAP protocol such as
2106 PCEP into the RESTful interfaces that the 6TiSCH devices support.
2107 This architecture will be refined to comply with DetNet
2108 [I-D.finn-detnet-architecture] when the work is formalized.
2110 5.3.4. Track Forwarding
2112 By forwarding, this specification means the per-packet operation
2113 that delivers a packet to a next hop or to an upper layer in this
2114 node. Forwarding is based on pre-existing state that was installed
2115 as a result of the routing computation of a Track by a PCE. The
2116 6TiSCH architecture supports three different forwarding models:
2117 G-MPLS Track Forwarding (TF), 6LoWPAN Fragment Forwarding (FF), and
2118 IPv6 Forwarding (6F), which is the classical IP operation. The
2119 DetNet case relates to Track Forwarding under the control of a PCE.
2121 A Track is a unidirectional path between a source and a destination.
2122 In a Track cell, the normal operation of IEEE802.15.4 Automatic
2123 Repeat-reQuest (ARQ) usually happens, though the acknowledgment may
2124 be omitted in some cases, for instance if there is no scheduled cell
2125 for a retry.
2127 Track Forwarding is the simplest and fastest. A bundle of cells set
2128 to receive (RX-cells) is uniquely paired to a bundle of cells that
2129 are set to transmit (TX-cells), representing a layer-2 forwarding
2130 state that can be used regardless of the network layer protocol.
2132 This model can effectively be seen as a Generalized Multi-protocol
2133 Label Switching (G-MPLS) operation in that the information used to
2134 switch a frame is not an explicit label, but rather related to other
2135 properties of the way the packet was received, a particular cell in
2136 the case of 6TiSCH. As a result, as long as the TSCH MAC (and
2137 Layer-2 security) accepts a frame, that frame can be switched
2138 regardless of the protocol, whether this is an IPv6 packet, a 6LoWPAN
2139 fragment, or a frame from an alternate protocol such as WirelessHART
2140 or ISA100.11a.
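The pairing of receive and transmit bundles can be sketched as a switching table. The data structure and cell values below are hypothetical; the point is that the "label" is implicit in which RX-cell the frame arrived on.

```python
# Sketch of the G-MPLS-like switching state: the RX-cell a frame was
# received on selects the paired TX-bundle, regardless of whether the
# payload is IPv6, a 6LoWPAN fragment, or an alternate protocol.

# (slotOffset, channelOffset) of the RX-cell -> list of TX-cells
switch_table = {
    (3, 1): [(7, 4), (9, 4)],   # Track 1: paired transmit bundle
    (5, 2): [(11, 0)],          # Track 2
}

def switch(rx_cell):
    """Return the TX-bundle for a frame received on rx_cell, or None if
    the cell is not bound to a Track (frame goes to the upper layer)."""
    return switch_table.get(rx_cell)
```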
2142 A data frame that is forwarded along a Track normally has a
2143 destination MAC address that is set to broadcast, or to a multicast
2144 address depending on MAC support. This way, the MAC layer in the
2145 intermediate nodes accepts the incoming frame and 6top switches it
2146 without incurring a change in the MAC header. In the case of
2147 IEEE802.15.4, this means effectively broadcast, so that along the
2148 Track the short address for the destination of the frame is set to
2149 0xFFFF.
2151 A Track is thus formed end-to-end as a succession of paired bundles,
2152 a receive bundle from the previous hop and a transmit bundle to the
2153 next hop along the Track, and a cell in such a bundle belongs to at
2154 most one Track. For a given iteration of the device schedule, the
2155 effective channel of the cell is obtained by adding a pseudo-random
2156 number to the channelOffset of the cell, which results in a rotation
2157 of the frequency that is used for transmission. The bundles may be
2158 computed so as to accommodate both variable rates and
2159 retransmissions, so they might not be fully used at a given iteration
2160 of the schedule. The 6TiSCH architecture provides additional means
2161 to avoid waste of cells as well as overflows in the transmit bundle,
2162 as follows:
2164 On one hand, a TX-cell that is not needed for the current iteration
2165 may be reused opportunistically on a per-hop basis for routed
2166 packets. When all of the frames that were received for a given Track
2167 are effectively transmitted, any available TX-cell for that Track can
2168 be reused for upper layer traffic for which the next-hop router
2169 matches the next hop along the Track. In that case, the cell that is
2170 being used is effectively a TX-cell from the Track, but the short
2171 address for the destination is that of the next-hop router. As a
2172 result, a frame that is received in an RX-cell of a Track with a
2173 destination MAC address set to this node as opposed to broadcast must
2174 be extracted from the Track and delivered to the upper layer (a frame
2175 with an unrecognized MAC address is dropped at the lower MAC layer
2176 and thus is not received at the 6top sublayer).
2178 On the other hand, it might happen that there are not enough TX-cells
2179 in the transmit bundle to accommodate the Track traffic, for instance
2180 if more retransmissions are needed than provisioned. In that case,
2181 the frame can be placed for transmission in the bundle that is used
2182 for layer-3 traffic towards the next hop along the track as long as
2183 it can be routed by the upper layer, that is, typically, if the frame
2184 transports an IPv6 packet. The MAC address should be set to the
2185 next-hop MAC address to avoid confusion. As a result, a frame that
2186 is received over a layer-3 bundle may in fact be associated with a
2187 Track. On a classical IP link such as Ethernet, off-track traffic
2188 in excess of the reservation is typically routed along the non-
2189 reserved path based on its QoS setting. But with 6TiSCH, since the
2190 use of the layer-3 bundle may be due to transmission failures, it
2191 makes sense for the receiver to recognize a frame that should be re-
2192 tracked, and to place it back on the appropriate bundle if possible.
2193 A frame should be re-tracked if the Per-Hop-Behavior group indicated
2194 in the Differentiated Services Field in the IPv6 header is set to
2195 Deterministic Forwarding, as discussed in Section 5.4.1. A frame is
2196 re-tracked by scheduling it for transmission over the transmit bundle
2197 associated to the Track, with the destination MAC address set to
2198 broadcast.
2200 There are 2 modes for a Track, transport mode and tunnel mode.
2202 5.3.4.1. Transport Mode
2204 In transport mode, the Protocol Data Unit (PDU) is associated with
2205 flow-dependent meta-data that refers uniquely to the Track, so the
2206 6top sublayer can place the frame in the appropriate cell without
2207 ambiguity. In the case of IPv6 traffic, this flow identification is
2208 transported in the Flow Label of the IPv6 header. Associated with
2209 the source IPv6 address, the Flow Label forms a globally unique
2210 identifier for that particular Track that is validated at egress
2211 before restoring the destination MAC address (DMAC) and punting to
2212 the upper layer.
2214 | ^
2215 +--------------+ | |
2216 | IPv6 | | |
2217 +--------------+ | |
2218 | 6LoWPAN HC | | |
2219 +--------------+ ingress egress
2220 | 6top | sets +----+ +----+ restores
2221 +--------------+ dmac to | | | | dmac to
2222 | TSCH MAC | brdcst | | | | self
2223 +--------------+ | | | | | |
2224 | LLN PHY | +-------+ +--...-----+ +-------+
2225 +--------------+
2227 Track Forwarding, Transport Mode
2229 5.3.4.2. Tunnel Mode
2231 In tunnel mode, the frames originate from an arbitrary protocol over
2232 a compatible MAC that may or may not be synchronized with the 6TiSCH
2233 network. An example of this would be a router with a dual radio that
2234 is capable of receiving and sending WirelessHART or ISA100.11a frames
2235 with the second radio, by presenting itself as an Access Point or a
2236 Backbone Router, respectively.
2238 In that mode, some entity (e.g. PCE) can coordinate with a
2239 WirelessHART Network Manager or an ISA100.11a System Manager to
2240 specify the flows that are to be transported transparently over the
2241 Track.
2243 +--------------+
2244 | IPv6 |
2245 +--------------+
2246 | 6LoWPAN HC |
2247 +--------------+ set restore
2248 | 6top | +dmac+ +dmac+
2249 +--------------+ to|brdcst to|nexthop
2250 | TSCH MAC | | | | |
2251 +--------------+ | | | |
2252 | LLN PHY | +-------+ +--...-----+ +-------+
2253 +--------------+ | ingress egress |
2254 | |
2255 +--------------+ | |
2256 | LLN PHY | | |
2257 +--------------+ | |
2258 | TSCH MAC | | |
2259 +--------------+ | dmac = | dmac =
2260 |ISA100/WiHART | | nexthop v nexthop
2261 +--------------+
2263 Figure 7: Track Forwarding, Tunnel Mode
2265 In that case, the flow information that identifies the Track at the
2266 ingress 6TiSCH router is derived from the RX-cell. The dmac is set
2267 to this node but the flow information indicates that the frame must
2268 be tunneled over a particular Track so the frame is not passed to the
2269 upper layer. Instead, the dmac is forced to broadcast and the frame
2270 is passed to the 6top sublayer for switching.
2272 At the egress 6TiSCH router, the reverse operation occurs. Based on
2273 metadata associated to the Track, the frame is passed to the
2274 appropriate link layer with the destination MAC restored.
2276 5.3.4.3. Tunnel Metadata
2278 Metadata coming with the Track configuration is expected to provide
2279 the destination MAC address of the egress endpoint as well as the
2280 tunnel mode and specific data depending on the mode, for instance a
2281 service access point for frame delivery at egress. If the tunnel
2282 egress point does not have a MAC address that matches the
2283 configuration, the Track installation fails.
2285 In transport mode, if the final layer-3 destination is the tunnel
2286 termination, then it is possible that the IPv6 address of the
2287 destination is compressed at the 6LoWPAN sublayer based on the MAC
2288 address. It is thus mandatory at the ingress point to validate that
2289 the MAC address that was used at the 6LoWPAN sublayer for compression
2290 matches that of the tunnel egress point. For that reason, the node
2291 that injects a packet on a Track checks that the destination is
2292 effectively that of the tunnel egress point before it overwrites it
2293 to broadcast. The 6top sublayer at the tunnel egress point reverts
2294 that operation to the MAC address obtained from the tunnel metadata.
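The ingress-side validation described above can be sketched as follows. The helper and metadata field names are hypothetical; the behavior it illustrates is the check that the destination MAC matches the tunnel egress point before it is overwritten to broadcast.

```python
# Sketch of the ingress check before injecting a packet on a Track:
# verify that the frame's destination MAC matches the tunnel egress
# point carried in the Track metadata, then overwrite it to broadcast
# so 6top can switch it along the Track.

BROADCAST = 0xFFFF

def inject_on_track(frame_dmac, track_metadata):
    """Return the MAC to place on the frame, or raise if validation fails."""
    if frame_dmac != track_metadata["egress_mac"]:
        raise ValueError("destination does not match tunnel egress point")
    return BROADCAST  # the egress 6top sublayer restores the real MAC
```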
2296 5.4. Operations of Interest for DetNet and PCE
2298 In a classical system, the 6TiSCH device does not place the request
2299 for bandwidth between itself and another device in the network.
2300 Rather, an Operation Control System invoked through a Human/Machine
2301 Interface (HMI) indicates the Traffic Specification, in particular in
2302 terms of latency and reliability, and the end nodes. With this, the
2303 PCE must compute a Track between the end nodes and provision the
2304 network with per-flow state that describes the per-hop operation for
2305 a given packet, the corresponding timeSlots, and the flow
2306 identification that makes it possible to recognize when a certain
2307 packet belongs to a certain Track, to sort out duplicates, etc.
2309 For a static configuration that serves a certain purpose for a long
2310 period of time, it is expected that a node will be provisioned in one
2311 shot with a full schedule, which incorporates the aggregation of its
2312 behavior for multiple Tracks. 6TiSCH expects that the programming of
2313 the schedule will be done over CoAP as discussed in 6TiSCH Resource
2314 Management and Interaction using CoAP [I-D.ietf-6tisch-coap].
2316 But a hybrid mode may be required as well, whereby a single Track is
2317 added, modified, or removed, for instance if it appears that a Track
2318 does not perform as expected for, say, PDR. For that case, the
2319 expectation is that a protocol that flows along a Track (to be), in a
2320 fashion similar to classical Traffic Engineering (TE) [CCAMP], may be
2321 used to update the state in the devices. 6TiSCH provides means for a
2322 device to negotiate a timeSlot with a neighbor, but in general that
2323 flow was not designed and no protocol was selected; it is expected
2324 that DetNet will determine the appropriate end-to-end protocols to be
2325 used in that case.
2327 Operational System and HMI
2329 -+-+-+-+-+-+-+ Northbound -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
2331 PCE PCE PCE PCE
2333 -+-+-+-+-+-+-+ Southbound -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
2335 --- 6TiSCH------6TiSCH------6TiSCH------6TiSCH--
2336 6TiSCH / Device Device Device Device \
2337 Device- - 6TiSCH
2338 \ 6TiSCH 6TiSCH 6TiSCH 6TiSCH / Device
2339 ----Device------Device------Device------Device--
2341 Figure 8: Stream Management Entity
2343 5.4.1. Packet Marking and Handling
2345 Section "Packet Marking and Handling" of
2346 [I-D.ietf-6tisch-architecture] describes the packet tagging and
2347 marking that is expected in 6TiSCH networks.
2349 5.4.1.1. Tagging Packets for Flow Identification
2351 For packets that are routed by a PCE along a Track, the tuple formed
2352 by the IPv6 source address and a local RPLInstanceID is tagged in the
2353 packets to uniquely identify the Track and associated transmit bundle
2354 of timeSlots.
2356 As a result, the tagging that is used for a DetNet flow outside
2357 the 6TiSCH LLN MUST be swapped into 6TiSCH formats and back as the
2358 packet enters and then leaves the 6TiSCH network.
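The classification described above can be sketched as a lookup keyed on the tagging tuple. The table contents and helper name are hypothetical examples, not provisioned state from any real deployment.

```python
# Sketch of Track classification: the tuple (IPv6 source address,
# local RPLInstanceID) carried in the packet uniquely selects the
# Track and its associated transmit bundle of timeSlots.

tracks = {
    ("2001:db8::1", 5): "track-A",   # example entries for illustration
    ("2001:db8::2", 5): "track-B",
}

def classify(src_addr, rpl_instance_id):
    """Return the Track for a tagged packet, or None for best-effort."""
    return tracks.get((src_addr, rpl_instance_id))
```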
2360 Note: The method and format used for encoding the RPLInstanceID at
2361 6lo is generalized to all 6TiSCH topological Instances, which
2362 includes Tracks.
2364 5.4.1.2. Replication, Retries and Elimination
2366 6TiSCH expects elimination and replication of packets along a complex
2367 Track, but takes no position on how the sequence numbers would be
2368 tagged in the packet.
2370 In practice, 6TiSCH expects that timeSlots corresponding to copies of
2371 a same packet along a Track are correlated by configuration, and does
2372 not need to process the sequence numbers.
2374 The semantics of the configuration MUST enable correlated timeSlots
2375 to be grouped for transmit (and respectively receive) with an 'OR'
2376 relation, and then an 'AND' relation MUST be configurable between
2377 groups. The semantics is that if the transmit (and respectively
2378 receive) operation succeeded in one timeSlot in an 'OR' group, then
2379 all the other timeSlots in the group are ignored. Now, if there are
2380 at least two groups, the 'AND' relation between the groups indicates
2381 that one operation must succeed in each of the groups.
2383 On the transmit side, timeSlots provisioned for retries along a same
2384 branch of a Track are placed in a same 'OR' group. The 'OR' relation
2385 indicates that if a transmission is acknowledged, then further
2386 transmissions SHOULD NOT be attempted for timeSlots in that group.
2387 There are as many 'OR' groups as there are branches of the Track
2388 departing from this node. Different 'OR' groups are programmed for
2389 the purpose of replication, each group corresponding to one branch of
2390 the Track. The 'AND' relation between the groups indicates that
2391 transmission over any of the branches MUST be attempted regardless of
2392 whether a transmission succeeded in another branch. It is also
2393 possible to place cells to different next-hop routers in a same 'OR'
2394 group. This allows routing along multi-path Tracks, trying one
2395 next-hop and then another only if sending to the first fails.
2397 On the receive side, all timeSlots are programmed in a same 'OR'
2398 group. Retries of a same copy as well as converging branches for
2399 elimination are combined, meaning that the first successful
2400 reception is enough and that all the other timeSlots can be ignored.
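The 'OR'/'AND' grouping semantics above reduce to a simple evaluation rule, sketched below. The function name is a hypothetical illustration of the configured semantics, not a 6TiSCH-defined interface.

```python
# Sketch of the 'OR'/'AND' grouping semantics: within an 'OR' group a
# single successful timeSlot suffices; across groups ('AND'), every
# group must have at least one success.

def track_hop_succeeded(groups):
    """groups: list of 'OR' groups, each a list of per-timeSlot booleans
    (True = transmission acknowledged, respectively reception succeeded)."""
    return all(any(group) for group in groups)
```

For replication over two branches, each branch is one 'OR' group of retries, and the 'AND' relation requires a success in both groups at the replicating node.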
2402 5.4.1.3. Differentiated Services Per-Hop-Behavior
2404 Additionally, an IP packet that is sent along a Track uses the
2405 Differentiated Services Per-Hop-Behavior Group called Deterministic
2406 Forwarding, as described in
2407 [I-D.svshah-tsvwg-deterministic-forwarding].
2409 5.4.2. Topology and capabilities
2411 6TiSCH nodes are usually IoT devices, characterized by a very limited
2412 amount of memory, just enough buffers to store one or a few IPv6
2413 packets, and limited bandwidth between peers. As a result, a node
2414 will maintain only a small amount of peering information, and will
2415 not be able to store many packets waiting to be forwarded. Peers can
2416 be identified through MAC or IPv6 addresses, but a Cryptographically
2417 Generated Address [RFC3972] (CGA) may also be used.
2419 Neighbors can be discovered over the radio using mechanisms such as
2420 beacons, but, though the neighbor information is available in the
2421 6TiSCH interface data model, 6TiSCH does not describe a protocol to
2422 pro-actively push the neighborhood information to a PCE. This
2423 protocol should be described and should operate over CoAP. The
2424 protocol should be able to carry multiple metrics, in particular the
2425 same metrics as used for RPL operations [RFC6551].
2427 The energy that the device consumes in sleep, transmit and receive
2428 modes can be evaluated and reported. So can the amount of energy
2429 that is stored in the device and the power that can be scavenged
2430 from the environment. The PCE SHOULD be able to compute Tracks that
2431 will implement policies on how the energy is consumed, for instance
2432 balance between nodes, ensure that the spent energy does not exceed
2433 the scavenged energy over a period of time, etc.
2435 5.5. Security Considerations
2437 On top of the classical protection of control signaling that can be
2438 expected to support DetNet, it must be noted that 6TiSCH networks
2439 operate on limited resources that can be depleted rapidly if an
2440 attacker manages to mount a DoS attack on the system, for instance
2441 by placing a rogue device in the network, or by obtaining management
2442 control and setting up extra paths.
2444 6. Cellular Radio Use Cases
2446 6.1. Use Case Description
2448 This use case describes the application of deterministic networking
2449 in the context of cellular telecom transport networks. Important
2450 elements include time synchronization, clock distribution, and ways
2451 of establishing time-sensitive streams for both Layer-2 and Layer-3
2452 user plane traffic.
2454 6.1.1. Network Architecture
2456 Figure 9 illustrates a typical 3GPP-defined cellular network
2457 architecture, which includes "Fronthaul" and "Midhaul" network
2458 segments. The "Fronthaul" is the network connecting base stations
2459 (baseband processing units) to the remote radio heads (antennas).
2460 The "Midhaul" is the network inter-connecting base stations (or small
2461 cell sites).
2463 In Figure 9, "eNB" ("E-UTRAN Node B") is the hardware that is
2464 connected to the mobile phone network and that communicates directly
2465 with mobile handsets ([TS36300]).
2467 Y (remote radio heads (antennas))
2468 \
2469 Y__ \.--. .--. +------+
2470 \_( `. +---+ _(Back`. | 3GPP |
2471 Y------( Front )----|eNB|----( Haul )----| core |
2472 ( ` .Haul ) +---+ ( ` . ) ) | netw |
2473 /`--(___.-' \ `--(___.-' +------+
2474 Y_/ / \.--. \
2475 Y_/ _( Mid`. \
2476 ( Haul ) \
2477 ( ` . ) ) \
2478 `--(___.-'\_____+---+ (small cell sites)
2479 \ |SCe|__Y
2480 +---+ +---+
2481 Y__|eNB|__Y
2482 +---+
2483 Y_/ \_Y ("local" radios)
2485 Figure 9: Generic 3GPP-based Cellular Network Architecture
2487 The available processing time for Fronthaul networking overhead is
2488 limited to the available time after the baseband processing of the
2489 radio frame has completed. For example, in Long Term Evolution (LTE)
2490 radio, processing of a radio frame is allocated 3ms, but typically
2491 the processing completes much earlier (<400us) allowing the remaining
2492 time to be used by the Fronthaul network. This ultimately determines
2493 the distance the remote radio heads can be located from the base
2494 stations (200us equals roughly 40 km of optical fiber-based
2495 transport, thus round trip time is 2*200us = 400us).
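The distance figure above follows from the propagation delay of light in fiber, roughly 5 us of one-way delay per km (light travels at about two thirds of c in glass). A back-of-the-envelope check:

```python
# Back-of-the-envelope check of the fronthaul distance figure:
# one-way fiber propagation delay is roughly 5 us per km.

US_PER_KM = 5.0  # assumed one-way propagation delay in fiber

def max_fiber_distance_km(one_way_budget_us):
    """Maximum remote-radio-head distance for a one-way delay budget."""
    return one_way_budget_us / US_PER_KM

# A one-way budget of 200 us allows about 40 km of fiber;
# the round trip then consumes 2 * 200 us = 400 us.
```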
2497 The remainder of the "maximum delay budget" is consumed by all nodes
2498 and buffering between the remote radio head and the baseband
2499 processing, plus the distance-incurred delay.
2501 The baseband processing time and the available "delay budget" for the
2502 fronthaul is likely to change in the forthcoming "5G" due to reduced
2503 radio round trip times and other architectural and service
2504 requirements [NGMN].
2506 6.1.2. Time Synchronization Requirements
2508 Fronthaul time synchronization requirements are given by [TS25104],
2509 [TS36104], [TS36211], and [TS36133]. These can be summarized for the
2510 current 3GPP LTE-based networks as:
2512 Delay Accuracy:
2513 +-8ns (i.e. +-1/32 Tc, where Tc is the UMTS Chip time of 1/3.84
2514 MHz) resulting in a round trip accuracy of +-16ns. The value is
2515 this low to meet the 3GPP Timing Alignment Error (TAE) measurement
2516 requirements.
2518 Packet Delay Variation:
2519 Packet Delay Variation (PDV aka Jitter aka Timing Alignment Error)
2520 is problematic to Fronthaul networks and must be minimized. If
2521 the transport network cannot guarantee low enough PDV then
2522 additional buffering has to be introduced at the edges of the
2523 network to buffer out the jitter. Buffering is not desirable as
2524 it reduces the total available delay budget.
2526 * For multiple input multiple output (MIMO) or TX diversity
2527 transmissions, at each carrier frequency, TAE shall not exceed
2528 65 ns (i.e. 1/4 Tc).
2530 * For intra-band contiguous carrier aggregation, with or without
2531 MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2
2532 Tc).
2534 * For intra-band non-contiguous carrier aggregation, with or
2535 without MIMO or TX diversity, TAE shall not exceed 260 ns (i.e.
2536 one Tc).
2538 * For inter-band carrier aggregation, with or without MIMO or TX
2539 diversity, TAE shall not exceed 260 ns.
2541 Transport link contribution to radio frequency error:
2542 +-2 PPB. This value is considered to be "available" for the
2543 Fronthaul link out of the total 50 PPB budget reserved for the
2544 radio interface. Note: the reason that the transport link
2545 contributes to radio frequency error is as follows. The current
2546 way of doing Fronthaul is from the radio unit directly to the
2547 remote radio head. The remote radio head is essentially a passive
2548 device (without buffering, etc.). The transport drives the antenna
2549 directly by feeding it with samples, and everything the transport
2550 adds will be introduced to the radio as-is. So if the transport
2551 causes additional frequency error, that error shows immediately on
2552 the radio as well.
2554 The above listed time synchronization requirements are difficult to
2555 meet with point-to-point connected networks, and even harder when
2556 the network includes multiple hops. It is expected that networks
2557 must include buffering at the ends of the connections as imposed by
2558 the jitter requirements, since trying to meet the jitter requirements
2559 in every intermediate node is likely to be too costly. However,
2560 every measure to reduce jitter and delay on the path makes it easier
2561 to meet the end-to-end requirements.
2563 In order to meet the timing requirements both senders and receivers
2564 must remain time synchronized, demanding very accurate clock
2565 distribution, for example support for IEEE 1588 transparent clocks in
2566 every intermediate node.
2568 In cellular networks from the LTE radio era onward, phase
2569 synchronization is needed in addition to frequency synchronization
2570 ([TS36300], [TS23401]).
2572 6.1.3. Time-Sensitive Stream Requirements
2574 In addition to the time synchronization requirements listed in
2575 Section 6.1.2, the Fronthaul networks assume practically
2576 error-free transport. The maximum bit error rate (BER) has been
2577 defined to be 10^-12. When packetized, that would imply a packet
2578 error rate (PER) of 2.4*10^-9 (assuming ~300 bytes packets).
2579 Retransmitting lost packets and/or using forward error correction
2580 (FEC) to circumvent bit errors is practically impossible due to the
2581 additional delay incurred. Using redundant streams to improve
2582 delivery guarantees is also practically impossible in many cases
2583 due to the high bandwidth requirements of Fronthaul networks. For
2584 instance, the current uncompressed CPRI bandwidth expansion ratio
2585 is roughly 20:1 compared to the IP-layer user payload it carries.
2586 Protection switching is also a candidate, but current path-
2587 switching technologies are too slow. We do not currently know of
2588 a better solution for this issue.
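The BER-to-PER relationship above can be verified with a back-of-the-envelope calculation. The following sketch assumes the ~300-byte packet size stated above and independent bit errors; it is illustrative only:

```python
# Packet error rate (PER) implied by the 10^-12 maximum bit error
# rate (BER), assuming ~300-byte packets and independent bit errors.
BER = 1e-12
PACKET_BITS = 300 * 8  # ~300-byte packets, as assumed above

# A packet is errored if any one of its bits is errored.
PER = 1 - (1 - BER) ** PACKET_BITS

print(f"PER = {PER:.2e}")  # approximately 2.4e-9
```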
2590 Fronthaul links are assumed to be symmetric, and all Fronthaul
2591 streams (i.e. those carrying radio data) have equal priority and
2592 cannot delay or pre-empt each other. This implies that the network
2593 must guarantee that each time-sensitive flow meets its schedule.
2595 6.1.4. Security Considerations
2597 Establishing time-sensitive streams in the network entails reserving
2598 networking resources for long periods of time. It is important that
2599 these reservation requests be authenticated to prevent malicious
2600 reservation attempts from hostile nodes (or accidental
2601 misconfiguration). This is particularly important in the case where
2602 the reservation requests span administrative domains. Furthermore,
2603 the reservation information itself should be digitally signed to
2604 reduce the risk of a legitimate node pushing a stale or hostile
2605 configuration into another networking node.
2607 6.2. Cellular Radio Networks Today
2609 Today's Fronthaul networks typically consist of:
2611 o Dedicated point-to-point fiber connections
2613 o Proprietary protocols and framings
2615 o Custom equipment and no real networking
2617 Today's Midhaul and Backhaul networks typically consist of:
2619 o Mostly normal IP networks, MPLS-TP, etc.
2621 o Clock distribution and sync using 1588 and SyncE
2623 Telecommunication networks in the cellular domain are already heading
2624 towards transport networks where precise time synchronization support
2625 is one of the basic building blocks. While the transport networks
2626 themselves have practically transitioned to all-IP packet based
2627 networks to meet the bandwidth and cost requirements, highly accurate
2628 clock distribution has become a challenge.
2630 Transport networks in the cellular domain have typically been based
2631 on Time Division Multiplexing (TDM) and provide frequency
2632 synchronization capabilities as a part of the transport media.
2633 Alternatively other technologies such as Global Positioning System
2634 (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].
2636 Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985]
2637 for legacy transport support) have become popular tools to build and
2638 manage new all-IP Radio Access Networks (RAN)
2639 [I-D.kh-spring-ip-ran-use-case]. Although various timing and
2640 synchronization optimizations have already been proposed and
2641 implemented including 1588 PTP enhancements
2642 [I-D.ietf-tictoc-1588overmpls][I-D.mirsky-mpls-residence-time], these
2643 solutions are not necessarily sufficient for the forthcoming RAN
2644 architectures, nor do they guarantee the more stringent time-
2645 synchronization requirements [CPRI]. There are also existing
2646 solutions for TDM over IP [RFC5087] [RFC4553] and Ethernet [RFC5086].
2648 6.3. Cellular Radio Networks Future
2650 We would like to see the following in future Cellular Radio networks:
2652 o Unified standards-based transport protocols and standard
2653 networking equipment that can make use of underlying deterministic
2654 link-layer services
2656 o Unified and standards-based network management systems and
2657 protocols in all parts of the network (including Fronthaul)
2659 New radio access network deployment models and architectures may
2660 require time sensitive networking services with strict requirements
2661 on other parts of the network that previously were not considered to
2662 be packetized at all. Time and synchronization support is already
2663 topical for Backhaul and Midhaul packet networks [MEF], and is
2664 becoming a real issue for Fronthaul networks. Specifically, in
2665 Fronthaul networks the timing and synchronization requirements can be
2666 extreme for packet based technologies, for example, on the order of
2667 sub +-20 ns packet delay variation (PDV) and frequency accuracy of
2668 +0.002 PPM [Fronthaul].
2670 The actual transport protocols and/or solutions to establish required
2671 transport "circuits" (pinned-down paths) for Fronthaul traffic are
2672 still undefined. Those are likely to include (but are not limited
2673 to) solutions directly over Ethernet, over IP, and MPLS/PseudoWire
2674 transport.
2676 Even the current time-sensitive networking features may not be
2677 sufficient for Fronthaul traffic. Therefore, having specific
2678 profiles that take the requirements of Fronthaul into account is
2679 desirable [IEEE8021CM].
2681 The most important existing work for time-sensitive
2682 networking has been done for Ethernet [TSNTG], which specifies the
2683 use of the IEEE 1588 Precision Time Protocol (PTP) [IEEE1588] in the
2684 context of IEEE 802.1D and IEEE 802.1Q. While IEEE 802.1AS
2685 [IEEE8021AS] specifies a Layer-2 time synchronization service, other
2686 specifications, such as IEEE 1722 [IEEE1722], specify Ethernet-based
2687 Layer-2 transport for time-sensitive streams. New promising work
2688 seeks to enable the transport of time-sensitive Fronthaul streams in
2689 Ethernet bridged networks [IEEE8021CM]. Similar to IEEE 1722, there
2690 is an ongoing standardization effort to define a Layer-2 transport
2691 encapsulation format for transporting radio over Ethernet (RoE) in
2692 the IEEE 1904.3 Task Force [IEEE19043].
2694 All-IP RANs and various "haul" networks would benefit from time
2695 synchronization and time-sensitive transport services. Although
2696 Ethernet appears to be the unifying transport technology,
2697 there is still a disconnect in providing Layer-3 services. The
2698 protocol stack typically has a number of layers below the Ethernet
2699 Layer-2 that is visible to the Layer-3 IP transport. It is not
2700 uncommon that on top of the lowest-layer (optical) transport there is
2701 a first layer of Ethernet, followed by one or more layers of MPLS,
2702 PseudoWires and/or other tunneling protocols, finally carrying the
2703 Ethernet layer visible to the user-plane IP traffic. While there are
2704 existing technologies, especially in the MPLS/PWE space, to establish
2705 circuits through routed and switched networks, there is no way to
2706 signal the time synchronization and time-sensitive stream
2707 requirements/reservations of Layer-3 flows such that the entire
2708 transport stack, including the Ethernet layers that need to be
2709 configured, is addressed.
2711 Furthermore, not all "user plane" traffic will be IP. The same
2712 solution must therefore also address use cases where the user-plane
2713 traffic is yet another layer of Ethernet frames. There is existing
2714 work describing the problem statement
2715 [I-D.finn-detnet-problem-statement] and the architecture
2716 [I-D.finn-detnet-architecture] for deterministic networking (DetNet)
2717 that targets solutions for time-sensitive (IP/transport) streams with
2718 deterministic properties over Ethernet-based switched networks.
2720 6.4. Cellular Radio Networks Asks
2722 A standard for data plane transport specification which is:
2724 o Unified among all *hauls
2726 o Deployed in a highly deterministic network environment
2728 A standard for data flow information models that are:
2730 o Aware of the time sensitivity and constraints of the target
2731 networking environment
2733 o Aware of underlying deterministic networking services (e.g. on the
2734 Ethernet layer)
2736 Mapping the Fronthaul requirements to IETF DetNet
2737 [I-D.finn-detnet-architecture] Section 3 "Providing the DetNet
2738 Quality of Service", the relevant features are:
2740 o Zero congestion loss.
2742 o Pinned-down paths.
2744 7. Cellular Coordinated Multipoint Processing (CoMP)
2746 7.1. Use Case Description
2748 In cellular wireless communication systems, Inter-Site Coordinated
2749 Multipoint Processing (CoMP, see [CoMP]) is a technique implemented
2750 within a cell site which improves system efficiency and user quality
2751 of experience by significantly improving throughput in the cell-edge
2752 region (i.e. at the edges of that cell site's radio coverage area).
2753 CoMP techniques depend on deterministic high-reliability
2754 communication between cell sites. However, such connections today
2755 are IP-based and in current mobile networks cannot meet the QoS
2756 requirements, so CoMP is an emerging technology which can benefit
2757 from DetNet.
2759 Here we consider the JT (Joint Transmission) application for CoMP,
2760 provides the highest performance gain (compared to other
2761 applications).
2763 7.1.1. CoMP Architecture
2765 +--------------------------+
2766 | CoMP |
2767 +--+--------------------+--+
2768 | |
2769 +----------+ +------------+
2770 | Uplink | | Downlink |
2771 +-----+----+ +--------+---+
2772 | |
2773 ------------------- -----------------------
2774 | | | | | |
2775 +---------+ +----+ +-----+ +------------+ +-----+ +-----+
2776 | Joint | | CS | | DPS | | Joint | | CS/ | | DPS |
2777 |Reception| | | | | |Transmission| | CB | | |
2778 +---------+ +----+ +-----+ +------------+ +-----+ +-----+
2779 | |
2780 |----------- |-------------
2781 | | | |
2782 +------------+ +---------+ +----------+ +------------+
2783 | Joint | | Soft | | Coherent | | Non- |
2784 |Equalization| |Combining| | JT | | Coherent JT|
2785 +------------+ +---------+ +----------+ +------------+
2787 Figure 10: Framework of CoMP Technology
2789 As shown in Figure 10, CoMP reception and transmission is a framework
2790 in which multiple geographically distributed antenna nodes cooperate
2791 to improve the performance of the users served in the common
2792 cooperation area. The design principle of CoMP is to extend the
2793 current single-cell-to-multi-UE (User Equipment) transmission to a
2794 multi-cell-to-multi-UE transmission through base station cooperation.
2796 7.1.2. Delay Sensitivity in CoMP
2798 In contrast to the single-cell scenario, CoMP has delay-sensitive
2799 performance parameters, which are "backhaul latency" and "CSI
2800 (Channel State Information) reporting and accuracy". The essential
2801 feature of CoMP is signaling between eNBs, so the backhaul latency is
2802 the dominating limitation of the CoMP performance. Generally, JT can
2803 benefit from coordinated scheduling (either distributed or
2804 centralized) of different cells if the signaling delay between eNBs
2805 is within 4-10ms. This delay requirement is both rigid and absolute
2806 because any uncertainty in delay will degrade the performance
2807 significantly.
2809 7.2. CoMP Today
2811 Due to the strict sensitivity to latency and synchronization, CoMP
2812 between eNBs has not been deployed yet. This is because the current
2813 interface path between eNBs, which is usually IP-based and passes
2814 through multiple network hops, cannot meet the delay bound (this
2815 interface is called "X2", or "eX2" for "enhanced X2"). Today, the
2816 lack of an absolute delay guarantee on X2/eX2 traffic is the main
2817 obstacle to JT and multi-eNB coordination.
2819 There is still a lack of Layer-3 (IP) transport protocols and
2820 signaling capable of providing low-latency services; current
2821 techniques such as MPLS and PWE focus on establishing circuits over
2822 pre-routed paths, but there is no such signaling for the reservation
2823 of time-sensitive streams.
2825 7.3. CoMP Future
2827 7.3.1. Mobile Industry Overall Goals
2829 [METIS] documents the fundamental challenges as well as overall
2830 technical goals of the 5G mobile and wireless system as the starting
2831 point. These future systems should support (at similar cost and
2832 energy consumption levels as today's system):
2834 o 1000 times higher mobile data volume per area
2836 o 10 times to 100 times higher typical user data rate
2838 o 10 times to 100 times higher number of connected devices
2840 o 10 times longer battery life for low power devices
2842 o 5 times reduced End-to-End (E2E) latency
2843 The current LTE system has an E2E latency of less than 20ms
2844 [LTE-Latency], which implies around 5ms E2E latency for 5G networks.
2845 Fulfilling these latency demands at similar cost will be
2846 challenging: since the system also requires 100x bandwidth and 100x
2847 connected devices, simply over-provisioning bandwidth is no longer
2848 an efficient solution.
2850 In addition to bandwidth provisioning, reserved critical flows must
2851 not be affected by other flows regardless of network load.
2852 Deterministic networking techniques at both Layer-2 and Layer-3,
2853 using IETF protocol solutions, are promising for these scenarios.
2855 7.3.2. CoMP Infrastructure Goals
2857 Inter-site CoMP is one of the key requirements for 5G and is also a
2858 near-term goal for the current 4.5G network architecture. Assuming
2859 the network architecture remains unchanged (i.e. no Fronthaul
2860 network, and data flow between eNBs via X2/eX2), we would like to
2861 see the following in the near future:
2863 o Unified protocols and delay-guaranteed forwarding network
2864 equipment that is capable of delivering deterministic latency
2865 services.
2867 o Unified management and protocols which take delay and timing into
2868 account.
2870 o Unified deterministic latency data model and signaling for
2871 resource reservation.
2873 7.4. CoMP Asks
2875 Fully utilizing the power of CoMP requires:
2877 o Very tight absolute delay bound (100-500us) within 7-10 hops.
2879 o Standardized data plane with highly deterministic networking
2880 capability.
2882 o Standardized control plane to unify backhaul network elements with
2883 time-sensitive stream reservation signaling.
2885 In addition, a standardized deterministic latency data flow model
2886 that includes:
2888 o Network-aware constraints on the networking environment
2889 o Time-aware description of flow characteristics and network
2890 resources, which may not need to be bandwidth based
2892 o Application-aware description of deterministic latency services.
2894 8. Industrial M2M
2896 8.1. Use Case Description
2898 Industrial Automation in general refers to automation of
2899 manufacturing, quality control and material processing. In this
2900 "machine to machine" (M2M) use case we consider machine units in a
2901 plant floor which periodically exchange data with upstream or
2902 downstream machine modules and/or a supervisory controller within a
2903 local area network.
2905 The actors of M2M communication are Programmable Logic Controllers
2906 (PLCs). Communication between PLCs and between PLCs and the
2907 supervisory PLC (S-PLC) is achieved via critical control/data
2908 streams (see Figure 11).
2910 S (Sensor)
2911 \ +-----+
2912 PLC__ \.--. .--. ---| MES |
2913 \_( `. _( `./ +-----+
2914 A------( Local )-------------( L2 )
2915 ( Net ) ( Net ) +-------+
2916 /`--(___.-' `--(___.-' ----| S-PLC |
2917 S_/ / PLC .--. / +-------+
2918 A_/ \_( `.
2919 (Actuator) ( Local )
2920 ( Net )
2921 /`--(___.-'\
2922 / \ A
2923 S A
2925 Figure 11: Current Generic Industrial M2M Network Architecture
2927 This use case focuses on PLC-related communications; communication
2928 to Manufacturing-Execution-Systems (MESs) is not addressed.
2930 This use case covers only critical control/data streams; non-critical
2931 traffic between industrial automation applications (such as
2932 communication of state, configuration, set-up, and database
2933 communication) is adequately served by currently available
2934 prioritization techniques. Such traffic can use up to 80% of the
2935 bandwidth required. There is also a subset of non-time-critical
2936 traffic that must be reliable even though it is not time sensitive.
2938 In this use case the primary need for deterministic networking is to
2939 provide end-to-end delivery of M2M messages within specific timing
2940 constraints, for example in closed loop automation control. Today
2941 this level of determinism is provided by proprietary networking
2942 technologies. In addition, standard networking technologies are used
2943 to connect the local network to remote industrial automation sites,
2944 e.g. over an enterprise or metro network which also carries other
2945 types of traffic. Therefore, flows that should be forwarded with
2946 deterministic guarantees need to be sustained regardless of the
2947 amount of other flows in those networks.
2949 8.2. Industrial M2M Communication Today
2951 Today, proprietary networks fulfill the needed timing and
2952 availability for M2M networks.
2954 The network topologies used today by industrial automation are
2955 similar to those used by telecom networks: Daisy Chain, Ring, Hub and
2956 Spoke, and Comb (a subset of Daisy Chain).
2958 PLC-related control/data streams are transmitted periodically and
2959 carry either a pre-configured payload or a payload configured during
2960 runtime.
2962 Some industrial applications require time synchronization at the end
2963 nodes. For such time-coordinated PLCs, accuracy of 1 microsecond is
2964 required. Even in the case of "non-time-coordinated" PLCs time sync
2965 may be needed e.g. for timestamping of sensor data.
2967 Industrial network scenarios require advanced security solutions.
2968 Many of the current industrial production networks are physically
2969 separated. Preventing critical flows from being leaked outside a
2970 is handled today by filtering policies that are typically enforced in
2971 firewalls.
2973 8.2.1. Transport Parameters
2975 The Cycle Time defines the frequency of message(s) between industrial
2976 actors. The Cycle Time is application dependent, in the range of 1ms
2977 - 100ms for critical control/data streams.
2979 Because industrial applications assume deterministic transport for
2980 critical control/data streams, it is sufficient (instead of defining
2981 separate latency and delay variation parameters) to fulfill an upper
2982 bound on latency (maximum latency). The underlying networking
2983 infrastructure must ensure a maximum end-to-end delivery time of
2984 messages in the range of 100 microseconds to 50 milliseconds
2985 depending on the control loop application.
2987 The bandwidth requirements of control/data streams are usually
2988 calculated directly from the bytes-per-cycle parameter of the control
2989 loop. For PLC-to-PLC communication one can expect 2 - 32 streams
2990 with packet size in the range of 100 - 700 bytes. For S-PLC to PLCs
2991 the number of streams is higher - up to 256 streams. Usually no more
2992 than 20% of available bandwidth is used for critical control/data
2993 streams. In today's networks 1Gbps links are commonly used.
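The bandwidth calculation described above can be sketched for a worst-case PLC-to-PLC scenario. The specific combination of values below (32 streams of 700-byte packets on a 1 ms cycle) is a hypothetical example within the stated ranges:

```python
# Rough bandwidth estimate for critical control/data streams, derived
# directly from the bytes-per-cycle of the control loop.  The numbers
# are hypothetical examples within the ranges given above.
streams = 32          # upper end of PLC-to-PLC stream count
packet_bytes = 700    # upper end of packet size range
cycle_time_s = 1e-3   # 1 ms cycle time (fastest critical loops)

bits_per_second = streams * packet_bytes * 8 / cycle_time_s
share_of_1gbps = bits_per_second / 1e9

print(f"{bits_per_second / 1e6:.1f} Mbps "
      f"({share_of_1gbps:.1%} of a 1 Gbps link)")
```

With these assumed values the critical traffic comes to about 179 Mbps, i.e. under 18% of a 1 Gbps link, which is consistent with the observation that critical streams stay below 20% of available bandwidth.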
2995 Most PLC control loops are rather tolerant of packet loss, however
2996 critical control/data streams accept no more than 1 packet loss per
2997 consecutive communication cycle (i.e. if a packet gets lost in cycle
2998 "n", then the next cycle ("n+1") must be lossless). After two or
2999 more consecutive packet losses the network may be considered to be
3000 "down" by the Application.
3002 As network downtime may impact the whole production system, the
3003 required network availability is rather high (99.999%).
3005 Based on the above parameters we expect that some form of redundancy
3006 will be required for M2M communications, however any individual
3007 solution depends on several parameters including cycle time, delivery
3008 time, etc.
3010 8.2.2. Stream Creation and Destruction
3012 In an industrial environment, critical control/data streams are
3013 created rather infrequently, on the order of ~10 times per day / week
3014 / month. Most of these critical control/data streams get created at
3015 machine startup, however flexibility is also needed during runtime,
3016 for example when adding or removing a machine. Going forward as
3017 production systems become more flexible, we expect a significant
3018 increase in the rate at which streams are created, changed and
3019 destroyed.
3021 8.3. Industrial M2M Future
3023 We would like to see the various proprietary networks replaced with a
3024 converged IP-standards-based network with deterministic properties
3025 that can satisfy the timing, security and reliability constraints
3026 described above.
3028 8.4. Industrial M2M Asks
3030 o Converged IP-based network
3032 o Deterministic behavior (bounded latency and jitter)
3034 o High availability (presumably through redundancy) (99.999 %)
3036 o Low message delivery time (100us - 50ms)
3038 o Low packet loss (burstless, 0.1-1%)
3040 o Precise time synchronization accuracy (1us)
3042 o Security (e.g. prevent critical flows from being leaked between
3043 physically separated networks)
3045 9. Internet-based Applications
3047 9.1. Use Case Description
3049 There are many applications that communicate across the open Internet
3050 that could benefit from guaranteed delivery and bounded latency. The
3051 following are some representative examples.
3053 9.1.1. Media Content Delivery
3055 Media content delivery continues to be an important use of the
3056 Internet, yet users often experience poor quality audio and video due
3057 to the delay and jitter inherent in today's Internet.
3059 9.1.2. Online Gaming
3061 Online gaming is a significant part of the gaming market, however
3062 latency can degrade the end user experience. For example "First
3063 Person Shooter" (FPS) games are highly delay-sensitive.
3065 9.1.3. Virtual Reality
3067 Virtual reality (VR) has many commercial applications including real
3068 estate presentations, remote medical procedures, and so on. Low
3069 latency is critical to interacting with the virtual world because
3070 perceptual delays can cause motion sickness.
3072 9.2. Internet-Based Applications Today
3074 Internet service today is by definition "best effort", with no
3075 guarantees on delivery or bandwidth.
3077 9.3. Internet-Based Applications Future
3079 We imagine an Internet over which we will be able to stream video
3080 without glitches and play games without lag.
3082 For online gaming, the maximum acceptable round-trip delay is around
3083 100ms, and stricter (10-50ms) for FPS games. Transport delay is the
3084 dominant part, with a 5-20ms budget.
3086 For VR, a maximum delay of 1-10ms is needed, and the total network
3087 budget is 1-5ms for remote VR.
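As a sketch of this budget arithmetic (the specific split is an illustrative assumption, not a quoted figure): with a 50 ms FPS round-trip bound and the 20 ms upper end of the transport budget, 30 ms remains for everything outside the network, such as input capture, server processing, and rendering.

```python
# Illustrative latency budget split for an FPS game, using the upper
# ends of the ranges above.  The endpoint figure is derived, not quoted.
rtt_bound_ms = 50         # strict FPS round-trip bound (10-50 ms range)
transport_budget_ms = 20  # transport share (5-20 ms range)

endpoint_budget_ms = rtt_bound_ms - transport_budget_ms
print(f"{endpoint_budget_ms} ms left for endpoints")  # 30 ms
```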
3089 Flow identification can be used for gaming and VR, i.e. the network
3090 can recognize a critical flow and provide appropriate latency bounds.
3092 9.4. Internet-Based Applications Asks
3094 o Unified control and management protocols to handle time-critical
3095 data flow
3097 o Application-aware flow filtering mechanism to recognize the timing
3098 critical flow without doing 5-tuple matching
3100 o Unified control plane to provide low latency service on Layer-3
3101 without changing the data plane
3103 o OAM system and protocols which can help to provide E2E-delay
3104 sensitive service provisioning
3106 10. Use Case Common Elements
3108 Looking at the use cases collectively, the following common desires
3109 for the DetNet-based networks of the future emerge:
3111 o Open standards-based network (replace various proprietary
3112 networks, reduce cost, create multi-vendor market)
3114 o Centrally administered (though such administration may be
3115 distributed for scale and resiliency)
3117 o Integrates L2 (bridged) and L3 (routed) environments (independent
3118 of the Link layer, e.g. can be used with Ethernet, 6TiSCH, etc.)
3120 o Carries both deterministic and best-effort traffic (guaranteed
3121 end-to-end delivery of deterministic flows, deterministic flows
3122 isolated from each other and from best-effort traffic congestion,
3123 unused deterministic BW available to best-effort traffic)
3125 o Ability to add or remove systems from the network with minimal,
3126 bounded service interruption (applications include replacement of
3127 failed devices as well as plug and play)
3129 o Uses standardized data flow information models capable of
3130 expressing deterministic properties (models express device
3131 capabilities, flow properties. Protocols for pushing models from
3132 controller to devices, devices to controller)
3134 o Scalable size (long distances (many km) and short distances
3135 (within a single machine), many hops (radio repeaters, microwave
3136 links, fiber links...) and short hops (single machine))
3138 o Scalable timing parameters and accuracy (bounded latency,
3139 guaranteed worst case maximum, minimum. Low latency, e.g. control
3140 loops may be less than 1ms, but larger for wide area networks)
3142 o High availability (99.9999 percent up time requested, but may be
3143 up to twelve 9s)
3145 o Reliability, redundancy (lives at stake)
3147 o Security (from failures, attackers, misbehaving devices -
3148 sensitive to both packet content and arrival time)
3150 11. Acknowledgments
3152 11.1. Pro Audio
3154 This section was derived from draft-gunther-detnet-proaudio-req-01.
3156 The editors would like to acknowledge the help of the following
3157 individuals and the companies they represent:
3159 Jeff Koftinoff, Meyer Sound
3161 Jouni Korhonen, Associate Technical Director, Broadcom
3163 Pascal Thubert, CTAO, Cisco
3165 Kieran Tyrrell, Sienda New Media Technologies GmbH
3167 11.2. Utility Telecom
3169 This section was derived from draft-wetterwald-detnet-utilities-reqs-
3170 02.
3172 Faramarz Maghsoodlou, Ph.D., IoT Connected Industries and Energy
3173 Practice, Cisco
3175 Pascal Thubert, CTAO, Cisco
3177 11.3. Building Automation Systems
3179 This section was derived from draft-bas-usecase-detnet-00.
3181 11.4. Wireless for Industrial
3183 This section was derived from draft-thubert-6tisch-4detnet-01.
3185 This specification derives from the 6TiSCH architecture, which is the
3186 result of multiple interactions, in particular during the 6TiSCH
3187 (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at
3188 the IETF.
3190 The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier
3191 Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael
3192 Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon,
3193 Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey,
3194 Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria
3195 Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation
3196 and various contributions.
3198 11.5. Cellular Radio
3200 This section was derived from draft-korhonen-detnet-telreq-00.
3202 11.6. Industrial M2M
3204 The authors would like to thank Feng Chen and Marcel Kiessling for
3205 their comments and suggestions.
3207 11.7. Other
3209 This section was derived from draft-zha-detnet-use-case-00.
3211 This document has benefited from reviews, suggestions, comments and
3212 proposed text provided by the following members, listed in
3213 alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oliver
3214 Huang.
3216 12. Informative References
3218 [ACE] IETF, "Authentication and Authorization for Constrained
3219 Environments", .
3222 [bacnetip]
3223 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP",
3224 January 1999.
3226 [CCAMP] IETF, "Common Control and Measurement Plane",
3227 .
3229 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND
3230 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_
3231 and_Enhancement_v2.0, March 2015,
3232 .
3235 [CONTENT_PROTECTION]
3236 Olsen, D., "1722a Content Protection", 2012,
3237 .
3240 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI);
3241 Interface Specification", CPRI Specification V6.1, July
3242 2014, .
3245 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification,
3246 Version 1.2", 2012, .
3248 [DICE] IETF, "DTLS In Constrained Environments",
3249 .
3251 [EA12] Evans, P. and M. Annunziata, "Industrial Internet: Pushing
3252 the Boundaries of Minds and Machines", November 2012.
3254 [ESPN_DC2]
3255 Daley, D., "ESPN's DC2 Scales AVB Large", 2014,
3256 .
3259 [flnet] Japan Electrical Manufacturers' Association, "JEMA 1479 -
3260 English Edition", September 2012.
3262 [Fronthaul]
3263 Chen, D. and T. Mustala, "Ethernet Fronthaul
3264 Considerations", IEEE 1904.3, February 2015,
3265 .
3268 [HART] www.hartcomm.org, "Highway Addressable remote Transducer,
3269 a group of specifications for industrial process and
3270 control devices administered by the HART Foundation".
3272 [I-D.finn-detnet-architecture]
3273 Finn, N., Thubert, P., and M. Teener, "Deterministic
3274 Networking Architecture", draft-finn-detnet-
3275 architecture-02 (work in progress), November 2015.
3277 [I-D.finn-detnet-problem-statement]
3278 Finn, N. and P. Thubert, "Deterministic Networking Problem
3279 Statement", draft-finn-detnet-problem-statement-04 (work
3280 in progress), October 2015.
3282 [I-D.ietf-6tisch-6top-interface]
3283 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
3284 (6top) Interface", draft-ietf-6tisch-6top-interface-04
3285 (work in progress), July 2015.
3287 [I-D.ietf-6tisch-architecture]
3288 Thubert, P., "An Architecture for IPv6 over the TSCH mode
3289 of IEEE 802.15.4", draft-ietf-6tisch-architecture-09 (work
3290 in progress), November 2015.
3292 [I-D.ietf-6tisch-coap]
3293 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and
3294 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work
3295 in progress), March 2015.
3297 [I-D.ietf-6tisch-terminology]
3298 Palattella, M., Thubert, P., Watteyne, T., and Q. Wang,
3299 "Terminology in IPv6 over the TSCH mode of IEEE
3300 802.15.4e", draft-ietf-6tisch-terminology-06 (work in
3301 progress), November 2015.
3303 [I-D.ietf-ipv6-multilink-subnets]
3304 Thaler, D. and C. Huitema, "Multi-link Subnet Support in
3305 IPv6", draft-ietf-ipv6-multilink-subnets-00 (work in
3306 progress), July 2002.
3308 [I-D.ietf-roll-rpl-industrial-applicability]
3309 Phinney, T., Thubert, P., and R. Assimiti, "RPL
3310 applicability in industrial networks", draft-ietf-roll-
3311 rpl-industrial-applicability-02 (work in progress),
3312 October 2013.
3314 [I-D.ietf-tictoc-1588overmpls]
3315 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L.
3316 Montini, "Transporting Timing messages over MPLS
3317 Networks", draft-ietf-tictoc-1588overmpls-07 (work in
3318 progress), October 2015.
3320 [I-D.kh-spring-ip-ran-use-case]
3321 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing
3322 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02
3323 (work in progress), November 2014.
3325 [I-D.mirsky-mpls-residence-time]
3326 Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S.,
3327 and S. Vainshtein, "Residence Time Measurement in MPLS
3328 network", draft-mirsky-mpls-residence-time-07 (work in
3329 progress), July 2015.
3331 [I-D.svshah-tsvwg-deterministic-forwarding]
3332 Shah, S. and P. Thubert, "Deterministic Forwarding PHB",
3333 draft-svshah-tsvwg-deterministic-forwarding-04 (work in
3334 progress), August 2015.
3336 [I-D.thubert-6lowpan-backbone-router]
3337 Thubert, P., "6LoWPAN Backbone Router", draft-thubert-
3338 6lowpan-backbone-router-03 (work in progress), February
3339 2013.
3341 [I-D.wang-6tisch-6top-sublayer]
3342 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
3343 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in
3344 progress), November 2015.
3346 [IEC61850-90-12]
3347 TC57 WG10, IEC., "IEC 61850-90-12 TR: Communication
3348 networks and systems for power utility automation - Part
3349 90-12: Wide area network engineering guidelines", 2015.
3351 [IEC62439-3:2012]
3352 TC65, IEC., "IEC 62439-3: Industrial communication
3353 networks - High availability automation networks - Part 3:
3354 Parallel Redundancy Protocol (PRP) and High-availability
3355 Seamless Redundancy (HSR)", 2012.
3357 [IEEE1588]
3358 IEEE, "IEEE Standard for a Precision Clock Synchronization
3359 Protocol for Networked Measurement and Control Systems",
3360 IEEE Std 1588-2008, 2008,
3361 .
3364 [IEEE1722]
3365 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport
3366 Protocol for Time Sensitive Applications in a Bridged
3367 Local Area Network", IEEE Std 1722-2011, 2011,
3368 .
3371 [IEEE19043]
3372 IEEE Standards Association, "IEEE 1904.3 TF", IEEE 1904.3,
3373 2015, .
3375 [IEEE802.1TSNTG]
3376 IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3377 Networks Task Group", March 2013,
3378 .
3380 [IEEE802154]
3381 IEEE standard for Information Technology, "IEEE std.
3382 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC)
3383 and Physical Layer (PHY) Specifications for Low-Rate
3384 Wireless Personal Area Networks".
3386 [IEEE802154e]
3387 IEEE standard for Information Technology, "IEEE standard
3388 for Information Technology, IEEE std. 802.15.4, Part.
3389 15.4: Wireless Medium Access Control (MAC) and Physical
3390 Layer (PHY) Specifications for Low-Rate Wireless Personal
3391 Area Networks, June 2011 as amended by IEEE std.
3392 802.15.4e, Part. 15.4: Low-Rate Wireless Personal Area
3393 Networks (LR-WPANs) Amendment 1: MAC sublayer", April
3394 2012.
3396 [IEEE8021AS]
3397 IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)",
3398 IEEE 802.1AS-2011, 2011,
3399 .
3402 [IEEE8021CM]
3403 Farkas, J., "Time-Sensitive Networking for Fronthaul",
3404 Unapproved PAR, PAR for a New IEEE Standard;
3405 IEEE P802.1CM, April 2015,
3406 .
3419 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation",
3420 .
3422 [ISA100.11a]
3423 ISA/ANSI, "Wireless Systems for Industrial Automation:
3424 Process Control and Related Applications - ISA100.11a-2011
3425 - IEC 62734", 2011, .
3428 [ISO7240-16]
3429 ISO, "ISO 7240-16:2007 Fire detection and alarm systems --
3430 Part 16: Sound system control and indicating equipment",
3431 2007, .
3434 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006.
3436 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0",
3437 1994.
3439 [LTE-Latency]
3440 Johnston, S., "LTE Latency: How does it compare to other
3441 technologies", March 2014,
3442 .
3445 [MEF] MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells",
3446 MEF 22.1.1, July 2014,
3447 .
3450 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and
3451 wireless system", ICT-317669-METIS/D1.1 ICT-
3452 317669-METIS/D1.1, April 2013, .
3455 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL
3456 SPECIFICATION V1.1b", December 2006.
3458 [net5G] Ericsson, "5G Radio Access, Challenges for 2020 and
3459 Beyond", Ericsson white paper wp-5g, June 2013,
3460 .
3462 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0,
3463 February 2015, .
3466 [PCE] IETF, "Path Computation Element",
3467 .
3469 [profibus]
3470 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001.
3472 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
3473 Requirement Levels", BCP 14, RFC 2119,
3474 DOI 10.17487/RFC2119, March 1997,
3475 &lt;http://www.rfc-editor.org/info/rfc2119&gt;.
3477 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6
3478 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460,
3479 December 1998, &lt;http://www.rfc-editor.org/info/rfc2460&gt;.
3481 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
3482 "Definition of the Differentiated Services Field (DS
3483 Field) in the IPv4 and IPv6 Headers", RFC 2474,
3484 DOI 10.17487/RFC2474, December 1998,
3485 &lt;http://www.rfc-editor.org/info/rfc2474&gt;.
3487 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
3488 Label Switching Architecture", RFC 3031,
3489 DOI 10.17487/RFC3031, January 2001,
3490 &lt;http://www.rfc-editor.org/info/rfc3031&gt;.
3492 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
3493 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
3494 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
3495 &lt;http://www.rfc-editor.org/info/rfc3209&gt;.
3497 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
3498 Metric for IP Performance Metrics (IPPM)", RFC 3393,
3499 DOI 10.17487/RFC3393, November 2002,
3500 &lt;http://www.rfc-editor.org/info/rfc3393&gt;.
3502 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between
3503 Information Models and Data Models", RFC 3444,
3504 DOI 10.17487/RFC3444, January 2003,
3505 &lt;http://www.rfc-editor.org/info/rfc3444&gt;.
3507 [RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)",
3508 RFC 3972, DOI 10.17487/RFC3972, March 2005,
3509 &lt;http://www.rfc-editor.org/info/rfc3972&gt;.
3511 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation
3512 Edge-to-Edge (PWE3) Architecture", RFC 3985,
3513 DOI 10.17487/RFC3985, March 2005,
3514 &lt;http://www.rfc-editor.org/info/rfc3985&gt;.
3516 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing
3517 Architecture", RFC 4291, DOI 10.17487/RFC4291, February
3518 2006, &lt;http://www.rfc-editor.org/info/rfc4291&gt;.
3520 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure-
3521 Agnostic Time Division Multiplexing (TDM) over Packet
3522 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006,
3523 &lt;http://www.rfc-editor.org/info/rfc4553&gt;.
3525 [RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903,
3526 DOI 10.17487/RFC4903, June 2007,
3527 &lt;http://www.rfc-editor.org/info/rfc4903&gt;.
3529 [RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6
3530 over Low-Power Wireless Personal Area Networks (6LoWPANs):
3531 Overview, Assumptions, Problem Statement, and Goals",
3532 RFC 4919, DOI 10.17487/RFC4919, August 2007,
3533 &lt;http://www.rfc-editor.org/info/rfc4919&gt;.
3535 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and
3536 P. Pate, "Structure-Aware Time Division Multiplexed (TDM)
3537 Circuit Emulation Service over Packet Switched Network
3538 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007,
3539 &lt;http://www.rfc-editor.org/info/rfc5086&gt;.
3541 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi,
3542 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087,
3543 DOI 10.17487/RFC5087, December 2007,
3544 &lt;http://www.rfc-editor.org/info/rfc5087&gt;.
3546 [RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6
3547 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282,
3548 DOI 10.17487/RFC6282, September 2011,
3549 &lt;http://www.rfc-editor.org/info/rfc6282&gt;.
3551 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J.,
3552 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur,
3553 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for
3554 Low-Power and Lossy Networks", RFC 6550,
3555 DOI 10.17487/RFC6550, March 2012,
3556 &lt;http://www.rfc-editor.org/info/rfc6550&gt;.
3558 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N.,
3559 and D. Barthel, "Routing Metrics Used for Path Calculation
3560 in Low-Power and Lossy Networks", RFC 6551,
3561 DOI 10.17487/RFC6551, March 2012,
3562 &lt;http://www.rfc-editor.org/info/rfc6551&gt;.
3564 [RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C.
3565 Bormann, "Neighbor Discovery Optimization for IPv6 over
3566 Low-Power Wireless Personal Area Networks (6LoWPANs)",
3567 RFC 6775, DOI 10.17487/RFC6775, November 2012,
3568 &lt;http://www.rfc-editor.org/info/rfc6775&gt;.
3570 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using
3571 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the
3572 Internet of Things (IoT): Problem Statement", RFC 7554,
3573 DOI 10.17487/RFC7554, May 2015,
3574 &lt;http://www.rfc-editor.org/info/rfc7554&gt;.
3576 [SRP_LATENCY]
3577 Gunther, C., "Specifying SRP Latency", 2014,
3578 .
3581 [STUDIO_IP]
3582 Mace, G., "IP Networked Studio Infrastructure for
3583 Synchronized & Real-Time Multimedia Transmissions", 2007,
3584 .
3587 [SyncE] ITU-T, "G.8261: Timing and synchronization aspects in
3588 packet networks", Recommendation G.8261, August 2013,
3589 .
3591 [TEAS] IETF, "Traffic Engineering Architecture and Signaling",
3592 .
3594 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements
3595 for Evolved Universal Terrestrial Radio Access Network
3596 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013.
3598 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception
3599 (FDD)", 3GPP TS 25.104 3.14.0, March 2007.
3601 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access
3602 (E-UTRA); Base Station (BS) radio transmission and
3603 reception", 3GPP TS 36.104 10.11.0, July 2013.
3605 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access
3606 (E-UTRA); Requirements for support of radio resource
3607 management", 3GPP TS 36.133 12.7.0, April 2015.
3609 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access
3610 (E-UTRA); Physical channels and modulation", 3GPP
3611 TS 36.211 10.7.0, March 2013.
3613 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA)
3614 and Evolved Universal Terrestrial Radio Access Network
3615 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300
3616 10.11.0, September 2013.
3618 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3619 Networks Task Group", 2013,
3620 .
3622 [UHD-video]
3623 Holub, P., "Ultra-High Definition Videos and Their
3624 Applications over the Network", The 7th International
3625 Symposium on VICTORIES Project PetrHolub_presentation,
3626 October 2014, .
3629 [WirelessHART]
3630 HART Communication Foundation, "Industrial Communication
3631 Networks - Wireless Communication Network and
3632 Communication Profiles - WirelessHART - IEC 62591", 2010.
3634 Authors' Addresses
3635 Ethan Grossman (editor)
3636 Dolby Laboratories, Inc.
3637 1275 Market Street
3638 San Francisco, CA 94103
3639 USA
3641 Phone: +1 415 645 4726
3642 Email: ethan.grossman@dolby.com
3643 URI: http://www.dolby.com
3645 Craig Gunther
3646 Harman International
3647 10653 South River Front Parkway
3648 South Jordan, UT 84095
3649 USA
3651 Phone: +1 801 568-7675
3652 Email: craig.gunther@harman.com
3653 URI: http://www.harman.com
3655 Pascal Thubert
3656 Cisco Systems, Inc
3657 Building D
3658 45 Allee des Ormes - BP1200
3659 MOUGINS - Sophia Antipolis 06254
3660 FRANCE
3662 Phone: +33 497 23 26 34
3663 Email: pthubert@cisco.com
3665 Patrick Wetterwald
3666 Cisco Systems
3667 45 Allees des Ormes
3668 Mougins 06250
3669 FRANCE
3671 Phone: +33 4 97 23 26 36
3672 Email: pwetterw@cisco.com
3673 Jean Raymond
3674 Hydro-Quebec
3675 1500 University
3676 Montreal H3A3S7
3677 Canada
3679 Phone: +1 514 840 3000
3680 Email: raymond.jean@hydro.qc.ca
3682 Jouni Korhonen
3683 Broadcom Corporation
3684 3151 Zanker Road
3685 San Jose, CA 95134
3686 USA
3688 Email: jouni.nospam@gmail.com
3690 Yu Kaneko
3691 Toshiba
3692 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi
3693 Kanagawa, Japan
3695 Email: yu1.kaneko@toshiba.co.jp
3697 Subir Das
3698 Applied Communication Sciences
3699 150 Mount Airy Road, Basking Ridge
3700 New Jersey, 07920, USA
3702 Email: sdas@appcomsci.com
3704 Yiyong Zha
3705 Huawei Technologies
3707 Email: zhayiyong@huawei.com
3709 Balazs Varga
3710 Ericsson
3711 Konyves Kalman krt. 11/B
3712 Budapest 1097
3713 Hungary
3715 Email: balazs.a.varga@ericsson.com
3716 Janos Farkas
3717 Ericsson
3718 Konyves Kalman krt. 11/B
3719 Budapest 1097
3720 Hungary
3722 Email: janos.farkas@ericsson.com
3724 Franz-Josef Goetz
3725 Siemens
3726 Gleiwitzerstr. 555
3727 Nurnberg 90475
3728 Germany
3730 Email: franz-josef.goetz@siemens.com
3732 Juergen Schmitt
3733 Siemens
3734 Gleiwitzerstr. 555
3735 Nurnberg 90475
3736 Germany
3738 Email: juergen.jues.schmitt@siemens.com