idnits 2.17.1
draft-ietf-detnet-use-cases-00.txt:
Checking boilerplate required by RFC 5378 and the IETF Trust (see
https://trustee.ietf.org/license-info):
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/checklist :
----------------------------------------------------------------------------
No issues found here.
Miscellaneous warnings:
----------------------------------------------------------------------------
== The copyright year in the IETF Trust and authors Copyright Line does not
match the current year
== The document seems to lack the recommended RFC 2119 boilerplate, even if
it appears to use RFC 2119 keywords.
(The document does seem to have the reference to RFC 2119 which the
ID-Checklist requires).
-- The document date (December 15, 2015) is 3053 days in the past. Is this
intentional?
Checking references for intended status: Informational
----------------------------------------------------------------------------
== Unused Reference: 'ACE' is defined on line 3162, but no explicit
reference was found in the text
== Unused Reference: 'DICE' is defined on line 3192, but no explicit
reference was found in the text
== Unused Reference: 'HART' is defined on line 3212, but no explicit
reference was found in the text
== Unused Reference: 'I-D.thubert-6lowpan-backbone-router' is defined on
line 3280, but no explicit reference was found in the text
== Unused Reference: 'IEC61850-90-12' is defined on line 3290, but no
explicit reference was found in the text
== Unused Reference: 'ISA100' is defined on line 3353, but no explicit
reference was found in the text
== Unused Reference: 'RFC2119' is defined on line 3406, but no explicit
reference was found in the text
== Unused Reference: 'RFC2460' is defined on line 3411, but no explicit
reference was found in the text
== Unused Reference: 'RFC2474' is defined on line 3415, but no explicit
reference was found in the text
== Unused Reference: 'RFC3209' is defined on line 3426, but no explicit
reference was found in the text
== Unused Reference: 'RFC3393' is defined on line 3431, but no explicit
reference was found in the text
== Unused Reference: 'RFC4903' is defined on line 3459, but no explicit
reference was found in the text
== Unused Reference: 'RFC4919' is defined on line 3463, but no explicit
reference was found in the text
== Unused Reference: 'RFC6282' is defined on line 3480, but no explicit
reference was found in the text
== Unused Reference: 'RFC6775' is defined on line 3498, but no explicit
reference was found in the text
== Unused Reference: 'TEAS' is defined on line 3525, but no explicit
reference was found in the text
== Unused Reference: 'WirelessHART' is defined on line 3563, but no
explicit reference was found in the text
== Outdated reference: A later version (-08) exists of
draft-finn-detnet-architecture-02
== Outdated reference: A later version (-05) exists of
draft-finn-detnet-problem-statement-04
== Outdated reference: A later version (-30) exists of
draft-ietf-6tisch-architecture-09
== Outdated reference: A later version (-10) exists of
draft-ietf-6tisch-terminology-06
-- Obsolete informational reference (is this intentional?): RFC 2460
(Obsoleted by RFC 8200)
Summary: 0 errors (**), 0 flaws (~~), 23 warnings (==), 2 comments (--).
Run idnits with the --verbose option for more detailed information about
the items above.
--------------------------------------------------------------------------------
2 Internet Engineering Task Force E. Grossman, Ed.
3 Internet-Draft DOLBY
4 Intended status: Informational C. Gunther
5 Expires: June 17, 2016 HARMAN
6 P. Thubert
7 P. Wetterwald
8 CISCO
9 J. Raymond
10 HYDRO-QUEBEC
11 J. Korhonen
12 BROADCOM
13 Y. Kaneko
14 Toshiba
15 S. Das
16 Applied Communication Sciences
17 Y. Zha
18 HUAWEI
19 December 15, 2015
21 Deterministic Networking Use Cases
22 draft-ietf-detnet-use-cases-00
24 Abstract
26 This draft documents requirements in several diverse industries to
27 establish multi-hop paths for characterized flows with deterministic
28   properties.  In this context, deterministic implies that streams
29   providing guaranteed bandwidth and latency can be established from
30   either a Layer 2 or Layer 3 (IP) interface, and that such streams
31   can co-exist on an IP network with best-effort traffic.
33 Additional requirements include optional redundant paths, very high
34 reliability paths, time synchronization, and clock distribution.
35 Industries considered include wireless for industrial applications,
36 professional audio, electrical utilities, building automation
37 systems, radio/mobile access networks, automotive, and gaming.
39   For each case, this document identifies the application, describes
40   representative solutions used today, and notes what new uses an IETF
41   DetNet solution may enable.
43 Status of This Memo
45 This Internet-Draft is submitted in full conformance with the
46 provisions of BCP 78 and BCP 79.
48 Internet-Drafts are working documents of the Internet Engineering
49 Task Force (IETF). Note that other groups may also distribute
50 working documents as Internet-Drafts. The list of current Internet-
51 Drafts is at http://datatracker.ietf.org/drafts/current/.
53 Internet-Drafts are draft documents valid for a maximum of six months
54 and may be updated, replaced, or obsoleted by other documents at any
55 time. It is inappropriate to use Internet-Drafts as reference
56 material or to cite them other than as "work in progress."
58 This Internet-Draft will expire on June 17, 2016.
60 Copyright Notice
62 Copyright (c) 2015 IETF Trust and the persons identified as the
63 document authors. All rights reserved.
65 This document is subject to BCP 78 and the IETF Trust's Legal
66 Provisions Relating to IETF Documents
67 (http://trustee.ietf.org/license-info) in effect on the date of
68 publication of this document. Please review these documents
69 carefully, as they describe your rights and restrictions with respect
70 to this document. Code Components extracted from this document must
71 include Simplified BSD License text as described in Section 4.e of
72 the Trust Legal Provisions and are provided without warranty as
73 described in the Simplified BSD License.
75 Table of Contents
77 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4
78 2. Pro Audio Use Cases . . . . . . . . . . . . . . . . . . . . . 5
79 2.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 5
80 2.2. Fundamental Stream Requirements . . . . . . . . . . . . . 6
81 2.2.1. Guaranteed Bandwidth . . . . . . . . . . . . . . . . 6
82 2.2.2. Bounded and Consistent Latency . . . . . . . . . . . 6
83 2.2.2.1. Optimizations . . . . . . . . . . . . . . . . . . 8
84 2.3. Additional Stream Requirements . . . . . . . . . . . . . 8
85 2.3.1. Deterministic Time to Establish Streaming . . . . . . 8
86 2.3.2. Use of Unused Reservations by Best-Effort Traffic . . 9
87 2.3.3. Layer 3 Interconnecting Layer 2 Islands . . . . . . . 9
88 2.3.4. Secure Transmission . . . . . . . . . . . . . . . . . 9
89 2.3.5. Redundant Paths . . . . . . . . . . . . . . . . . . . 10
90 2.3.6. Link Aggregation . . . . . . . . . . . . . . . . . . 10
91 2.3.7. Traffic Segregation . . . . . . . . . . . . . . . . . 10
92 2.3.7.1. Packet Forwarding Rules, VLANs and Subnets . . . 11
93 2.3.7.2. Multicast Addressing (IPv4 and IPv6) . . . . . . 11
94 2.4. Integration of Reserved Streams into IT Networks . . . . 11
95 2.5. Security Considerations . . . . . . . . . . . . . . . . . 11
96 2.5.1. Denial of Service . . . . . . . . . . . . . . . . . . 12
97 2.5.2. Control Protocols . . . . . . . . . . . . . . . . . . 12
98 2.6. A State-of-the-Art Broadcast Installation Hits Technology
99 Limits . . . . . . . . . . . . . . . . . . . . . . . . . 12
100 2.7. Acknowledgements . . . . . . . . . . . . . . . . . . . . 13
101 3. Utility Telecom Use Cases . . . . . . . . . . . . . . . . . . 13
102 3.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 13
103 3.2. Telecommunications Trends and General telecommunications
104 Requirements . . . . . . . . . . . . . . . . . . . . . . 14
105 3.2.1. General Telecommunications Requirements . . . . . . . 14
106 3.2.1.1. Migration to Packet-Switched Network . . . . . . 15
107 3.2.2. Applications, Use cases and traffic patterns . . . . 16
108 3.2.2.1. Transmission use cases . . . . . . . . . . . . . 16
109 3.2.2.2. Distribution use case . . . . . . . . . . . . . . 26
110 3.2.2.3. Generation use case . . . . . . . . . . . . . . . 29
111 3.2.3. Specific Network topologies of Smart Grid
112 Applications . . . . . . . . . . . . . . . . . . . . 30
113 3.2.4. Precision Time Protocol . . . . . . . . . . . . . . . 31
114 3.3. IANA Considerations . . . . . . . . . . . . . . . . . . . 32
115 3.4. Security Considerations . . . . . . . . . . . . . . . . . 32
116 3.4.1. Current Practices and Their Limitations . . . . . . . 32
117 3.4.2. Security Trends in Utility Networks . . . . . . . . . 34
118 3.5. Acknowledgements . . . . . . . . . . . . . . . . . . . . 35
119 4. Building Automation Systems Use Cases . . . . . . . . . . . . 35
120 4.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 36
121 4.2. BAS Functionality . . . . . . . . . . . . . . . . . . . . 36
122 4.3. BAS Architecture . . . . . . . . . . . . . . . . . . . . 37
123 4.4. Deployment Model . . . . . . . . . . . . . . . . . . . . 39
124 4.5. Use cases and Field Network Requirements . . . . . . . . 40
125 4.5.1. Environmental Monitoring . . . . . . . . . . . . . . 41
126 4.5.2. Fire Detection . . . . . . . . . . . . . . . . . . . 41
127 4.5.3. Feedback Control . . . . . . . . . . . . . . . . . . 42
128 4.6. Security Considerations . . . . . . . . . . . . . . . . . 43
129 5. Wireless for Industrial Use Cases . . . . . . . . . . . . . . 44
130 5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 44
131 5.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 45
132 5.3. 6TiSCH Overview . . . . . . . . . . . . . . . . . . . . . 45
133 5.3.1. TSCH and 6top . . . . . . . . . . . . . . . . . . . . 48
134 5.3.2. SlotFrames and Priorities . . . . . . . . . . . . . . 48
135 5.3.3. Schedule Management by a PCE . . . . . . . . . . . . 48
136 5.3.4. Track Forwarding . . . . . . . . . . . . . . . . . . 49
137 5.3.4.1. Transport Mode . . . . . . . . . . . . . . . . . 51
138 5.3.4.2. Tunnel Mode . . . . . . . . . . . . . . . . . . . 52
139 5.3.4.3. Tunnel Metadata . . . . . . . . . . . . . . . . . 53
140 5.4. Operations of Interest for DetNet and PCE . . . . . . . . 54
141 5.4.1. Packet Marking and Handling . . . . . . . . . . . . . 55
142 5.4.1.1. Tagging Packets for Flow Identification . . . . . 55
143 5.4.1.2. Replication, Retries and Elimination . . . . . . 55
144 5.4.1.3. Differentiated Services Per-Hop-Behavior . . . . 56
145 5.4.2. Topology and capabilities . . . . . . . . . . . . . . 56
146 5.5. Security Considerations . . . . . . . . . . . . . . . . . 57
147 5.6. Acknowledgments . . . . . . . . . . . . . . . . . . . . . 57
148 6. Cellular Radio Use Cases . . . . . . . . . . . . . . . . . . 57
149 6.1. Introduction and background . . . . . . . . . . . . . . . 58
150 6.2. Network architecture . . . . . . . . . . . . . . . . . . 61
151 6.3. Time synchronization requirements . . . . . . . . . . . . 62
152 6.4. Time-sensitive stream requirements . . . . . . . . . . . 63
153 6.5. Security considerations . . . . . . . . . . . . . . . . . 64
154 7. Other Use Cases . . . . . . . . . . . . . . . . . . . . . . . 64
155 7.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 65
156 7.2. Critical Delay Requirements . . . . . . . . . . . . . . . 66
157 7.3. Coordinated multipoint processing (CoMP) . . . . . . . . 66
158 7.3.1. CoMP Architecture . . . . . . . . . . . . . . . . . . 66
159 7.3.2. Delay Sensitivity in CoMP . . . . . . . . . . . . . . 67
160 7.4. Industrial Automation . . . . . . . . . . . . . . . . . . 68
161 7.5. Vehicle to Vehicle . . . . . . . . . . . . . . . . . . . 68
162 7.6. Gaming, Media and Virtual Reality . . . . . . . . . . . . 69
163 8. Use Case Common Elements . . . . . . . . . . . . . . . . . . 69
164 9. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 70
165 10. Informative References . . . . . . . . . . . . . . . . . . . 70
166 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 79
168 1. Introduction
170 This draft presents use cases from diverse industries which have in
171 common a need for deterministic streams, but which also differ
172 notably in their network topologies and specific desired behavior.
173 Together, they provide broad industry context for DetNet and a
174   yardstick against which proposed DetNet designs can be measured (to
175   what extent does a proposed design satisfy these various use cases?).
177   For DetNet, use cases explicitly do not define requirements; the
178 DetNet WG will consider the use cases, decide which elements are in
179 scope for DetNet, and the results will be incorporated into future
180 drafts. Similarly, the DetNet use case draft explicitly does not
181 suggest any specific design, architecture or protocols, which will be
182 topics of future drafts.
184 We present for each use case the answers to the following questions:
186 o What is the use case?
188 o How is it addressed today?
190 o How would you like it to be addressed in the future?
191 o What do you want the IETF to deliver?
193 The level of detail in each use case should be sufficient to express
194 the relevant elements of the use case, but not more.
196 At the end we consider the use cases collectively, and examine the
197 most significant goals they have in common.
199 2. Pro Audio Use Cases
201 (This section was derived from draft-gunther-detnet-proaudio-req-01)
203 2.1. Introduction
205 The professional audio and video industry includes music and film
206 content creation, broadcast, cinema, and live exposition as well as
207 public address, media and emergency systems at large venues
208 (airports, stadiums, churches, theme parks). These industries have
209   already gone through the transition of audio and video signals from
210   analog to digital; however, the interconnect systems remain primarily
211 point-to-point with a single (or small number of) signals per link,
212 interconnected with purpose-built hardware.
214   These industries are now attempting to transition to a packet-based
215   infrastructure for distributing audio and video in order to reduce
216 cost, increase routing flexibility, and integrate with existing IT
217 infrastructure.
219 However, there are several requirements for making a network the
220 primary infrastructure for audio and video which are not met by
221   today's networks, and these are our concern in this draft.
223 The principal requirement is that pro audio and video applications
224 become able to establish streams that provide guaranteed (bounded)
225 bandwidth and latency from the Layer 3 (IP) interface. Such streams
226   can be created today within standards-based layer 2 islands; however,
227   these are not sufficient to enable effective distribution over wider
228 areas (for example broadcast events that span wide geographical
229 areas).
231 Some proprietary systems have been created which enable deterministic
232   streams at layer 3; however, they are engineered networks: they
233   require careful configuration to operate, often require that the
234   system be over-designed, and implicitly assume that all devices on
235   the network voluntarily play by the rules of that network.  Enabling
236   these industries to transition successfully to an interoperable
237   multi-vendor packet-based infrastructure requires effective open
238   standards, and we believe that establishing relevant IETF standards
239   is a crucial factor.
241 It would be highly desirable if such streams could be routed over the
242   open Internet; however, even intermediate solutions with more limited
243   scope (such as enterprise networks) can provide a substantial
244   improvement over today's networks, and a solution that only provides
245 for the enterprise network scenario is an acceptable first step.
247   We also present more fine-grained requirements of the audio and video
248   industries, such as safety and security, redundant paths, devices
249   with limited computing resources on the network, and the availability
250   of reserved stream bandwidth for best-effort traffic when the stream
251   is not currently in use.
253 2.2. Fundamental Stream Requirements
255 The fundamental stream properties are guaranteed bandwidth and
256 deterministic latency as described in this section. Additional
257 stream requirements are described in a subsequent section.
259 2.2.1. Guaranteed Bandwidth
261 Transmitting audio and video streams is unlike common file transfer
262 activities because guaranteed delivery cannot be achieved by re-
263 trying the transmission; by the time the missing or corrupt packet
264 has been identified it is too late to execute a re-try operation and
265   stream playback is interrupted, which is unacceptable in, for
266   example, a live concert.  In some contexts large amounts of buffering
267 used to provide enough delay to allow time for one or more retries,
268 however this is not an effective solution when live interaction is
269 involved, and is not considered an acceptable general solution for
270 pro audio and video. (Have you ever tried speaking into a microphone
271 through a sound system that has an echo coming back at you? It makes
272 it almost impossible to speak clearly).
274 Providing a way to reserve a specific amount of bandwidth for a given
275 stream is a key requirement.
277 2.2.2. Bounded and Consistent Latency
279 Latency in this context means the amount of time that passes between
280 when a signal is sent over a stream and when it is received, for
281 example the amount of time delay between when you speak into a
282 microphone and when your voice emerges from the speaker. Any delay
283 longer than about 10-15 milliseconds is noticeable by most live
284 performers, and greater latency makes the system unusable because it
285 prevents them from playing in time with the other players (see slide
286 6 of [SRP_LATENCY]).
288   The 15ms latency bound is made even more challenging because network-
289   based music production with live electric instruments often uses
290   multiple stages of signal processing connected in series (for
291   example, a guitar feeding a chain of digital effects processors).
292   In that case the latencies add, so the latencies of the individual
293   stages must all together remain less than 15ms.
296   In some situations it is acceptable for content arriving from the
297   live remote site to be delayed at the local location, adding a
298   statistically acceptable amount of latency to reduce jitter.  However,
299 once the content begins playing in the local location any audio
300 artifacts caused by the local network are unacceptable, especially in
301 those situations where a live local performer is mixed into the feed
302 from the remote location.
304   In addition to being bounded to within some predictable and
305   acceptable amount of time (which, depending on the application, may
306   be more or less than 15 milliseconds), the latency also has to be
307 consistent. For example when playing a film consisting of a video
308 stream and audio stream over a network, those two streams must be
309 synchronized so that the voice and the picture match up. A common
310   tolerance for audio/video sync is one NTSC video frame (about 33ms),
311   and to maintain the audience's perception of correct lip sync the
312 latency needs to be consistent within some reasonable tolerance, for
313 example 10%.
315 A common architecture for synchronizing multiple streams that have
316 different paths through the network (and thus potentially different
317 latencies) is to enable measurement of the latency of each path, and
318 have the data sinks (for example speakers) buffer (delay) all packets
319 on all but the slowest path. Each packet of each stream is assigned
320 a presentation time which is based on the longest required delay.
321 This implies that all sinks must maintain a common time reference of
322 sufficient accuracy, which can be achieved by any of various
323 techniques.
325 This type of architecture is commonly implemented using a central
326 controller that determines path delays and arbitrates buffering
327 delays.
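The buffering computation described above can be sketched in a few lines of Python (an illustrative sketch only; the function and sink names are assumptions, not from the draft):

```python
def presentation_delays(path_latency_ms):
    """Compute the extra buffering delay each sink must apply so that
    all sinks present packets simultaneously: the presentation time of
    every packet is based on the slowest (longest-latency) path."""
    slowest = max(path_latency_ms.values())
    return {sink: slowest - lat for sink, lat in path_latency_ms.items()}

# Example: the central controller has measured per-path latencies.
delays = presentation_delays({"speaker_a": 2.0, "speaker_b": 5.5,
                              "recorder": 3.0})
# speaker_b sits on the slowest path and buffers nothing extra;
# the other sinks delay packets by the difference.
```

This assumes all sinks share a common time reference of sufficient accuracy, as the text notes.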
329 2.2.2.1. Optimizations
331 The controller might also perform optimizations based on the
332 individual path delays, for example sinks that are closer to the
333 source can inform the controller that they can accept greater latency
334 since they will be buffering packets to match presentation times of
335 farther away sinks. The controller might then move a stream
336 reservation on a short path to a longer path in order to free up
337 bandwidth for other critical streams on that short path. See slides
338 3-5 of [SRP_LATENCY].
340 Additional optimization can be achieved in cases where sinks have
341 differing latency requirements, for example in a live outdoor concert
342 the speaker sinks have stricter latency requirements than the
343 recording hardware sinks. See slide 7 of [SRP_LATENCY].
345   Device cost can be reduced in a system with guaranteed reservations
346   and a small bounded latency, due to the reduced buffering (i.e.
347   memory) requirements on sink devices.  For example, a theme park
348 might broadcast a live event across the globe via a layer 3 protocol;
349 in such cases the size of the buffers required is proportional to the
350 latency bounds and jitter caused by delivery, which depends on the
351   worst-case segment of the end-to-end network path.  For example, on
352   today's open Internet the latency is typically unacceptable for audio
353 and video streaming without many seconds of buffering. In such
354 scenarios a single gateway device at the local network that receives
355 the feed from the remote site would provide the expensive buffering
356 required to mask the latency and jitter issues associated with long
357 distance delivery. Sink devices in the local location would have no
358 additional buffering requirements, and thus no additional costs,
359   beyond those required for delivery of local content.  The sink device
360   would receive packets identical to those sent by the source and would
361   be unaware of any latency or jitter issues along the path.
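The proportionality between buffer size and the worst-case path can be illustrated with a back-of-the-envelope calculation (the bit rates and latency figures here are assumptions for illustration, not values from the draft):

```python
def sink_buffer_bytes(stream_bitrate_bps, latency_bound_s, jitter_s):
    """Worst-case sink buffer size: enough to absorb the delivery
    latency bound plus jitter at the stream's bit rate."""
    return int(stream_bitrate_bps / 8 * (latency_bound_s + jitter_s))

# A 48 kHz / 24-bit stereo audio stream (2.304 Mbps) buffered for
# several seconds, as on the open internet, needs megabytes per sink...
open_internet = sink_buffer_bytes(48_000 * 24 * 2, 5.0, 1.0)

# ...while the same stream on a local path with a tight latency bound
# needs under a kilobyte, hence the device-cost reduction.
local_path = sink_buffer_bytes(48_000 * 24 * 2, 0.002, 0.001)
```

In the gateway scenario described above, only the single gateway device needs the large buffer; the local sinks only need the small one.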
364 2.3. Additional Stream Requirements
366 The requirements in this section are more specific yet are common to
367 multiple audio and video industry applications.
369 2.3.1. Deterministic Time to Establish Streaming
371 Some audio systems installed in public environments (airports,
372 hospitals) have unique requirements with regards to health, safety
373 and fire concerns. One such requirement is a maximum of 3 seconds
374 for a system to respond to an emergency detection and begin sending
375 appropriate warning signals and alarms without human intervention.
376 For this requirement to be met, the system must support a bounded and
377 acceptable time from a notification signal to specific stream
378 establishment. For further details see [ISO7240-16].
380 Similar requirements apply when the system is restarted after a power
381 cycle, cable re-connection, or system reconfiguration.
383 In many cases such re-establishment of streaming state must be
384 achieved by the peer devices themselves, i.e. without a central
385 controller (since such a controller may only be present during
386 initial network configuration).
388 Video systems introduce related requirements, for example when
389 transitioning from one camera feed to another. Such systems
390 currently use purpose-built hardware to switch feeds smoothly,
391 however there is a current initiative in the broadcast industry to
392 switch to a packet-based infrastructure (see [STUDIO_IP] and the ESPN
393 DC2 use case described below).
395 2.3.2. Use of Unused Reservations by Best-Effort Traffic
397 In cases where stream bandwidth is reserved but not currently used
398 (or is under-utilized) that bandwidth must be available to best-
399 effort (i.e. non-time-sensitive) traffic. For example a single
400 stream may be nailed up (reserved) for specific media content that
401 needs to be presented at different times of the day, ensuring timely
402 delivery of that content, yet in between those times the full
403 bandwidth of the network can be utilized for best-effort tasks such
404 as file transfers.
406   This also addresses a concern of IT network administrators who are
407   considering adding reserved-bandwidth traffic to their networks: that
408   users will reserve large amounts of bandwidth and never release it,
409   even when they are not using it, until no bandwidth is left.
412 2.3.3. Layer 3 Interconnecting Layer 2 Islands
414 As an intermediate step (short of providing guaranteed bandwidth
415 across the open internet) it would be valuable to provide a way to
416 connect multiple Layer 2 networks. For example layer 2 techniques
417 could be used to create a LAN for a single broadcast studio, and
418 several such studios could be interconnected via layer 3 links.
420 2.3.4. Secure Transmission
422 Digital Rights Management (DRM) is very important to the audio and
423   video industries.  Any time protected content is introduced into a
424   network there are DRM requirements that must be met (see
426 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of
427 network technology, however there are cases when a secure link
428 supporting authentication and encryption is required by content
429 owners to carry their audio or video content when it is outside their
430 own secure environment (for example see [DCI]).
432 As an example, two techniques are Digital Transmission Content
433 Protection (DTCP) and High-Bandwidth Digital Content Protection
434 (HDCP). HDCP content is not approved for retransmission within any
435 other type of DRM, while DTCP may be retransmitted under HDCP.
436   Therefore, if the source of a stream is outside the network and uses
437   HDCP protection, it may only be placed on the network with that same
438   HDCP protection.
440 2.3.5. Redundant Paths
442 On-air and other live media streams must be backed up with redundant
443 links that seamlessly act to deliver the content when the primary
444 link fails for any reason. In point-to-point systems this is
445 provided by an additional point-to-point link; the analogous
446 requirement in a packet-based system is to provide an alternate path
447 through the network such that no individual link can bring down the
448 system.
450 2.3.6. Link Aggregation
452 For transmitting streams that require more bandwidth than a single
453 link in the target network can support, link aggregation is a
454 technique for combining (aggregating) the bandwidth available on
455 multiple physical links to create a single logical link of the
456 required bandwidth. However, if aggregation is to be used, the
457 network controller (or equivalent) must be able to determine the
458 maximum latency of any path through the aggregate link (see Bounded
459 and Consistent Latency section above).
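The controller's check can be sketched as follows (a hypothetical Python fragment; the draft does not specify an algorithm): the latency bound usable for the logical link is that of the slowest member link, and the spread between members bounds the packet reordering that sinks must tolerate.

```python
def aggregate_latency_ms(member_latencies_ms):
    """For a logical link aggregated from several physical links,
    return (bound, spread): the usable latency bound is that of the
    slowest member, and the spread (max - min) bounds the worst-case
    packet reordering across members."""
    bound = max(member_latencies_ms)
    spread = bound - min(member_latencies_ms)
    return bound, spread

# Three member links with differing latencies.
bound, spread = aggregate_latency_ms([1.2, 0.8, 2.5])
```

The bound is what must be reported to the synchronization architecture of the "Bounded and Consistent Latency" section when the aggregate carries a reserved stream.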
461 2.3.7. Traffic Segregation
463 Sink devices may be low cost devices with limited processing power.
464 In order to not overwhelm the CPUs in these devices it is important
465 to limit the amount of traffic that these devices must process.
467 As an example, consider the use of individual seat speakers in a
468   cinema.  These speakers are typically required to be cost-reduced,
469   since the quantities in a single theater can reach hundreds of seats.
470 Discovery protocols alone in a one thousand seat theater can generate
471 enough broadcast traffic to overwhelm a low powered CPU. Thus an
472 installation like this will benefit greatly from some type of traffic
473 segregation that can define groups of seats to reduce traffic within
474 each group. All seats in the theater must still be able to
475 communicate with a central controller.
477 There are many techniques that can be used to support this
478 requirement including (but not limited to) the following examples.
480 2.3.7.1. Packet Forwarding Rules, VLANs and Subnets
482 Packet forwarding rules can be used to eliminate some extraneous
483 streaming traffic from reaching potentially low powered sink devices,
484 however there may be other types of broadcast traffic that should be
485   eliminated using other means, for example VLANs or IP subnets.
487 2.3.7.2. Multicast Addressing (IPv4 and IPv6)
489 Multicast addressing is commonly used to keep bandwidth utilization
490 of shared links to a minimum.
492 Because of the MAC Address forwarding nature of Layer 2 bridges it is
493 important that a multicast MAC address is only associated with one
494 stream. This will prevent reservations from forwarding packets from
495 one stream down a path that has no interested sinks simply because
496 there is another stream on that same path that shares the same
497 multicast MAC address.
499   Since each multicast MAC address can represent 32 different IPv4
500   multicast addresses, there must be a process in place to ensure this
501   does not occur.  Requiring the use of IPv6 addresses can achieve
502   this; however, due to the continued prevalence of IPv4, solutions
503   that are effective for IPv4 installations are also required.
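The 32-to-1 ambiguity follows from the standard IPv4 multicast-to-MAC mapping (RFC 1112), in which only the low-order 23 bits of the group address are copied into the 01:00:5e MAC prefix, discarding 5 variable bits. A short Python illustration:

```python
def ipv4_multicast_to_mac(addr: str) -> str:
    """Map an IPv4 multicast address to its Ethernet multicast MAC per
    RFC 1112: the low-order 23 bits of the group address are copied
    into the 01:00:5e:00:00:00 prefix, so 5 bits (2^5 = 32 groups per
    MAC) are lost."""
    octets = [int(o) for o in addr.split(".")]
    group = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    low23 = group & 0x7FFFFF
    mac = [0x01, 0x00, 0x5E,
           (low23 >> 16) & 0x7F, (low23 >> 8) & 0xFF, low23 & 0xFF]
    return ":".join(f"{b:02x}" for b in mac)

# 224.1.1.1 and 225.1.1.1 differ only in the 5 discarded bits,
# so both map to the same multicast MAC and would collide on a bridge.
print(ipv4_multicast_to_mac("224.1.1.1"))  # 01:00:5e:01:01:01
print(ipv4_multicast_to_mac("225.1.1.1"))  # 01:00:5e:01:01:01
```

A reservation process therefore has to treat colliding groups such as these as sharing one layer 2 forwarding entry, or avoid assigning them to distinct streams in the first place.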
505 2.4. Integration of Reserved Streams into IT Networks
507   A commonly cited goal of moving to a packet-based media
508   infrastructure is that costs can be reduced by using off-the-shelf
509   commodity network hardware.  In addition, economies of scale can be
510 realized by combining media infrastructure with IT infrastructure.
511 In keeping with these goals, stream reservation technology should be
512 compatible with existing protocols, and not compromise use of the
513 network for best effort (non-time-sensitive) traffic.
515 2.5. Security Considerations
517 Many industries that are moving from the point-to-point world to the
518 digital network world have little understanding of the pitfalls that
519 they can create for themselves with improperly implemented network
520 infrastructure. DetNet should consider ways to provide security
521 against DoS attacks in solutions directed at these markets. Some
522 considerations are given here as examples of ways that we can help
523 new users avoid common pitfalls.
525 2.5.1. Denial of Service
527 One security pitfall that this author is aware of involves the use of
528 technology that allows a presenter to throw the content from their
529 tablet or smart phone onto the A/V system that is then viewed by all
530 those in attendance. The facility introducing this technology was
531 quite excited to allow such modern flexibility to those who came to
532   speak.  What they had not realized was that, because no security was
533   put in place around this technology, it left a hole in the system
534   that allowed other attendees to "throw" their own content onto the
535   A/V system.
537 2.5.2. Control Protocols
539 Professional audio systems can include amplifiers that are capable of
540   generating hundreds or thousands of watts of audio power which, if
541   used incorrectly, can cause hearing damage to those in the vicinity.
542 Apart from the usual care required by the systems operators to
543 prevent such incidents, the network traffic that controls these
544 devices must be secured (as with any sensitive application traffic).
545 In addition, it would be desirable if the configuration protocols
546 that are used to create the network paths used by the professional
547 audio traffic could be designed to protect devices that are not meant
548 to receive high-amplitude content from having such potentially
549 damaging signals routed to them.
551 2.6. A State-of-the-Art Broadcast Installation Hits Technology Limits
553 ESPN recently constructed a state-of-the-art 194,000 sq ft, $125
554 million broadcast studio called DC2. The DC2 network is capable of
555 handling 46 Tbps of throughput with 60,000 simultaneous signals.
556 Inside the facility are 1,100 miles of fiber feeding four audio
557 control rooms (see details in [ESPN_DC2]).
559 In designing DC2 they replaced as much point-to-point technology as
560 they possibly could with packet-based technology. They constructed
561 seven individual studios using layer 2 LANs (with IEEE 802.1 AVB)
562 that were entirely effective at transporting audio within each LAN,
563 and they were very happy with the results. However, to interconnect
564 these layer 2 LAN islands they ended up using dedicated links,
565 because no standards-based routing solution was available.
567 This is the kind of motivation we have to develop these standards
568 because customers are ready and able to use them.
570 2.7. Acknowledgements
572 The editors would like to acknowledge the help of the following
573 individuals and the companies they represent:
575 Jeff Koftinoff, Meyer Sound
577 Jouni Korhonen, Associate Technical Director, Broadcom
579 Pascal Thubert, CTAO, Cisco
581 Kieran Tyrrell, Sienda New Media Technologies GmbH
583 3. Utility Telecom Use Cases
585 (This section was derived from draft-wetterwald-detnet-utilities-
586 reqs-02)
588 3.1. Overview
590 [I-D.finn-detnet-problem-statement] defines the characteristics of a
591 deterministic flow as a data communication flow with a bounded
592 latency, extraordinarily low frame loss, and very low jitter.
593 This document intends to define the utility requirements for
594 deterministic networking.
596 Utility Telecom Networks
598 The business and technology trends that are sweeping the utility
599 industry will drastically transform the utility business from the way
600 it has been for many decades. At the core of many of these changes
601 is a drive to modernize the electrical grid with an integrated
602 telecommunications infrastructure. However, interoperability
603 concerns, legacy networks, disparate tools, and stringent security
604 requirements all add complexity to the grid transformation. Given
605 the range and diversity of the requirements that should be addressed
606 by the next generation telecommunications infrastructure, utilities
607 need to adopt a holistic architectural approach to integrate the
608 electrical grid with digital telecommunications across the entire
609 power delivery chain.
611 Many utilities still rely on complex environments formed of multiple
612 application-specific, proprietary networks. Information is siloed
613 between operational areas. This prevents utility operations from
614 realizing the operational efficiency benefits, visibility, and
615 functional integration of operational information across grid
616 applications and data networks. The key to modernizing grid
617 telecommunications is to provide a common, adaptable, multi-service
618 network infrastructure for the entire utility organization. Such a
619 network serves as the platform for current capabilities while
620 enabling future expansion of the network to accommodate new
621 applications and services.
623 To meet this diverse set of requirements, both today and in the
624 future, the next generation utility telecommunications network will
625 be based on an open, standards-based IP architecture. An end-to-end IP
626 architecture takes advantage of nearly three decades of IP technology
627 development, facilitating interoperability across disparate networks
628 and devices, as has already been demonstrated in many mission-
629 critical and highly secure networks.
631 The IEC (International Electrotechnical Commission) and various
632 National Committees have mandated a specific ad hoc group (AHG8) to
633 define the migration strategy to IPv6 for all the IEC TC57 power
634 automation standards. IPv6 is seen as the obvious future
635 telecommunications technology for the Smart Grid. The ad hoc group
636 presented its conclusions to the IEC coordination group at the end
637 of 2014.
639 It is imperative that utilities participate in standards development
640 bodies to influence the development of future solutions and to
641 benefit from shared experiences of other utilities and vendors.
643 3.2. Telecommunications Trends and General Telecommunications
644 Requirements
646 These general telecommunications requirements are over and above the
647 specific requirements of the use cases that have been addressed so
648 far. These include both current and future telecommunications
649 related requirements that should be factored into the network
650 architecture and design.
652 3.2.1. General Telecommunications Requirements
654 o IP Connectivity everywhere
656 o Monitoring services everywhere and from different remote centers
658 o Move services to a virtual data center
660 o Unify access to applications / information from the corporate
661 network
663 o Unify services
665 o Unified Communications Solutions
666 o Mix of fiber and microwave technologies - obsolescence of SONET/
667 SDH or TDM
669 o Standardize grid telecommunications protocols on open standards
670 to ensure interoperability
672 o Reliable Telecommunications for Transmission and Distribution
673 Substations
675 o IEEE 1588 time synchronization Client / Server Capabilities
677 o Integration of Multicast Design
679 o QoS Requirements Mapping
681 o Enable Future Network Expansion
683 o Substation Network Resilience
685 o Fast Convergence Design
687 o Scalable Headend Design
689 o Define Service Level Agreements (SLA) and Enable SLA Monitoring
691 o Integration of 3G/4G Technologies and future technologies
693 o Ethernet Connectivity for Station Bus Architecture
695 o Ethernet Connectivity for Process Bus Architecture
697 o Protection, teleprotection and PMU (Phasor Measurement Unit) on IP
699 3.2.1.1. Migration to Packet-Switched Network
701 Throughout the world, utilities are increasingly planning for a
702 future based on smart grid applications requiring advanced
703 telecommunications systems. Many of these applications utilize
704 packet connectivity for communicating information and control signals
705 across the utility's Wide Area Network (WAN), made possible by
706 technologies such as multiprotocol label switching (MPLS). The data
707 that traverses the utility WAN includes:
709 o Grid monitoring, control, and protection data
711 o Non-control grid data (e.g. asset data for condition-based
712 monitoring)
714 o Physical safety and security data (e.g. voice and video)
716 o Remote worker access to corporate applications (voice, maps,
717 schematics, etc.)
719 o Field area network backhaul for smart metering, and distribution
720 grid management
722 o Enterprise traffic (email, collaboration tools, business
723 applications)
725 WANs support this wide variety of traffic to and from substations,
726 the transmission and distribution grid, generation sites, between
727 control centers, and between work locations and data centers. To
728 maintain this rapidly expanding set of applications, many utilities
729 are taking steps to evolve present time-division multiplexing (TDM)
730 based and frame relay infrastructures to packet systems. Packet-
731 based networks are designed to provide greater functionalities and
732 higher levels of service for applications, while continuing to
733 deliver reliability and deterministic (real-time) traffic support.
735 3.2.2. Applications, Use cases and traffic patterns
737 Among the numerous applications and use cases that a utility deploys
738 today, many rely on high availability and deterministic behaviour of
739 the telecommunications networks. Protection use cases and generation
740 control are the most demanding and cannot rely on a best effort
741 approach.
743 3.2.2.1. Transmission use cases
745 Protection means not only the protection of the human operator but
746 also the protection of the electrical equipment and the preservation
747 of the stability and frequency of the grid. If a fault occurs on
748 the transmission or distribution of electricity, it can cause harm
749 to the human operator, damage very costly electrical equipment, and
750 perturb the grid, leading to blackouts. The time and reliability
751 requirements are therefore very stringent, to avoid dramatic
752 impacts on the electrical infrastructure.
754 3.2.2.1.1. Teleprotection
756 The key criteria for measuring Teleprotection performance are command
757 transmission time, dependability and security. These criteria are
758 defined by the IEC standard 60834 as follows:
760 o Transmission time (Speed): The time between the moment where state
761 changes at the transmitter input and the moment of the
762 corresponding change at the receiver output, including propagation
763 delay. Overall operating time for a Teleprotection system
764 includes the time for initiating the command at the transmitting
765 end, the propagation delay over the network (including equipment)
766 and the selection and decision time at the receiving end,
767 including any additional delay due to a noisy environment.
769 o Dependability: The ability to issue and receive valid commands in
770 the presence of interference and/or noise, by minimizing the
771 probability of missing command (PMC). Dependability targets are
772 typically set for a specific bit error rate (BER) level.
774 o Security: The ability to prevent false tripping due to a noisy
775 environment, by minimizing the probability of unwanted commands
776 (PUC). Security targets are also set for a specific bit error
777 rate (BER) level.
779 Additional key elements that may impact Teleprotection performance
780 include the bandwidth of the Teleprotection system and its
781 resiliency, or failure recovery capacity. Transmission time,
782 bandwidth utilization and resiliency are directly linked to the
783 telecommunications equipment and the connections that are used to
784 transfer the commands between relays.
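The dependability and security criteria above can be related to the raw bit error rate of the channel. The following sketch is illustrative only (not taken from IEC 60834): it assumes independent bit errors and ignores the error coding that real teleprotection codecs add, but it shows how a BER target bounds the chance that a fixed-length command frame is corrupted.

```python
# Illustrative only (not from IEC 60834): probability that an n-bit
# command frame arrives with at least one corrupted bit, assuming
# independent bit errors at a given bit error rate (BER).

def frame_error_probability(ber: float, frame_bits: int) -> float:
    """Chance that at least one bit of the frame is in error."""
    return 1.0 - (1.0 - ber) ** frame_bits

# Example: a hypothetical 200-bit command frame at BER 1e-6 is
# corrupted with probability of roughly 2e-4; dependability (PMC) and
# security (PUC) targets are set against figures of this kind.
p_corrupt = frame_error_probability(1e-6, 200)
```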
786 3.2.2.1.1.1. Latency Budget Consideration
788 Delay requirements for utility networks may vary depending upon a
789 number of parameters, such as the specific protection equipment
790 used. Most power line equipment can tolerate short circuits or
791 faults for up to approximately five power cycles before sustaining
792 irreversible damage or affecting other segments in the network. This
793 translates to a total fault clearance time of 100ms. As a safety
794 precaution, however, the actual operation time of protection systems
795 is limited to 70-80 percent of this period, including fault
796 recognition time, command transmission time and line breaker
797 switching time. Some system components, such as large
798 electromechanical switches, require a particularly long time to
799 operate and take up the majority of the total clearance time,
800 leaving only a 10ms window for the telecommunications part of the
801 protection scheme, independent of the distance to travel. Given the
802 sensitivity of the issue, new networks impose even more stringent
803 requirements: IEC standard 61850 limits the transfer time for the
804 most critical protection messages to 1/4 - 1/2 cycle (4 - 8ms at 60Hz).
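The budget arithmetic above can be made explicit with a short worked sketch. The 50 Hz assumption behind the 100ms figure (five 20 ms cycles) is ours; the 4-8ms transfer limit applies to 60 Hz lines as stated in the text.

```python
# Worked arithmetic for the latency budget described above.  The 100ms
# total clearance figure matches five cycles on a 50 Hz grid (an
# assumption of this sketch); the 4-8ms transfer limit is for 60 Hz.
cycle_50hz_ms = 1000.0 / 50.0   # 20 ms per power cycle at 50 Hz
cycle_60hz_ms = 1000.0 / 60.0   # ~16.7 ms per power cycle at 60 Hz

# Five power cycles before irreversible damage:
total_clearance_ms = 5 * cycle_50hz_ms           # 100 ms

# Protection systems must operate within 70-80 percent of that:
operation_budget_ms = 0.7 * total_clearance_ms   # 70 ms (lower bound)

# IEC 61850 transfer-time limit for the most critical messages,
# 1/4 to 1/2 of a 60 Hz cycle:
transfer_low_ms = cycle_60hz_ms / 4              # ~4.2 ms
transfer_high_ms = cycle_60hz_ms / 2             # ~8.3 ms
```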
806 3.2.2.1.1.2. Asymmetric Delay
808 In addition to minimal transmission delay, a differential protection
809 telecommunications channel must be synchronous, i.e., experiencing
810 symmetrical channel delay in transmit and receive paths. This
811 requires special attention in jitter-prone packet networks. While
812 optimally Teleprotection systems should support zero asymmetric
813 delay, typical legacy relays can tolerate discrepancies of up to
814 750 us.
816 The main tools available for lowering delay variation below this
817 threshold are:
819 o A jitter buffer at the multiplexers on each end of the line can be
820 used to offset delay variation by queuing sent and received
821 packets. The length of the queues must balance the need to
822 regulate the rate of transmission with the need to limit overall
823 delay, as larger buffers result in increased latency. This is the
824 traditional TDM way to fulfill this requirement.
826 o Traffic management tools ensure that the Teleprotection signals
827 receive the highest transmission priority and minimize the jitter
828 added along the path. This is one way to meet the
829 requirement in IP networks.
831 o Standard packet-based synchronization technologies, such as IEEE
832 1588-2008 Precision Time Protocol (PTP) and Synchronous Ethernet
833 (Sync-E), can help maintain stable networks by keeping a highly
834 accurate clock source on the different network devices involved.
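The jitter-buffer approach in the first bullet above can be illustrated with a small model: a simplified fixed-schedule playout with hypothetical packet timings, not a real multiplexer implementation.

```python
# Simplified jitter-buffer model: packets leave the sender at a fixed
# period but arrive with variable network delay; the receiver delays
# playout by a fixed buffer depth so output spacing is constant again.
# All timings below are hypothetical.

def playout_times(arrivals_ms, period_ms, buffer_ms):
    """Fixed playout schedule anchored on the first arrival.  Returns
    None for packets that miss their slot (buffer too shallow)."""
    base = arrivals_ms[0] + buffer_ms
    out = []
    for i, arrival in enumerate(arrivals_ms):
        slot = base + i * period_ms
        out.append(slot if arrival <= slot else None)
    return out

# Packets sent every 1 ms with network delay varying by 0.6 ms:
sends = [i * 1.0 for i in range(5)]
delays = [2.0, 2.5, 2.2, 2.6, 2.1]
arrivals = [s + d for s, d in zip(sends, delays)]

# A 1 ms buffer absorbs the variation: every packet makes its slot and
# the output spacing is exactly the sending period (zero residual
# jitter), at the cost of 1 ms of added latency.
smoothed = playout_times(arrivals, period_ms=1.0, buffer_ms=1.0)
```

This also shows the trade-off named in the bullet: a deeper buffer tolerates more delay variation but adds exactly that much latency to every packet.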
836 3.2.2.1.1.2.1. Other traffic characteristics
838 o Redundancy: The existence in a system of more than one means of
839 accomplishing a given function.
841 o Recovery time: The duration of time within which a business
842 process must be restored after any type of disruption in order to
843 avoid unacceptable consequences associated with a break in
844 business continuity.
846 o Performance management: In networking, a management function
847 defined for controlling and analyzing different parameters and
848 metrics, such as throughput and error rate.
850 o Packet loss: One or more packets of data travelling across a
851 network fail to reach their destination.
853 3.2.2.1.1.2.2. Teleprotection network requirements
855 The following table captures the main network requirements (based
856 on the IEC 61850 standard):
858 +-----------------------------+-------------------------------------+
859 | Teleprotection Requirement  | Attribute                           |
860 +-----------------------------+-------------------------------------+
861 | One way maximum delay       | 4-10 ms                             |
862 | Asymmetric delay required   | Yes                                 |
863 | Maximum jitter              | less than 250 us (750 us for legacy |
864 |                             | IED)                                |
865 | Topology                    | Point to point, point to Multi-     |
866 |                             | point                               |
867 | Availability                | 99.9999%                            |
868 | Precise timing required     | Yes                                 |
869 | Recovery time on node       | less than 50ms - hitless            |
870 | failure                     |                                     |
871 | Performance management      | Yes, Mandatory                      |
872 | Redundancy                  | Yes                                 |
873 | Packet loss                 | 0.1% to 1%                          |
874 +-----------------------------+-------------------------------------+
876 Table 1: Teleprotection network requirements
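The thresholds in Table 1 lend themselves to automated compliance checking. The sketch below is hypothetical (the field names are ours; where the table gives a range, such as 4-10 ms delay or 0.1%-1% loss, the upper bound is used):

```python
# Hypothetical compliance check against the Table 1 teleprotection
# requirements.  Field names are ours; where the table gives a range
# (4-10 ms delay, 0.1%-1% loss), the upper bound is used.

TELEPROTECTION_REQS = {
    "one_way_delay_ms": 10.0,     # one way maximum delay
    "jitter_us": 250.0,           # 750 us would apply to legacy IEDs
    "availability_pct": 99.9999,  # minimum availability
    "recovery_ms": 50.0,          # recovery time on node failure
    "packet_loss_pct": 1.0,       # maximum packet loss
}

def violations(measured):
    """Names of the requirements that the measured metrics violate."""
    v = []
    r = TELEPROTECTION_REQS
    if measured["one_way_delay_ms"] > r["one_way_delay_ms"]:
        v.append("one_way_delay_ms")
    if measured["jitter_us"] > r["jitter_us"]:
        v.append("jitter_us")
    if measured["availability_pct"] < r["availability_pct"]:
        v.append("availability_pct")
    if measured["recovery_ms"] > r["recovery_ms"]:
        v.append("recovery_ms")
    if measured["packet_loss_pct"] > r["packet_loss_pct"]:
        v.append("packet_loss_pct")
    return v
```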
878 3.2.2.1.2. Inter-Trip Protection scheme
880 Inter-tripping is the controlled tripping of a circuit breaker to
881 complete the isolation of a circuit or piece of apparatus in concert
882 with the tripping of other circuit breakers. The main use of such
883 schemes is to ensure that protection at both ends of a faulted
884 circuit will operate to isolate the equipment concerned. Inter-
885 tripping schemes use signaling to convey a trip command to remote
886 circuit breakers to isolate circuits.
888 +--------------------------------+----------------------------------+
889 | Inter-Trip protection          | Attribute                        |
890 | Requirement                    |                                  |
891 +--------------------------------+----------------------------------+
892 | One way maximum delay          | 5 ms                             |
893 | Asymmetric delay required      | No                               |
894 | Maximum jitter                 | Not critical                     |
895 | Topology                       | Point to point, point to Multi-  |
896 |                                | point                            |
897 | Bandwidth                      | 64 Kbps                          |
898 | Availability                   | 99.9999%                         |
899 | Precise timing required        | Yes                              |
900 | Recovery time on node failure  | less than 50ms - hitless         |
901 | Performance management         | Yes, Mandatory                   |
902 | Redundancy                     | Yes                              |
903 | Packet loss                    | 0.1%                             |
904 +--------------------------------+----------------------------------+
906 Table 2: Inter-Trip protection network requirements
908 3.2.2.1.3. Current Differential Protection Scheme
910 Current differential protection is commonly used for line protection,
911 and is typical for protecting parallel circuits. A main advantage
912 for differential protection is that, compared to overcurrent
913 protection, it allows only the faulted circuit to be de-energized in
914 case of a fault. At both ends of the line, the current is measured
915 by the differential relays, and based on Kirchhoff's law, both relays
916 will trip the circuit breaker if the current going into the line does
917 not equal the current going out of the line. This type of protection
918 scheme assumes some form of communications being present between the
919 relays at both ends of the line, to allow both relays to compare
920 measured current values. A fault in line 1 will cause overcurrent to
921 be flowing in both lines, but because the current in line 2 is a
922 through-flowing current, this current is measured equal at both
923 ends of the line, therefore the differential relays on line 2 will
924 not trip line 2. Line 1 will be tripped, as the relays will not
925 measure the same currents at both ends of the line. Line
926 differential protection schemes assume a very low telecommunications
927 delay between both relays, often as low as 5ms. Moreover, as those
928 systems are often not time-synchronized, they also assume symmetric
929 telecommunications paths with constant delay, which allows comparing
930 current measurement values taken at the exact same time.
932 +----------------------------------+--------------------------------+
933 | Current Differential protection  | Attribute                      |
934 | Requirement                      |                                |
935 +----------------------------------+--------------------------------+
936 | One way maximum delay            | 5 ms                           |
937 | Asymmetric delay required        | Yes                            |
938 | Maximum jitter                   | less than 250 us (750 us for   |
939 |                                  | legacy IED)                    |
940 | Topology                         | Point to point, point to       |
941 |                                  | Multi-point                    |
942 | Bandwidth                        | 64 Kbps                        |
943 | Availability                     | 99.9999%                       |
944 | Precise timing required          | Yes                            |
945 | Recovery time on node failure    | less than 50ms - hitless       |
946 | Performance management           | Yes, Mandatory                 |
947 | Redundancy                       | Yes                            |
948 | Packet loss                      | 0.1%                           |
949 +----------------------------------+--------------------------------+
951 Table 3: Current Differential Protection requirements
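The comparison performed by the relays reduces to checking Kirchhoff's current law across the two ends of the line. Below is a minimal sketch of that trip decision; the restraint threshold is illustrative only, and real relays use percentage-restraint characteristics rather than a fixed margin.

```python
# Minimal differential-relay decision: trip when the current entering
# the line does not (within a restraint threshold) equal the current
# leaving it.  The threshold value is illustrative only; it presumes
# both measurements were sampled at the same instant, which is what
# the symmetric-delay requirement on the channel guarantees.

def differential_trip(i_in_amps, i_out_amps, restraint_amps=50.0):
    """True when the differential current exceeds the restraint."""
    return abs(i_in_amps - i_out_amps) > restraint_amps

# Healthy line: current in equals current out, no trip.  Internal
# fault: part of the current flows into the fault, so the relays at
# the two ends measure different values and trip the breaker.
```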
953 3.2.2.1.4. Distance Protection Scheme
955 Distance (Impedance Relay) protection scheme is based on voltage and
956 current measurements. A fault on a circuit will generally create a
957 sag in the voltage level. If the ratio of voltage to current
958 measured at the protection relay terminals, which equates to an
959 impedance element, falls within a set threshold, the circuit breaker
960 will operate. The operating characteristics of this protection are
961 based on the line characteristics. This means that when a fault
962 appears on the line, the impedance setting in the relay is compared
963 to the apparent impedance of the line from the relay terminals to the
964 fault. If the relay setting is determined to be below the apparent
965 impedance it is determined that the fault is within the zone of
966 protection. When the transmission line length is under a minimum
967 length, distance protection becomes more difficult to coordinate. In
968 these instances the best choice of protection is current differential
969 protection.
971 +-------------------------------+-----------------------------------+
972 | Distance protection           | Attribute                         |
973 | Requirement                   |                                   |
974 +-------------------------------+-----------------------------------+
975 | One way maximum delay         | 5 ms                              |
976 | Asymmetric delay required     | No                                |
977 | Maximum jitter                | Not critical                      |
978 | Topology                      | Point to point, point to Multi-   |
979 |                               | point                             |
980 | Bandwidth                     | 64 Kbps                           |
981 | Availability                  | 99.9999%                          |
982 | Precise timing required       | Yes                               |
983 | Recovery time on node failure | less than 50ms - hitless          |
984 | Performance management        | Yes, Mandatory                    |
985 | Redundancy                    | Yes                               |
986 | Packet loss                   | 0.1%                              |
987 +-------------------------------+-----------------------------------+
989 Table 4: Distance Protection requirements
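The impedance comparison described above can be sketched as follows. This is illustrative only; real distance relays work with complex impedances and multiple protection zones.

```python
# Illustrative distance (impedance) relay decision: trip when the
# apparent impedance seen from the relay terminals (V/I) falls below
# the zone setting, indicating a fault inside the protected section.
# Real relays use complex impedances and several zones.

def distance_trip(voltage_v, current_a, zone_impedance_ohms):
    """True when the apparent impedance is inside the protection zone."""
    if current_a == 0.0:
        return False  # no current flowing, nothing to evaluate
    apparent_z_ohms = voltage_v / current_a
    return apparent_z_ohms < zone_impedance_ohms

# A voltage sag combined with high fault current drives V/I below the
# line's impedance setting and operates the breaker.
```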
991 3.2.2.1.5. Inter-Substation Protection Signaling
993 This use case describes the exchange of Sampled Value and/or GOOSE
994 (Generic Object Oriented Substation Events) messages between
995 Intelligent Electronic Devices (IED) in two substations for
996 protection and tripping coordination. The two IEDs are in a master-
997 slave mode.
999 The Current Transformer or Voltage Transformer (CT/VT) in one
1000 substation sends the sampled analog voltage or current value to the
1001 Merging Unit (MU) over hard wire. The merging unit sends the time-
1002 synchronized 61850-9-2 sampled values to the slave IED. The slave
1003 IED forwards the information to the Master IED in the other
1004 substation. The master IED makes the determination (for example
1005 based on sampled value differentials) to send a trip command to the
1006 originating IED. Once the slave IED/Relay receives the GOOSE trip
1007 for breaker tripping, it opens the breaker. It then sends a
1008 confirmation message back to the master. All data exchanges between
1009 IEDs are either through Sampled Value and/or GOOSE messages.
1011 +----------------------------------+--------------------------------+
1012 | Inter-Substation protection      | Attribute                      |
1013 | Requirement                      |                                |
1014 +----------------------------------+--------------------------------+
1015 | One way maximum delay            | 5 ms                           |
1016 | Asymmetric delay required        | No                             |
1017 | Maximum jitter                   | Not critical                   |
1018 | Topology                         | Point to point, point to       |
1019 |                                  | Multi-point                    |
1020 | Bandwidth                        | 64 Kbps                        |
1021 | Availability                     | 99.9999%                       |
1022 | Precise timing required          | Yes                            |
1023 | Recovery time on node failure    | less than 50ms - hitless       |
1024 | Performance management           | Yes, Mandatory                 |
1025 | Redundancy                       | Yes                            |
1026 | Packet loss                      | 1%                             |
1027 +----------------------------------+--------------------------------+
1029 Table 5: Inter-Substation Protection requirements
1031 3.2.2.1.6. Intra-Substation Process Bus Communications
1033 This use case describes the data flow from the CT/VT to the IEDs in
1034 the substation via the merging unit (MU). The CT/VT in the
1035 substation sends the sampled value (analog voltage or current) to the
1036 Merging Unit (MU) over hard wire. The merging unit sends the time-
1037 synchronized 61850-9-2 sampled values to the IEDs in the substation
1038 in GOOSE message format. The GPS Master Clock can send 1PPS or
1039 IRIG-B format to the MU through a serial port, or use the IEEE 1588
1040 protocol over the network. Process bus communication using 61850 simplifies
1041 connectivity within the substation and removes the requirement for
1042 multiple serial connections and the slow serial bus
1043 architectures that are typically used. This also ensures increased
1044 flexibility and increased speed with the use of multicast messaging
1045 between multiple devices.
1047 +----------------------------------+--------------------------------+
1048 | Intra-Substation protection      | Attribute                      |
1049 | Requirement                      |                                |
1050 +----------------------------------+--------------------------------+
1051 | One way maximum delay            | 5 ms                           |
1052 | Asymmetric delay required        | No                             |
1053 | Maximum jitter                   | Not critical                   |
1054 | Topology                         | Point to point, point to       |
1055 |                                  | Multi-point                    |
1056 | Bandwidth                        | 64 Kbps                        |
1057 | Availability                     | 99.9999%                       |
1058 | Precise timing required          | Yes                            |
1059 | Recovery time on node failure    | less than 50ms - hitless       |
1060 | Performance management           | Yes, Mandatory                 |
1061 | Redundancy                       | Yes - No                       |
1062 | Packet loss                      | 0.1%                           |
1063 +----------------------------------+--------------------------------+
1065 Table 6: Intra-Substation Protection requirements
1067 3.2.2.1.7. Wide Area Monitoring and Control Systems
1069 The application of synchrophasor measurement data from Phasor
1070 Measurement Units (PMU) to Wide Area Monitoring and Control Systems
1071 promises to provide important new capabilities for improving system
1072 stability. Access to PMU data enables more timely situational
1073 awareness over larger portions of the grid than what has been
1074 possible historically with normal SCADA (Supervisory Control and Data
1075 Acquisition) data. Handling the volume and real-time nature of
1076 synchrophasor data presents unique challenges for existing
1077 application architectures. A Wide Area Management System (WAMS) makes
1078 it possible for the condition of the bulk power system to be observed
1079 and understood in real-time so that protective, preventative, or
1080 corrective action can be taken. Because of the very high sampling
1081 rate of measurements and the strict requirement for time
1082 synchronization of the samples, WAMS has stringent telecommunications
1083 requirements in an IP network that are captured in the following
1084 table:
1086 +----------------------+--------------------------------------------+
1087 | WAMS Requirement     | Attribute                                  |
1088 +----------------------+--------------------------------------------+
1089 | One way maximum      | 50 ms                                      |
1090 | delay                |                                            |
1091 | Asymmetric delay     | No                                         |
1092 | required             |                                            |
1093 | Maximum jitter       | Not critical                               |
1094 | Topology             | Point to point, point to Multi-point,      |
1095 |                      | Multi-point to Multi-point                 |
1096 | Bandwidth            | 100 Kbps                                   |
1097 | Availability         | 99.9999%                                   |
1098 | Precise timing       | Yes                                        |
1099 | required             |                                            |
1100 | Recovery time on     | less than 50ms - hitless                   |
1101 | node failure         |                                            |
1102 | Performance          | Yes, Mandatory                             |
1103 | management           |                                            |
1104 | Redundancy           | Yes                                        |
1105 | Packet loss          | 1%                                         |
1106 +----------------------+--------------------------------------------+
1108 Table 7: WAMS Special Communication Requirements
1110 3.2.2.1.8. IEC 61850 WAN engineering guidelines requirement
1111 classification
1113 The IEC (International Electrotechnical Commission) has recently
1114 published a Technical Report which offers guidelines on how to define
1115 and deploy Wide Area Networks for the interconnections of electric
1116 substations, generation plants and SCADA operation centers. IEC
1117 61850-90-12 provides a classification of WAN communication
1118 requirements into four classes. The following table summarizes
1119 these requirements:
1121 +----------------+------------+------------+------------+-----------+
1122 | WAN            | Class WA   | Class WB   | Class WC   | Class WD  |
1123 | Requirement    |            |            |            |           |
1124 +----------------+------------+------------+------------+-----------+
1125 | Application    | EHV (Extra | HV (High   | MV (Medium | General   |
1126 | field          | High       | Voltage)   | Voltage)   | purpose   |
1127 |                | Voltage)   |            |            |           |
1128 | Latency        | 5 ms       | 10 ms      | 100 ms     | > 100 ms  |
1129 | Jitter         | 10 us      | 100 us     | 1 ms       | 10 ms     |
1130 | Latency        | 100 us     | 1 ms       | 10 ms      | 100 ms    |
1131 | Asymmetry      |            |            |            |           |
1132 | Time Accuracy  | 1 us       | 10 us      | 100 us     | 10 to 100 |
1133 |                |            |            |            | ms        |
1134 | Bit Error rate | 10^-7 to   | 10^-5 to   | 10^-3      |           |
1135 |                | 10^-6      | 10^-4      |            |           |
1136 | Unavailability | 10^-7 to   | 10^-5 to   | 10^-3      |           |
1137 |                | 10^-6      | 10^-4      |            |           |
1138 | Recovery delay | Zero       | 50 ms      | 5 s        | 50 s      |
1139 | Cyber security | extremely  | High       | Medium     | Medium    |
1140 |                | high       |            |            |           |
1141 +----------------+------------+------------+------------+-----------+
1143 Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC
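As an illustration of how the classes in Table 8 might be applied, the hypothetical helper below maps an application's latency budget to the least stringent class that still satisfies it. The latency bounds are transcribed from the table; the mapping rule itself is ours.

```python
# Hypothetical mapping from an application's latency budget to the
# least stringent IEC 61850-90-12 WAN class whose latency bound still
# meets it (bounds transcribed from the table above; budgets tighter
# than class WA's 5 ms also fall back to WA, the strictest class).

def wan_class_for(max_latency_ms):
    if max_latency_ms > 100.0:
        return "WD"  # general purpose: latency > 100 ms
    if max_latency_ms >= 100.0:
        return "WC"  # MV applications: 100 ms
    if max_latency_ms >= 10.0:
        return "WB"  # HV applications: 10 ms
    return "WA"      # EHV applications: 5 ms
```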
1145 3.2.2.2. Distribution use case
1147 3.2.2.2.1. Fault Location Isolation and Service Restoration (FLISR)
1149 As the name implies, Fault Location, Isolation, and Service
1150 Restoration (FLISR) refers to the ability to automatically locate the
1151 fault, isolate the fault, and restore service in the distribution
1152 network. It is a self-healing feature whose purpose is to minimize
1153 the impact of faults by serving portions of the loads on the affected
1154 circuit by switching to other circuits. It reduces the number of
1155 customers that experience a sustained power outage by reconfiguring
1156 distribution circuits. This will likely be the first widespread
1157 application of distributed intelligence in the grid. Secondary
1158 substations can be connected to multiple primary substations.
1159 Normally, static power switch statuses (open/closed) in the network
1160 dictate the power flow to secondary substations. Reconfiguring the
1161 network in the event of a fault is typically done manually on site to
1162 operate switchgear to energize/de-energize alternate paths.
1163 Automating the operation of substation switchgear allows the utility
1164 to have a more dynamic network where the flow of power can be altered
1165 under fault conditions but also during times of peak load. It allows
1166 the utility to shift peak loads around the network. Or, to be more
1167 precise, to alter the configuration of the network to move loads
1168 between different primary substations. The FLISR capability can be
1169 enabled in two modes:
1171 o Managed centrally from DMS (Distribution Management System), or
1173 o Executed locally through distributed control via intelligent
1174 switches and fault sensors.
1176 There are three distinct sub-functions that are performed:
1178 1. Fault Location Identification
1180 This sub-function is initiated by SCADA inputs, such as lockouts,
1181 fault indications/location, and, also, by input from the Outage
1182 Management System (OMS), and in the future by inputs from fault-
1183 predicting devices. It determines the specific protective device,
1184 which has cleared the sustained fault, identifies the de-energized
1185 sections, and estimates the probable location of the actual or the
1186 expected fault. It distinguishes faults cleared by controllable
1187 protective devices from those cleared by fuses, and identifies
1188 momentary outages and inrush/cold load pick-up currents. This step
1189 is also referred to as Fault Detection Classification and Location
1190 (FDCL). This step helps to expedite the restoration of faulted
1191 sections through fast fault location identification and improved
1192 diagnostic information available for crew dispatch. It also provides
1193 visualization of fault information to design and implement a
1194 switching plan to isolate the fault.
1196 2. Fault Type Determination
1198 I. Indicates faults cleared by controllable protective devices by
1199 distinguishing between:
1201 a. Faults cleared by fuses
1203 b. Momentary outages
1205 c. Inrush/cold load current
1207 II. Determines the faulted sections based on SCADA fault indications
1208 and protection lockout signals
1210 III. Increases the accuracy of the fault location estimation based
1211 on SCADA fault current measurements and real-time fault analysis
1213 3. Fault Isolation and Service Restoration
1214 Once the location and type of the fault have been pinpointed, the
1215 system will attempt to isolate the fault and restore the non-faulted
1216 section of the network. This can have three modes of operation:
1218 I. Closed-loop mode: This is initiated by the fault location sub-
1219 function. It generates a switching order (i.e., sequence of
1220 switching) for the remotely controlled switching devices to isolate
1221 the faulted section, and restore service to the non-faulted sections.
1222 The switching order is automatically executed via SCADA.
1224 II. Advisory mode: This is initiated by the fault location sub-
1225 function. It generates a switching order for remotely and manually
1226 controlled switching devices to isolate the faulted section, and
1227 restore service to the non-faulted sections. The switching order is
1228 presented to the operator for approval and execution.
1230 III. Study mode: The operator initiates this function. It analyzes
1231 a saved case modified by the operator, and generates a switching
1232 order under the operating conditions specified by the operator.
1234 With the increasing volume of data collected through fault sensors,
1235 utilities will use Big Data query and analysis tools to study outage
1236 information and to anticipate and prevent outages by detecting
1237 failure patterns and their correlation with asset age, type, load
1238 profiles, time of day, weather, and other conditions. This helps to
1239 discover the conditions that lead to faults, so that the necessary
1240 preventive and corrective measures can be taken.
1242 +----------------------+--------------------------------------------+
1243 | FLISR Requirement    | Attribute                                  |
1244 +----------------------+--------------------------------------------+
1245 | One way maximum      | 80 ms                                      |
1246 | delay                |                                            |
1247 | Asymmetric delay     | No                                         |
1248 | required             |                                            |
1249 | Maximum jitter       | 40 ms                                      |
1250 | Topology             | Point to point, point to multi-point,      |
1251 |                      | multi-point to multi-point                 |
1252 | Bandwidth            | 64 Kbps                                    |
1253 | Availability         | 99.9999 %                                  |
1254 | Precise timing       | Yes                                        |
1255 | required             |                                            |
1256 | Recovery time on     | Depends on customer impact                 |
1257 | node failure         |                                            |
1258 | Performance          | Yes, mandatory                             |
1259 | management           |                                            |
1260 | Redundancy           | Yes                                        |
1261 | Packet loss          | 0.1 %                                      |
1262 +----------------------+--------------------------------------------+
1264 Table 9: FLISR Communication Requirements
1266 3.2.2.3. Generation use case
1268 3.2.2.3.1. Frequency Control / Automatic Generation Control (AGC)
1270 The system frequency should be maintained within a very narrow
1271 band. Deviations from the acceptable frequency range are detected
1272 and forwarded to the Load Frequency Control (LFC) system so that
1273 the required generation increase or decrease pulses can be sent to
1274 the power plants for frequency regulation. The trend in system
1275 frequency is a measure of the mismatch between demand and
1276 generation, and is a necessary parameter for load control in
1277 interconnected systems.
1278 Automatic generation control (AGC) is a system for adjusting the
1279 power output of generators at different power plants, in response to
1280 changes in the load. Since a power grid requires that generation and
1281 load closely balance moment by moment, frequent adjustments to the
1282 output of generators are necessary. The balance can be judged by
1283 measuring the system frequency; if it is increasing, more power is
1284 being generated than used, and all machines in the system are
1285 accelerating. If the system frequency is decreasing, more demand is
1286 on the system than the instantaneous generation can provide, and all
1287 generators are slowing down.
1289 Where the grid has tie lines to adjacent control areas, automatic
1290 generation control helps maintain the power interchanges over the tie
1291 lines at the scheduled levels. The AGC takes into account various
1292 parameters including the most economical units to adjust, the
1293 coordination of thermal, hydroelectric, and other generation types,
1294 and even constraints related to the stability of the system and
1295 capacity of interconnections to other power grids.
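As a rough illustration of the balance logic described above, the following sketch computes a simplified Area Control Error (ACE) from the tie-line deviation and the frequency deviation; the nominal frequency, bias, and gain values are assumed for the example only:

```python
# Simplified Area Control Error (ACE) computation; the nominal
# frequency, bias, and gain below are assumed example values.

NOMINAL_HZ = 50.0        # assumed nominal system frequency
BIAS_MW_PER_HZ = 800.0   # assumed frequency-bias setting of the area

def area_control_error(tie_flow_mw, scheduled_mw, freq_hz):
    """Tie-line deviation plus the frequency-bias term."""
    return (tie_flow_mw - scheduled_mw) + BIAS_MW_PER_HZ * (freq_hz - NOMINAL_HZ)

def agc_adjustment(ace_mw, gain=0.5):
    """Raise generation when ACE is negative, lower it when positive."""
    return -gain * ace_mw

# Under-frequency plus an import shortfall: generation must be raised.
ace = area_control_error(tie_flow_mw=90.0, scheduled_mw=100.0, freq_hz=49.95)
assert ace < 0 and agc_adjustment(ace) > 0
```

A real AGC also weighs unit economics and interchange constraints, as noted above; the sketch captures only the sign logic of the frequency/load balance.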
1297 For the purposes of AGC, static frequency measurements are used,
1298 and averaging methods are applied to obtain a more precise measure
1299 of system frequency in steady-state conditions.
1301 During disturbances, more real-time dynamic measurements of system
1302 frequency are taken using PMUs, especially when different areas of
1303 the system exhibit different frequencies. But that is outside the
1304 scope of this use case.
1306 +---------------------------------------------------+---------------+
1307 | FCAG (Frequency Control Automatic Generation)     | Attribute     |
1308 | Requirement                                       |               |
1309 +---------------------------------------------------+---------------+
1310 | One way maximum delay                             | 500 ms        |
1311 | Asymmetric delay required                         | No            |
1312 | Maximum jitter                                    | Not critical  |
1313 | Topology                                          | Point to      |
1314 |                                                   | point         |
1315 | Bandwidth                                         | 20 Kbps       |
1316 | Availability                                      | 99.999 %      |
1317 | Precise timing required                           | Yes           |
1318 | Recovery time on node failure                     | N/A           |
1319 | Performance management                            | Yes,          |
1320 |                                                   | mandatory     |
1321 | Redundancy                                        | Yes           |
1322 | Packet loss                                       | 1 %           |
1323 +---------------------------------------------------+---------------+
1325 Table 10: FCAG Communication Requirements
1327 3.2.3. Specific Network topologies of Smart Grid Applications
1329 Utilities often have very large private telecommunications
1330 networks, covering an entire territory or country. The main
1331 purpose of the network, until now, has been to support transmission
1332 network monitoring, control, and automation, remote control of
1333 generation sites, and providing FCAPS (Fault, Configuration,
1334 Accounting, Performance, Security) services from centralized
1335 network operation centers.
1337 Going forward, one network will support operation and maintenance
1338 of electrical networks (generation, transmission, and
1339 distribution), voice and data services for tens of thousands of
1340 employees and for exchange with neighboring interconnections, and
1341 administrative services. To meet those requirements, a utility may
1342 deploy several physical networks leveraging different technologies
1343 across the country: an optical network and a microwave network, for
1344 instance. Each protection and automation system between two points
1345 has two telecommunications circuits, one on each network. Path
1346 diversity between two substations is key. Regardless of the event
1347 type (hurricane, ice storm, etc.), one path shall stay available so
1348 the SPS can still operate.
1350 In the optical network, signals are transmitted over tens of
1351 thousands of circuits using fiber optic links, microwave and
1352 telephone cables. This network is the nervous system of the
1353 utility's power transmission operations. The optical network
1354 represents tens of thousands of km of cable deployed along the
1355 power lines.
1357 Due to the vast distances between transmission substations (for
1358 example, as far as 280 km apart), the fiber signal must be
1359 amplified in order to span such distances.
1361 3.2.4. Precision Time Protocol
1363 Some utilities do not use GPS clocks in generation substations.
1364 One of the main reasons is that some of the generation plants are
1365 30 to 50 meters deep underground, where the GPS signal can be weak
1366 and unreliable. Instead, atomic clocks are used. The clocks are
1367 synchronized with each other, and Rubidium clocks provide the clock
1368 signal and 1 ms timestamps for IRIG-B. Some companies plan to
1369 transition to the Precision Time Protocol (IEEE 1588), distributing
1370 the synchronization signal over the IP/MPLS network.
1372 The Precision Time Protocol (PTP) is defined in IEEE standard 1588.
1373 PTP is applicable to distributed systems consisting of one or more
1374 nodes, communicating over a network. Nodes are modeled as containing
1375 a real-time clock that may be used by applications within the node
1376 for various purposes such as generating time-stamps for data or
1377 ordering events managed by the node. The protocol provides a
1378 mechanism for synchronizing the clocks of participating nodes to a
1379 high degree of accuracy and precision.
1381 PTP operates based on the following assumptions:
1383 It is assumed that the network eliminates cyclic forwarding of PTP
1384 messages within each communication path (e.g., by using a spanning
1385 tree protocol). PTP eliminates cyclic forwarding of PTP messages
1386 between communication paths.
1388 PTP is tolerant of an occasional missed message, duplicated
1389 message, or message that arrived out of order. However, PTP
1390 assumes that such impairments are relatively rare.
1392 PTP was designed assuming a multicast communication model. PTP
1393 also supports a unicast communication model as long as the
1394 behavior of the protocol is preserved.
1396 Like all message-based time transfer protocols, PTP time accuracy
1397 is degraded by asymmetry in the paths taken by event messages.
1398 Asymmetry is not detectable by PTP itself; however, if it is known,
1399 PTP can correct for it.
1401 A time-stamp event is generated at the time of transmission and
1402 reception of any event message. The time-stamp event occurs when the
1403 message's timestamp point crosses the boundary between the node and
1404 the network.
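The two-way exchange underlying PTP can be sketched as follows: given the four timestamps of a Sync / Delay_Req exchange, the slave computes its offset from the master and the mean path delay, and a known path asymmetry can be subtracted. This is a simplified model of the IEEE 1588 computation, with the sign convention of the asymmetry term assumed:

```python
def ptp_offset_and_delay(t1, t2, t3, t4, asymmetry=0.0):
    """t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it.
    'asymmetry' is the known excess of the master-to-slave delay
    over the mean path delay; PTP cannot measure it, but can
    correct for it when it is provided."""
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    offset = ((t2 - t1) - (t4 - t3)) / 2.0 - asymmetry
    return offset, mean_path_delay

# Symmetric path: slave clock 1 unit ahead, 10 units delay each way.
offset, delay = ptp_offset_and_delay(0.0, 11.0, 20.0, 29.0)
assert offset == 1.0 and delay == 10.0
```

With a symmetric path the uncorrected formula is exact; an uncorrected asymmetry shows up one-for-one as an offset error, which is why the accuracy degradation above matters.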
1406 IEC 61850 will recommend the use of the IEEE 1588 PTP Utility
1407 Profile (as defined in IEC 62439-3 Annex B), which supports the
1408 redundant attachment of clocks to Parallel Redundancy Protocol
1409 (PRP) and High-availability Seamless Redundancy (HSR) networks.
1411 3.3. IANA Considerations
1413 This memo includes no request to IANA.
1415 3.4. Security Considerations
1417 3.4.1. Current Practices and Their Limitations
1419 Grid monitoring and control devices are already targets for cyber
1420 attacks, and legacy telecommunications protocols have many intrinsic
1421 network-related vulnerabilities. DNP3, Modbus, PROFIBUS/PROFINET,
1422 and other protocols are designed around a common paradigm of request
1423 and respond. Each protocol is designed for a master device such as
1424 an HMI (Human Machine Interface) system to send commands to
1425 subordinate slave devices to retrieve data (reading inputs) or
1426 control (writing to outputs). Because many of these protocols lack
1427 authentication, encryption, or other basic security measures, they
1428 are prone to network-based attacks, allowing a malicious actor or
1429 attacker to utilize the request-and-respond system as a mechanism for
1430 command-and-control-like functionality. Specific security concerns
1431 common to most industrial control protocols, including utility
1432 telecommunication protocols, include the following:
1434 o Network or transport errors (e.g. malformed packets or excessive
1435 latency) can cause protocol failure.
1437 o Protocol commands may be available that are capable of forcing
1438 slave devices into inoperable states, including powering off
1439 devices, forcing them into a listen-only state, or disabling
1440 alarms.
1442 o Protocol commands may be available that are capable of restarting
1443 communications and otherwise interrupting processes.
1445 o Protocol commands may be available that are capable of clearing,
1446 erasing, or resetting diagnostic information such as counters and
1447 diagnostic registers.
1449 o Protocol commands may be available that are capable of requesting
1450 sensitive information about the controllers, their configurations,
1451 or other need-to-know information.
1453 o Most protocols are application layer protocols transported over
1454 TCP; therefore it is easy to transport commands over non-standard
1455 ports or inject commands into authorized traffic flows.
1457 o Protocol commands may be available that are capable of
1458 broadcasting messages to many devices at once (i.e. a potential
1459 DoS).
1461 o Protocol commands may be available to query the device network to
1462 obtain defined points and their values (i.e. a configuration
1463 scan).
1465 o Protocol commands may be available that will list all available
1466 function codes (i.e. a function scan).
1468 o Bump-in-the-wire (BITW) solutions: a hardware device is added to
1469 provide IPsec services between two routers that are not capable of
1470 IPsec functions. This special IPsec device intercepts outgoing
1471 datagrams, adds IPsec protection to them, and strips it off
1472 incoming datagrams. BITW can add IPsec to legacy hosts and can
1473 retrofit non-IPsec routers to provide security benefits. The
1474 disadvantages are complexity and cost.
1476 These inherent vulnerabilities, along with increasing connectivity
1477 between IT and OT networks, make network-based attacks very
1478 feasible. Simple injection of malicious protocol commands provides
1479 control over the target process. Altering legitimate protocol
1480 traffic can also alter information about a process and disrupt the
1481 legitimate controls that are in place over that process. A man-
1482 in-the-middle attack could provide both control over a process and
1483 misrepresentation of data back to operator consoles.
1485 3.4.2. Security Trends in Utility Networks
1487 Although advanced telecommunications networks can assist in
1488 transforming the energy industry, playing a critical role in
1489 maintaining high levels of reliability, performance, and
1490 manageability, they also introduce the need for an integrated
1491 security infrastructure. Many of the technologies being deployed to
1492 support smart grid projects such as smart meters and sensors can
1493 increase the vulnerability of the grid to attack. Top security
1494 concerns for utilities migrating to an intelligent smart grid
1495 telecommunications platform center on the following trends:
1497 o Integration of distributed energy resources
1499 o Proliferation of digital devices to enable management, automation,
1500 protection, and control
1502 o Regulatory mandates to comply with standards for critical
1503 infrastructure protection
1505 o Migration to new systems for outage management, distribution
1506 automation, condition-based maintenance, load forecasting, and
1507 smart metering
1509 o Demand for new levels of customer service and energy management
1511 This development of a diverse set of networks to support the
1512 integration of microgrids, open-access energy competition, and the
1513 use of network-controlled devices is driving the need for a converged
1514 security infrastructure for all participants in the smart grid,
1515 including utilities, energy service providers, and large commercial,
1516 industrial, and residential customers. Securing the assets of
1517 electric power delivery systems, from the control center to the
1518 substation, to the feeders and down to customer meters, requires an
1519 end-to-end security infrastructure that protects the myriad of
1520 telecommunications assets used to operate, monitor, and control power
1521 flow and measurement. Cyber security refers to all the security
1522 issues in automation and telecommunications that affect any functions
1523 related to the operation of the electric power systems.
1524 Specifically, it involves the concepts of:
1526 o Integrity: data cannot be altered undetectably
1528 o Authenticity: the telecommunications parties involved must be
1529 validated as genuine
1531 o Authorization: only requests and commands from authorized users
1532 can be accepted by the system
1534 o Confidentiality: data must not be accessible to any
1535 unauthenticated users
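As a minimal illustration of the integrity and authenticity concepts above, a shared-key HMAC can detect any alteration of a command message; the key, message, and tag layout here are illustrative assumptions, not a recommendation for a specific protocol:

```python
# Shared-key HMAC sketch for integrity and data origin authenticity;
# the key and message layout (tag appended) are illustrative.
import hmac, hashlib

SHARED_KEY = b"example-preshared-key"  # assumed provisioned out of band

def protect(command: bytes) -> bytes:
    tag = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return command + tag               # 32-byte SHA-256 tag appended

def verify(message: bytes) -> bytes:
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity/authenticity check failed")
    return command

msg = protect(b"OPEN breaker 12")
assert verify(msg) == b"OPEN breaker 12"   # genuine message accepted
```

An HMAC alone provides integrity and origin authenticity but not confidentiality or authorization; those require encryption and an access-control policy on top.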
1537 When designing and deploying new smart grid devices and
1538 telecommunications systems, it's imperative to understand the various
1539 impacts of these new components under a variety of attack situations
1540 on the power grid. Consequences of a cyber attack on the grid
1541 telecommunications network can be catastrophic. This is why security
1542 for the smart grid is not just an ad hoc feature or product; it is a
1543 complete framework integrating both physical and cyber security
1544 requirements and covering the entire smart grid network from
1545 generation to distribution. Security has therefore become one of the
1546 main foundations of the utility telecom network architecture and must
1547 be considered at every layer with a defense-in-depth approach.
1548 Migrating to IP based protocols is key to address these challenges
1549 for two reasons:
1551 1. IP enables a rich set of features and capabilities to enhance the
1552 security posture
1554 2. IP is based on open standards, which allows interoperability
1555 between different vendors and products, driving down the costs
1556 associated with implementing security solutions in OT networks.
1558 Securing OT (Operational Technology) telecommunications over packet-
1559 switched IP networks follows the same principles that are foundational
1560 for securing the IT infrastructure, i.e., consideration must be given
1561 to enforcing electronic access control for both person-to-machine and
1562 machine-to-machine communications, and providing the appropriate
1563 levels of data privacy, device and platform integrity, and threat
1564 detection and mitigation.
1566 3.5. Acknowledgements
1568 Faramarz Maghsoodlou, Ph. D. IoT Connected Industries and Energy
1569 Practice Cisco
1571 Pascal Thubert, CTAO Cisco
1573 4. Building Automation Systems Use Cases
1574 4.1. Introduction
1576 A Building Automation System (BAS) manages various equipment and
1577 sensors in buildings (e.g., for heating, cooling, and ventilation)
1578 to improve residents' comfort, to reduce energy consumption, and to
1579 respond automatically in case of failure or emergency. For
1580 example, a BAS measures the temperature of a room using various
1581 sensors and then automatically controls the HVAC (Heating,
1582 Ventilating, and Air Conditioning) system to maintain the
1583 temperature level and minimize energy consumption.
1585 A BAS typically has two layers of network: the upper one is called
1586 the management network and the lower one is called the field
1587 network. In management networks an IP-based communication protocol
1588 is used, while in field networks non-IP-based communication
1589 protocols ("field protocols") are mainly used.
1591 Many field protocols are used in today's deployments; some medium
1592 access control and physical layer protocols are standards-based,
1593 while others are proprietary. A BAS therefore needs multiple
1594 MAC/PHY modules and interfaces to make use of devices based on
1595 multiple field protocols. This not only makes the BAS more
1596 expensive, with a long development cycle across multiple devices,
1597 but also creates vendor lock-in with multiple types of management
1598 applications.
1600 Another issue with some of the existing field networks and
1601 protocols is security. When these protocols and networks were
1602 developed, it was assumed that field networks were physically
1603 isolated from external networks, and therefore network and protocol
1604 security was not a concern. In today's world, however, many BASes
1605 are managed remotely and are connected to shared IP networks, and
1606 it is not uncommon for the same IT infrastructure to be used,
1607 whether in office, home, or enterprise networks. Adding network
1608 and protocol security to an existing system is a non-trivial task.
1610 This document first describes the BAS functionalities, its
1611 architecture and current deployment models. Then we discuss the use
1612 cases and field network requirements that need to be satisfied by
1613 deterministic networking.
1615 4.2. BAS Functionality
1617 A Building Automation System (BAS) manages various devices in
1618 buildings automatically. It primarily performs the following
1619 functions:
1621 o Measures the states of devices at a regular interval: for example,
1622 the temperature, humidity, or illuminance of rooms, the on/off
1623 state of lights, the open/close state of doors, fan speed, valve
1624 position, and the running mode and power consumption of the HVAC.
1626 o Stores the measured data in a database (the database keeps the
1627 data for several years).
1629 o Provides the measured data for BAS operators for visualization.
1631 o Generates alarms for abnormal device states (e.g., by calling an
1632 operator's cellular phone, sending an e-mail to operators, and
1633 so on).
1635 o Controls devices on demand.
1637 o Controls devices with a pre-defined operation schedule (e.g., turn
1638 off room lights at 10:00 PM).
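The measurement, storage, and alarm functions above can be sketched as a single polling cycle; the fake sensor reads, the threshold, and all names are illustrative assumptions:

```python
# Minimal sketch of a BAS polling cycle; the fake sensor reads and
# the alarm threshold are illustrative assumptions.

DEVICES = {"room1.temp": lambda: 26.5,   # stand-ins for real sensor reads
           "room1.light": lambda: 1}
HISTORY = []                             # stands in for the BMS database
ALARM_LIMIT = 30.0                       # assumed abnormal-temperature limit

def poll_once(now):
    """Measure every device, store the values, and collect alarms."""
    alarms = []
    for name, read in DEVICES.items():
        value = read()
        HISTORY.append((now, name, value))     # long-term storage
        if name.endswith(".temp") and value > ALARM_LIMIT:
            alarms.append((name, value))       # would notify the operators
    return alarms

assert poll_once(now=0) == []             # normal conditions: no alarm
assert len(HISTORY) == 2                  # both device states were stored
```

In a deployment this cycle would run at the measurement interval, with on-demand and scheduled control handled by separate paths.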
1640 4.3. BAS Architecture
1642 A typical BAS architecture is described below in Figure 1. There are
1643 several elements in a BAS.
1645 +----------------------------+
1646 | |
1647 | BMS HMI |
1648 | | | |
1649 | +----------------------+ |
1650 | | Management Network | |
1651 | +----------------------+ |
1652 | | | |
1653 | LC LC |
1654 | | | |
1655 | +----------------------+ |
1656 | | Field Network | |
1657 | +----------------------+ |
1658 | | | | | |
1659 | Dev Dev Dev Dev |
1660 | |
1661 +----------------------------+
1663 BMS := Building Management Server
1664 HMI := Human Machine Interface
1665 LC := Local Controller
1667 Figure 1: BAS architecture
1669 Human Machine Interface (HMI): this is commonly a computing
1670 platform (e.g., a desktop PC) used by operators. Operators perform
1671 the following operations through the HMI.
1673 o Monitoring devices: The HMI displays measured device states, for
1674 example, the latest device states, a history chart of states, or
1675 a popup window with an alert message. Typically, the measured
1676 device states are stored in the BMS (Building Management Server).
1678 o Controlling devices: The HMI provides the ability to control the
1679 devices, for example, turning on a room light or setting a
1680 target temperature on the HVAC. Several parameters (a target
1681 device, a control value, etc.) can be set by the operators,
1682 which the HMI then sends to an LC (Local Controller) via the
1683 management network.
1684 o Configuring an operational schedule: The HMI provides a
1685 scheduling capability through which an operational schedule is
1686 defined. For example, a schedule includes 1) a time to control,
1687 2) a target device to control, and 3) a control value. A
1688 specific example could be turning off all room lights in the
1689 building at 10:00 PM. This schedule is typically stored in the
1690 BMS.
1691 The Building Management Server (BMS) collects device states from LCs
1692 (Local Controllers) and stores them in a database. According to its
1693 configuration, the BMS executes the following operations automatically.
1695 o BMS collects device states from LCs in a regular interval and then
1696 stores the information into a database.
1698 o BMS sends control values to LCs according to a pre-configured
1699 schedule.
1701 o BMS sends an alarm signal to operators if it detects abnormal
1702 devices states. For example, turning on a red lamp, calling
1703 operators' cellular phone, sending an e-mail to operators.
1705 The BMS and HMI communicate with Local Controllers (LCs) via IP-based
1706 communication protocols such as BACnet/IP [bacnetip] or KNX/IP
1707 [knx]. These protocols are commonly called management protocols.
1708 LCs measure device states and provide the information to BMS or HMI.
1709 These devices may include HVAC, FAN, doors, valves, lights, sensors
1710 (e.g., temperature, humidity, and illuminance). LC can also set
1711 control values on the devices. An LC sometimes has additional
1712 functions: for example, sending a device state to the BMS or HMI
1713 when it exceeds a certain threshold value, or feedback control to
1714 keep a device at a certain state. A typical example of an LC is a
1715 PLC (Programmable Logic Controller).
1717 Each LC is connected to a different field network and communicates
1718 with several tens or hundreds of devices via that field network.
1719 Many field protocols are used in field networks today. Depending
1720 on the type of field protocol used, the LC interfaces and their
1721 hardware/software can differ. Field protocols are currently
1722 non-IP-based; some of them are standards-based (e.g., LonTalk
1723 [lontalk], Modbus [modbus], Profibus [profibus], FL-net [flnet])
1724 and others are proprietary.
1726 4.4. Deployment Model
1728 An example BAS deployment model for medium and large buildings is
1729 depicted in Figure 2 below. In this case the physical layout of
1730 the entire system spans multiple floors, and there is normally a
1731 monitoring room where the BAS management entities are located.
1732 Each floor will have one or more LCs depending upon the number of
1733 devices connected to the field network.
1735 +--------------------------------------------------+
1736 | Floor 3 |
1737 | +----LC~~~~+~~~~~+~~~~~+ |
1738 | | | | | |
1739 | | Dev Dev Dev |
1740 | | |
1741 |--- | ------------------------------------------|
1742 | | Floor 2 |
1743 | +----LC~~~~+~~~~~+~~~~~+ Field Network |
1744 | | | | | |
1745 | | Dev Dev Dev |
1746 | | |
1747 |--- | ------------------------------------------|
1748 | | Floor 1 |
1749 | +----LC~~~~+~~~~~+~~~~~+ +-----------------|
1750 | | | | | | Monitoring Room |
1751 | | Dev Dev Dev | |
1752 | | | BMS HMI |
1753 | | Management Network | | | |
1754 | +--------------------------------+-----+ |
1755 | | |
1756 +--------------------------------------------------+
1758 Figure 2: Deployment model for Medium/Large Buildings
1760 Each LC is then connected to the monitoring room via the management
1761 network. In this scenario, the management functions are performed
1762 locally and reside within the building. In most cases, Fast
1763 Ethernet (e.g., 100BASE-TX) is used for the management network. In
1764 the field network, a variety of physical interfaces such as RS-232C
1765 and RS-485 are used. Since the management network is not real-
1766 time, Ethernet without quality of service is sufficient for today's
1767 deployments. However, the requirements are different for field
1768 networks when they are replaced by either Ethernet or wireless
1769 technologies supporting real-time requirements (Section 3.4).
1771 Figure 3 depicts a deployment model in which the management system
1772 can be hosted remotely. This deployment is becoming popular for
1773 small office and residential buildings, where a standalone
1774 monitoring system is not a cost-effective solution. In such a
1775 scenario, multiple buildings are managed by a remote management and
1776 monitoring system.
1778 +---------------+
1779 | Remote Center |
1780 | |
1781 | BMS HMI |
1782 +------------------------------------+ | | | |
1783 | Floor 2 | | +---+---+ |
1784 | +----LC~~~~+~~~~~+ Field Network| | | |
1785 | | | | | | Router |
1786 | | Dev Dev | +-------|-------+
1787 | | | |
1788 |--- | ------------------------------| |
1789 | | Floor 1 | |
1790 | +----LC~~~~+~~~~~+ | |
1791 | | | | | |
1792 | | Dev Dev | |
1793 | | | |
1794 | | Management Network | WAN |
1795 | +------------------------Router-------------+
1796 | |
1797 +------------------------------------+
1799 Figure 3: Deployment model for Small Buildings
1801 In either case, interoperability today is limited to the management
1802 network and its protocols. In existing deployments, there is
1803 little opportunity for interoperability in the field network due to
1804 its non-IP-based design and requirements.
1806 4.5. Use cases and Field Network Requirements
1808 In this section, we describe several use cases and corresponding
1809 network requirements.
1811 4.5.1. Environmental Monitoring
1813 In this use case, LCs measure environmental data (e.g.,
1814 temperature, humidity, illuminance, CO2, etc.) from several sensor
1815 devices at each measurement interval. LCs keep the latest value of
1816 each sensor. The BMS sends data requests to LCs to collect the
1817 latest values, then stores the collected values in a database.
1818 Operators check the latest environmental data displayed by the HMI.
1819 The BMS also checks the collected data automatically to notify
1820 operators if a room condition is deteriorating (e.g., becoming too
1821 hot or cold). The following table lists the field network
1822 requirements; the number of devices in a typical building will be
1823 in the hundreds per LC.
1824 +----------------------+-------------+
1825 | Metric | Requirement |
1826 +----------------------+-------------+
1827 | Measurement interval | 100 msec |
1828 | | |
1829 | Availability | 99.999 % |
1830 +----------------------+-------------+
1832 Table 11: Field Network Requirements for Environmental Monitoring
1834 In some cases the BMS sends data requests every 1 second in order
1835 to draw a historical chart of 1-second granularity. A 100 msec
1836 measurement interval is therefore sufficient for this use case,
1837 because a granularity of 10 times the interval of the data requests
1838 is typically considered accurate enough. An LC needs to measure
1839 the values of all sensors connected to it within each measurement
1840 interval. The communication delay of each message in this scenario
1841 is not critical; the important requirement is completing the
1842 measurement of all sensor values within the specified measurement
1843 interval. The required availability in this use case is very high
1844 (99.999 %).
1845 4.5.2. Fire Detection
1847 In the case of fire detection, the HMI needs to show a popup window
1848 with an alert message within a few seconds after an abnormal state
1849 is detected. The BMS also needs to perform certain operations if
1850 it detects fire: for example, stopping the HVAC, closing fire
1851 shutters, and turning on fire sprinklers. The following table
1852 describes the requirements; the number of devices in a typical
1853 building will be in the tens per LC.
1854 +----------------------+---------------+
1855 | Metric | Requirement |
1856 +----------------------+---------------+
1857 | Measurement interval | 10s of msec |
1858 | | |
1859 | Communication delay | < 10s of msec |
1860 | | |
1861 | Availability | 99.9999 % |
1862 +----------------------+---------------+
1864 Table 12: Field Network Requirements for Fire Detection
1866 In order to perform the above operations within a few seconds (1 or
1867 2 seconds) of detecting fire, LCs should measure sensor values at a
1868 regular interval of less than tens of milliseconds. If an LC
1869 detects an abnormal sensor value, it sends alarm information to the
1870 BMS and HMI immediately. The BMS then controls the HVAC, fire
1871 shutters, or fire sprinklers, and the HMI displays a popup window
1872 with the alert message. Since the management network does not
1873 operate in real time, and the software running on the BMS or HMI
1874 requires hundreds of milliseconds, the communication delay should
1875 be less than tens of milliseconds. The required availability in
1876 this use case is very high (99.9999 %).
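A back-of-the-envelope check shows how the figures above fit within the 1-2 second reaction target; all values below are assumed upper bounds, not measurements:

```python
# Assumed worst-case contributions to the fire-alarm path, in seconds.
SENSOR_INTERVAL  = 0.050   # LC measurement interval (tens of msec)
FIELD_COMM_DELAY = 0.010   # field-network communication delay
BMS_HMI_SOFTWARE = 0.300   # BMS/HMI software processing (100s of msec)
MGMT_NETWORK     = 0.100   # non-real-time management network traversal

total = SENSOR_INTERVAL + FIELD_COMM_DELAY + BMS_HMI_SOFTWARE + MGMT_NETWORK
assert total < 2.0         # fits within the 1-2 second reaction target
```

Even with each contribution at its assumed worst case, the end-to-end path stays well under the target, which is why the field-network delay budget of tens of milliseconds is sufficient.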
1877 4.5.3. Feedback Control
1879 Feedback control is used to keep a device state at a certain value;
1880 for example, keeping a room temperature at 27 degrees Celsius, or
1881 keeping a water flow rate at 100 L/min. The target device state is
1882 normally pre-defined in LCs or provided by the BMS or HMI.
1884 In the feedback control procedure, an LC repeats the following
1885 actions at a regular interval (the feedback interval):
1887 1. The LC measures device states of the target device.
1889 2. The LC calculates a control value by considering the measured
1890 device state.
1892 3. The LC sends the control value to the target device.
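Steps 1-3 above can be sketched as a simple proportional feedback loop; the device model, gain, and set point are illustrative assumptions, and a real LC would typically run PID control against actual hardware:

```python
def run_feedback(target, state, gain=0.5, iterations=20):
    """Repeat the measure / compute / actuate cycle at each interval."""
    for _ in range(iterations):
        measured = state                      # 1. measure the device state
        control = gain * (target - measured)  # 2. compute a control value
        state += control                      # 3. send it to the device
    return state

# Converges toward a 100 L/min flow-rate set point.
final = run_feedback(target=100.0, state=40.0)
assert abs(final - 100.0) < 0.01
```

Each pass through the loop corresponds to one feedback interval; the tighter the interval (and the lower the communication jitter), the faster and smoother the convergence.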
1894 The feedback interval depends heavily on the characteristics of the
1895 device and the target quality of control. While a feedback
1896 interval of several tens of milliseconds is sufficient to control a
1897 valve that regulates a water flow, controlling DC motors requires
1898 an interval of several milliseconds. The following table describes
1899 the field network requirements; the number of devices in a typical
1900 building will be in the tens per LC.
1902 +----------------------+---------------+
1903 | Metric | Requirement |
1904 +----------------------+---------------+
1905 | Feedback interval | ~10ms - 100ms |
1906 | | |
1907 | Communication delay | < 10s of msec |
1908 | | |
1909 | Communication jitter | < 1 msec |
1910 | | |
1911 | Availability | 99.9999 % |
1912 +----------------------+---------------+
1914 Table 13: Field Network Requirements for Feedback Control
1916 A small communication delay and jitter are required in this use
1917 case in order to provide a high quality of feedback control. This
1918 is currently offered in production environments with high
1919 availability (99.9999 %).
1920 4.6. Security Considerations
1922 Both the network and physical security of a BAS are important.
1923 While physical security is present in today's deployments, adequate
1924 network security and access control are often either not
1925 implemented or not configured properly. This was sufficient while
1926 networks were isolated and not connected to IT or other
1927 infrastructure networks, but when IT and OT (Operational
1928 Technology) share the same infrastructure network, network security
1929 is essential. The management network, being an IP-based network,
1930 does have the protocols and knobs to enable network security, but
1931 in many cases a BAS does not use device authentication or
1932 encryption for data in transit. Worse, many of today's field
1933 networks do not provide any security at all. The following are the
1934 high-level security requirements that the network should provide:
1936 o Authentication between management and field devices (both local
1937 and remote)
1939 o Integrity and data origin authentication of communication data
1940 between field and management devices
1942 o Confidentiality of data when communicated to a remote device
1944 o Availability of network data in both normal and disaster scenarios
1946 5. Wireless for Industrial Use Cases
1948 (This section was derived from draft-thubert-6tisch-4detnet-01)
1950 5.1. Introduction
1952 The emergence of wireless technology has enabled a variety of new
1953 devices to get interconnected, at a very low marginal cost per
1954 device, at any distance ranging from Near Field to interplanetary,
1955 and in circumstances where wiring may not be practical, for instance
1956 on fast-moving or rotating devices.
1958 At the same time, a new breed of Time Sensitive Networks is being
1959 developed to enable traffic that is highly sensitive to jitter, quite
1960 sensitive to latency, and with a high degree of operational
1961 criticality so that loss should be minimized at all times. Such
1962 traffic is not limited to professional Audio/Video networks, but is
1963 also found in command and control operations such as industrial
1964 automation and vehicular sensors and actuators.
1966 At IEEE802.1, the Audio/Video Task Group was renamed the Time
1967 Sensitive Networking (TSN) Task Group [IEEE802.1TSNTG] to address
1968 Deterministic Ethernet. The Medium Access Control (MAC) of
1969 IEEE802.15.4 [IEEE802154] has evolved with the new Timeslotted
1970 Channel Hopping (TSCH) [RFC7554] mode for deterministic industrial-
1971 type applications. TSCH was introduced with the IEEE802.15.4e
1972 [IEEE802154e] amendment and will be wrapped up in the next revision
1973 of the IEEE802.15.4 standard. For all practical purposes, this
1974 document is expected to be insensitive to future versions of the
1975 IEEE802.15.4 standard, which is thus referenced undated.
1977 Though at a different time scale, both TSN and TSCH standards provide
1978 Deterministic capabilities to the point that a packet that pertains
1979 to a certain flow crosses the network from node to node following a
1980 very precise schedule, as a train that leaves intermediate stations
1981 at precise times along its path. With TSCH, time is formatted into
1982 timeSlots, and an individual cell is allocated to unicast or
1983 broadcast communication at the MAC level. The time-slotted operation
1984 reduces collisions, saves energy, and enables the network to be
1985 more closely engineered for deterministic properties. The channel
1986 hopping aspect is a simple and efficient technique to combat multi-
1987 path fading and co-channel interference (for example from Wi-Fi
1988 emitters).
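The timeSlot/channel-hopping structure can be illustrated with the usual TSCH rule that the operating channel of a cell is drawn from a hopping sequence indexed by the Absolute Slot Number (ASN) plus the cell's channelOffset (a sketch; the 16-entry sequence over IEEE802.15.4 channels 11-26 and the slotframe size are illustrative):

```python
# Illustrative 16-entry hopping sequence over IEEE802.15.4 channels 11-26.
HOPPING_SEQUENCE = list(range(11, 27))

def cell_channel(asn, channel_offset):
    """Map a TSCH cell to an operating channel for a given slot.

    The channel is taken from the hopping sequence at index
    (ASN + channelOffset) mod sequence length, so the same cell uses a
    different frequency on successive iterations of the slotframe.
    """
    return HOPPING_SEQUENCE[(asn + channel_offset) % len(HOPPING_SEQUENCE)]

# The same (slotOffset, channelOffset) cell hops across frequencies as
# the ASN advances by one slotframe (here, 101 slots) per iteration.
slotframe_size = 101
channels = [cell_channel(asn=i * slotframe_size + 5, channel_offset=3)
            for i in range(4)]
print(channels)
```

Because the slotframe size and the sequence length are coprime here, successive iterations of the same cell visit different channels, which is what mitigates multi-path fading on any single frequency.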
1990 The 6TiSCH Architecture [I-D.ietf-6tisch-architecture] defines a
1991 remote monitoring and scheduling management of a TSCH network by a
1992 Path Computation Element (PCE), which cooperates with an abstract
1993 Network Management Entity (NME) to manage timeSlots and device
1994 resources in a manner that minimizes the interaction with and the
1995 load placed on the constrained devices.
1997 This Architecture applies the concepts of Deterministic Networking on
1998 a TSCH network to enable the switching of timeSlots in a G-MPLS
1999 manner. This document details the dependencies that 6TiSCH has on
2000 PCE [PCE] and DetNet [I-D.finn-detnet-architecture] to provide the
2001 necessary capabilities that may be specific to such networks. In
2002 turn, DetNet is expected to integrate and maintain consistency with
2003 the work that has taken place and is continuing at IEEE802.1TSN and
2004 AVnu.
2006 5.2. Terminology
2008 Readers are expected to be familiar with all the terms and concepts
2009 that are discussed in "Multi-link Subnet Support in IPv6"
2010 [I-D.ietf-ipv6-multilink-subnets].
2012 The draft uses terminology defined or referenced in
2013 [I-D.ietf-6tisch-terminology] and
2014 [I-D.ietf-roll-rpl-industrial-applicability].
2016 The draft also conforms to the terms and models described in
2017 [RFC3444] and uses the vocabulary and the concepts defined in
2018 [RFC4291] for the IPv6 Architecture.
2020 5.3. 6TiSCH Overview
2022 The scope of the present work is a subnet that, in its basic
2023 configuration, is made of a TSCH [RFC7554] MAC Low-Power and Lossy
2024 Network (LLN).
2026 ---+-------- ............ ------------
2027 | External Network |
2028 | +-----+
2029 +-----+ | NME |
2030 | | LLN Border | |
2031 | | router +-----+
2032 +-----+
2033 o o o
2034 o o o o
2035 o o LLN o o o
2036 o o o o
2037 o
2039 Figure 4: Basic Configuration of a 6TiSCH Network
2041 In the extended configuration, a Backbone Router (6BBR) federates
2042 multiple 6TiSCH LLNs into a single subnet over a backbone. 6TiSCH 6BBRs
2043 synchronize with one another over the backbone, so as to ensure that
2044 the multiple LLNs that form the IPv6 subnet stay tightly
2045 synchronized.
2047 ---+-------- ............ ------------
2048 | External Network |
2049 | +-----+
2050 | +-----+ | NME |
2051 +-----+ | +-----+ | |
2052 | | Router | | PCE | +-----+
2053 | | +--| |
2054 +-----+ +-----+
2055 | |
2056 | Subnet Backbone |
2057 +--------------------+------------------+
2058 | | |
2059 +-----+ +-----+ +-----+
2060 | | Backbone | | Backbone | | Backbone
2061 o | | router | | router | | router
2062 +-----+ +-----+ +-----+
2063 o o o o o
2064 o o o o o o o o o o o
2065 o o o LLN o o o o
2066 o o o o o o o o o o o o
2068 Figure 5: Extended Configuration of a 6TiSCH Network
2070 If the Backbone is Deterministic, then the Backbone Router ensures
2071 that the end-to-end deterministic behavior is maintained between the
2072 LLN and the backbone. This SHOULD be done in conformance to the
2073 DetNet Architecture [I-D.finn-detnet-architecture] which studies
2074 Layer-3 aspects of Deterministic Networks, and covers networks that
2075 span multiple Layer-2 domains. One particular requirement is that
2076 the PCE MUST be able to compute a deterministic path end-to-end
2077 across the TSCH network and an IEEE802.1 TSN Ethernet backbone, and
2078 DetNet MUST enable end-to-end deterministic forwarding.
2080 6TiSCH defines the concept of a Track, which is a complex form of a
2081 uni-directional Circuit ([I-D.ietf-6tisch-terminology]). As opposed
2082 to a simple circuit that is a sequence of nodes and links, a Track is
2083 shaped as a directed acyclic graph towards a destination to support
2084 multi-path forwarding and route around failures. A Track may also
2085 branch off and rejoin, for the purpose of the so-called Packet
2086 Replication and Elimination (PRE), over non congruent branches. PRE
2087 may be used to complement layer-2 Automatic Repeat reQuest (ARQ) to
2088 meet industrial expectations in Packet Delivery Ratio (PDR), in
2089 particular when the Track extends beyond the 6TiSCH network.
2091 +-----+
2092 | IoT |
2093 | G/W |
2094 +-----+
2095 ^ <---- Elimination
2096 | |
2097 Track branch | |
2098 +-------+ +--------+ Subnet Backbone
2099 | |
2100 +--|--+ +--|--+
2101 | | | Backbone | | | Backbone
2102 o | | | router | | | router
2103 +--/--+ +--|--+
2104 o / o o---o----/ o
2105 o o---o--/ o o o o o
2106 o \ / o o LLN o
2107 o v <---- Replication
2108 o
2110 Figure 6: End-to-End deterministic Track
2112 In the example above, a Track is laid out from a field device in a
2113 6TiSCH network to an IoT gateway that is located on an IEEE802.1 TSN
2114 backbone.
2116 The Replication function in the field device sends a copy of each
2117 packet over two different branches, and the PCE schedules each hop of
2118 both branches so that the two copies arrive in due time at the
2119 gateway. In case of a loss on one branch, hopefully the other copy
2120 of the packet still makes it in due time. If two copies make it to
2121 the IoT gateway, the Elimination function in the gateway ignores the
2122 extra packet and presents only one copy to upper layers.
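A minimal sketch of the Elimination function follows, assuming copies of a packet can be correlated by some identifier (how such an identifier is carried in the packet is not specified here):

```python
class EliminationFunction:
    """Present at most one copy of each replicated packet upward.

    A minimal sketch of the Elimination side of PRE: copies of a packet
    arriving over different Track branches carry the same (assumed)
    correlation identifier; the first copy is delivered and later
    copies are silently dropped.
    """
    def __init__(self, window=64):
        self.window = window
        self.seen = []          # recently delivered identifiers, in order

    def receive(self, packet_id, payload):
        if packet_id in self.seen:
            return None         # copy from the other branch: eliminate
        self.seen.append(packet_id)
        if len(self.seen) > self.window:
            self.seen.pop(0)    # bound the history kept on the node
        return payload          # first copy: deliver to the upper layer

elim = EliminationFunction()
delivered = [elim.receive(pid, data)
             for pid, data in [(1, "a"), (2, "b"), (1, "a"),
                               (3, "c"), (2, "b")]]
print(delivered)
```

The bounded history models the very limited memory of the gateway-side node; a production design would also need to handle identifier wrap-around.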
2124 At each 6TiSCH hop along the Track, the PCE may schedule more than
2125 one timeSlot for a packet, so as to support Layer-2 retries (ARQ).
2126 It is also possible that the field device only uses the second branch
2127 if sending over the first branch fails.
2129 In current deployments, a TSCH Track does not necessarily support PRE
2130 but is systematically multi-path. This means that a Track is
2131 scheduled so as to ensure that each hop has at least two forwarding
2132 solutions, and the forwarding decision is to try the preferred one
2133 and use the other in case of Layer-2 transmission failure as detected
2134 by ARQ.
2136 5.3.1. TSCH and 6top
2138 6top is a logical link control sitting between the IP layer and the
2139 TSCH MAC layer, which provides the link abstraction that is required
2140 for IP operations. The 6top operations are specified in
2141 [I-D.wang-6tisch-6top-sublayer].
2143 The 6top data model and management interfaces are further discussed
2144 in [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].
2146 The architecture defines "soft" cells and "hard" cells. "Hard" cells
2147 are owned and managed by a separate scheduling entity (e.g. a PCE)
2148 that specifies the slotOffset/channelOffset of the cells to be
2149 added/moved/deleted, in which case 6top can only act as instructed,
2150 and may not move hard cells in the TSCH schedule on its own. "Soft"
2151 cells, in contrast, may be relocated by 6top locally.
2152 5.3.2. SlotFrames and Priorities
2154 A slotFrame is the base object that the PCE needs to manipulate to
2155 program a schedule into an LLN node. Elaboration on that concept can
2156 be found in section "SlotFrames and Priorities" of the 6TiSCH
2157 architecture [I-D.ietf-6tisch-architecture]. The architecture also
2158 details how the schedule is constructed and how transmission
2159 resources called cells can be allocated to particular transmissions
2160 so as to avoid collisions.
2162 5.3.3. Schedule Management by a PCE
2164 6TiSCH supports a mixed model of centralized routes and distributed
2165 routes. Centralized routes can, for example, be computed by an entity
2166 such as a PCE. Distributed routes are computed by RPL.
2168 Both methods may inject routes in the Routing Tables of the 6TiSCH
2169 routers. In either case, each route is associated with a 6TiSCH
2170 topology that can be a RPL Instance topology or a track. The 6TiSCH
2171 topology is indexed by an Instance ID, in a format that reuses the
2172 RPLInstanceID as defined in RPL [RFC6550].
2174 Both RPL and PCE rely on shared sources such as policies to define
2175 Global and Local RPLInstanceIDs that can be used by either method.
2176 It is possible for centralized and distributed routing to share a
2177 same topology. Generally they will operate in different slotFrames,
2178 and centralized routes will be used for scheduled traffic and will
2179 have precedence over distributed routes in case of conflict between
2180 the slotFrames.
2182 Section "Schedule Management Mechanisms" of the 6TiSCH architecture
2183 describes 4 paradigms to manage the TSCH schedule of the LLN nodes:
2184 Static Scheduling, neighbor-to-neighbor Scheduling, remote monitoring
2185 and scheduling management, and Hop-by-hop scheduling. The Track
2186 operation for DetNet corresponds to a remote monitoring and
2187 scheduling management by a PCE.
2189 The 6top interface document [I-D.ietf-6tisch-6top-interface]
2190 specifies the generic data model that can be used to monitor and
2191 manage resources of the 6top sublayer. Abstract methods are
2192 suggested for use by a management entity in the device. The data
2193 model also enables remote control operations on the 6top sublayer.
2195 [I-D.ietf-6tisch-coap] defines a mapping of the 6top set of
2196 commands, which is described in [I-D.ietf-6tisch-6top-interface], to
2197 CoAP resources. This allows an entity to interact with the 6top
2198 layer of a node that is multiple hops away in a RESTful fashion.
2200 [I-D.ietf-6tisch-coap] also defines a basic set of CoAP resources
2201 associated RESTful access methods (GET/PUT/POST/DELETE). The payload
2202 (body) of the CoAP messages is encoded using the CBOR format. The
2203 PCE commands are expected to be issued directly as CoAP requests or
2204 to be mapped back and forth into CoAP by a gateway function at the
2205 edge of the 6TiSCH network. For instance, it is possible that a
2206 mapping entity on the backbone transforms a non-CoAP protocol such as
2207 PCEP into the RESTful interfaces that the 6TiSCH devices support.
2208 This architecture will be refined to comply with DetNet
2209 [I-D.finn-detnet-architecture] when the work is formalized.
2211 5.3.4. Track Forwarding
2213 By forwarding, this specification means the per-packet operation
2214 that delivers a packet to a next hop or to an upper layer in this
2215 node. Forwarding is based on pre-existing state that was installed
2216 as a result of the routing computation of a Track by a PCE. The
2217 6TiSCH architecture supports three different forwarding models:
2218 G-MPLS Track Forwarding (TF), 6LoWPAN Fragment Forwarding (FF) and
2219 IPv6 Forwarding (6F), which is the classical IP operation. The
2220 DetNet case relates to Track Forwarding under the control of a PCE.
2222 A Track is a unidirectional path between a source and a destination.
2223 In a Track cell, the normal operation of IEEE802.15.4 Automatic
2224 Repeat-reQuest (ARQ) usually happens, though the acknowledgment may
2225 be omitted in some cases, for instance if there is no scheduled cell
2226 for a retry.
2228 Track Forwarding is the simplest and fastest. A bundle of cells set
2229 to receive (RX-cells) is uniquely paired to a bundle of cells that
2230 are set to transmit (TX-cells), representing a layer-2 forwarding
2231 state that can be used regardless of the network layer protocol.
2232 This model can effectively be seen as a Generalized Multi-protocol
2233 Label Switching (G-MPLS) operation in that the information used to
2234 switch a frame is not an explicit label, but rather related to other
2235 properties of the way the packet was received, a particular cell in
2236 the case of 6TiSCH. As a result, as long as the TSCH MAC (and
2237 Layer-2 security) accepts a frame, that frame can be switched
2238 regardless of the protocol, whether this is an IPv6 packet, a 6LoWPAN
2239 fragment, or a frame from an alternate protocol such as WirelessHART
2240 or ISA100.11a.
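The label-less, G-MPLS-style switching state can be sketched as a pairing from receive cell to transmit cell (the cell coordinates below are illustrative):

```python
# Layer-2 forwarding state for one node on a Track: each RX-cell
# (slotOffset, channelOffset) is paired with a TX-cell toward the next
# hop.  The frame is switched on the cell it arrived in, with no
# inspection of the network-layer payload.
track_state = {
    # rx cell     ->  tx cell
    (5, 3): (20, 1),
    (6, 3): (21, 1),
}

def switch_frame(rx_cell, frame):
    """Return the TX-cell for a frame, or None to punt to upper layers.

    Any frame accepted by the TSCH MAC in a Track RX-cell is switched,
    whether it carries an IPv6 packet, a 6LoWPAN fragment, or a frame
    from another protocol such as WirelessHART or ISA100.11a.
    """
    return track_state.get(rx_cell)

print(switch_frame((5, 3), b"ipv6-packet"))   # on-track: switched
print(switch_frame((7, 0), b"ipv6-packet"))   # not a Track cell: punt
```

The point of the sketch is that the "label" is implicit in the cell of arrival, so the state table needs no per-protocol entries.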
2242 A data frame that is forwarded along a Track normally has a
2243 destination MAC address that is set to broadcast - or a multicast
2244 address depending on MAC support. This way, the MAC layer in the
2245 intermediate nodes accepts the incoming frame and 6top switches it
2246 without incurring a change in the MAC header. In the case of
2247 IEEE802.15.4, this means effectively broadcast, so that along the
2248 Track the short address for the destination of the frame is set to
2249 0xFFFF.
2251 A Track is thus formed end-to-end as a succession of paired bundles,
2252 a receive bundle from the previous hop and a transmit bundle to the
2253 next hop along the Track, and a cell in such a bundle belongs to at
2254 most one Track. For a given iteration of the device schedule, the
2255 effective channel of the cell is obtained by adding a pseudo-random
2256 number to the channelOffset of the cell, which results in a rotation
2257 of the frequency that is used for transmission. The bundles may be
2258 computed so as to accommodate both variable rates and
2259 retransmissions, so they might not be fully used at a given iteration
2260 of the schedule. The 6TiSCH architecture provides additional means
2261 to avoid waste of cells as well as overflows in the transmit bundle,
2262 as follows:
2264 On one hand, a TX-cell that is not needed for the current iteration
2265 may be reused opportunistically on a per-hop basis for routed
2266 packets. When all of the frames that were received for a given Track
2267 are effectively transmitted, any available TX-cell for that Track can
2268 be reused for upper-layer traffic for which the next-hop router
2269 matches the next hop along the Track. In that case, the cell that is
2270 being used is effectively a TX-cell from the Track, but the short
2271 address for the destination is that of the next-hop router. As a
2272 result, a frame that is received in an RX-cell of a Track with a
2273 destination MAC address set to this node, as opposed to broadcast,
2274 must be extracted from the Track and delivered to the upper layer (a
2275 frame with an unrecognized MAC address is dropped at the lower MAC
2276 layer and thus is not received at the 6top sublayer).
2278 On the other hand, it might happen that there are not enough TX-cells
2279 in the transmit bundle to accommodate the Track traffic, for instance
2280 if more retransmissions are needed than provisioned. In that case,
2281 the frame can be placed for transmission in the bundle that is used
2282 for layer-3 traffic towards the next hop along the track as long as
2283 it can be routed by the upper layer, that is, typically, if the frame
2284 transports an IPv6 packet. The MAC address should be set to the
2285 next-hop MAC address to avoid confusion. As a result, a frame
2286 that is received over a layer-3 bundle may in fact be associated with
2287 a Track. On a classical IP link such as Ethernet, off-track traffic
2288 is typically traffic in excess of the reservation, routed along a
2289 non-reserved path based on its QoS setting. But with 6TiSCH, since the
2290 use of the layer-3 bundle may be due to transmission failures, it
2291 makes sense for the receiver to recognize a frame that should be re-
2292 tracked, and to place it back on the appropriate bundle if possible.
2293 A frame should be re-tracked if the Per-Hop-Behavior group indicated
2294 in the Differentiated Services Field in the IPv6 header is set to
2295 Deterministic Forwarding, as discussed in Section 5.4.1. A frame is
2296 re-tracked by scheduling it for transmission over the transmit bundle
2297 associated to the Track, with the destination MAC address set to
2298 broadcast.
2300 There are 2 modes for a Track, transport mode and tunnel mode.
2302 5.3.4.1. Transport Mode
2304 In transport mode, the Protocol Data Unit (PDU) is associated with
2305 flow-dependent meta-data that refers uniquely to the Track, so the
2306 6top sublayer can place the frame in the appropriate cell without
2307 ambiguity. In the case of IPv6 traffic, this flow identification is
2308 transported in the Flow Label of the IPv6 header. Associated with
2309 the source IPv6 address, the Flow Label forms a globally unique
2310 identifier for that particular Track that is validated at egress
2311 before restoring the destination MAC address (DMAC) and punting to
2312 the upper layer.
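The ingress classification and egress validation described above can be sketched as follows (the addresses, Flow Label value, and Track name are hypothetical):

```python
from ipaddress import IPv6Address

# In transport mode the (source address, Flow Label) tuple uniquely
# identifies the Track.  The entries below are illustrative only.
tracks = {
    (IPv6Address("2001:db8::1"), 0x12345): "track-A",
}

def classify(src, flow_label):
    """Return the Track for a packet, or None if it is not on a Track."""
    return tracks.get((IPv6Address(src), flow_label))

def validate_at_egress(src, flow_label, expected_track):
    """Egress check before restoring the DMAC and punting upward."""
    return classify(src, flow_label) == expected_track

print(classify("2001:db8::1", 0x12345))
print(validate_at_egress("2001:db8::1", 0x12345, "track-A"))
```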
2314 | ^
2315 +--------------+ | |
2316 | IPv6 | | |
2317 +--------------+ | |
2318 | 6LoWPAN HC | | |
2319 +--------------+ ingress egress
2320 | 6top | sets +----+ +----+ restores
2321 +--------------+ dmac to | | | | dmac to
2322 | TSCH MAC | brdcst | | | | self
2323 +--------------+ | | | | | |
2324 | LLN PHY | +-------+ +--...-----+ +-------+
2325 +--------------+
2327 Track Forwarding, Transport Mode
2329 5.3.4.2. Tunnel Mode
2331 In tunnel mode, the frames originate from an arbitrary protocol over
2332 a compatible MAC that may or may not be synchronized with the 6TiSCH
2333 network. An example of this would be a router with a dual radio that
2334 is capable of receiving and sending WirelessHART or ISA100.11a frames
2335 with the second radio, by presenting itself as an access Point or a
2336 Backbone Router, respectively.
2338 In that mode, some entity (e.g. PCE) can coordinate with a
2339 WirelessHART Network Manager or an ISA100.11a System Manager to
2340 specify the flows that are to be transported transparently over the
2341 Track.
2343 +--------------+
2344 | IPv6 |
2345 +--------------+
2346 | 6LoWPAN HC |
2347 +--------------+ set restore
2348 | 6top | +dmac+ +dmac+
2349 +--------------+ to|brdcst to|nexthop
2350 | TSCH MAC | | | | |
2351 +--------------+ | | | |
2352 | LLN PHY | +-------+ +--...-----+ +-------+
2353 +--------------+ | ingress egress |
2354 | |
2355 +--------------+ | |
2356 | LLN PHY | | |
2357 +--------------+ | |
2358 | TSCH MAC | | |
2359 +--------------+ | dmac = | dmac =
2360 |ISA100/WiHART | | nexthop v nexthop
2361 +--------------+
2363 Figure 7: Track Forwarding, Tunnel Mode
2365 In that case, the flow information that identifies the Track at the
2366 ingress 6TiSCH router is derived from the RX-cell. The dmac is set
2367 to this node but the flow information indicates that the frame must
2368 be tunneled over a particular Track so the frame is not passed to the
2369 upper layer. Instead, the dmac is forced to broadcast and the frame
2370 is passed to the 6top sublayer for switching.
2372 At the egress 6TiSCH router, the reverse operation occurs. Based on
2373 metadata associated to the Track, the frame is passed to the
2374 appropriate link layer with the destination MAC restored.
2376 5.3.4.3. Tunnel Metadata
2378 Metadata coming with the Track configuration is expected to provide
2379 the destination MAC address of the egress endpoint as well as the
2380 tunnel mode and specific data depending on the mode, for instance a
2381 service access point for frame delivery at egress. If the tunnel
2382 egress point does not have a MAC address that matches the
2383 configuration, the Track installation fails.
2385 In transport mode, if the final layer-3 destination is the tunnel
2386 termination, then it is possible that the IPv6 address of the
2387 destination is compressed at the 6LoWPAN sublayer based on the MAC
2388 address. It is thus mandatory at the ingress point to validate that
2389 the MAC address that was used at the 6LoWPAN sublayer for compression
2390 matches that of the tunnel egress point. For that reason, the node
2391 that injects a packet on a Track checks that the destination is
2392 effectively that of the tunnel egress point before it overwrites it
2393 to broadcast. The 6top sublayer at the tunnel egress point reverts
2394 that operation to the MAC address obtained from the tunnel metadata.
2396 5.4. Operations of Interest for DetNet and PCE
2398 In a classical system, the 6TiSCH device does not place the request
2399 for bandwidth between itself and another device in the network.
2400 Rather, an Operation Control System invoked through a Human/Machine
2401 Interface (HMI) indicates the Traffic Specification, in particular in
2402 terms of latency and reliability, and the end nodes. With this, the
2403 PCE must compute a Track between the end nodes and provision the
2404 network with per-flow state that describes the per-hop operation for
2405 a given packet, the corresponding timeSlots, and the flow
2406 identification that makes it possible to recognize when a certain
2407 packet belongs to a certain Track, to sort out duplicates, etc.
2409 For a static configuration that serves a certain purpose for a long
2410 period of time, it is expected that a node will be provisioned in one
2411 shot with a full schedule, which incorporates the aggregation of its
2412 behavior for multiple Tracks. 6TiSCH expects that the programming of
2413 the schedule will be done over CoAP as discussed in 6TiSCH Resource
2414 Management and Interaction using CoAP [I-D.ietf-6tisch-coap].
2416 But a hybrid mode may be required as well, whereby a single Track is
2417 added, modified, or removed, for instance if it appears that a Track
2418 does not perform as expected for, say, PDR. For that case, the
2419 expectation is that a protocol that flows along a Track (to be), in a
2420 fashion similar to classical Traffic Engineering (TE) [CCAMP], may be
2421 used to update the state in the devices. 6TiSCH provides means for a
2422 device to negotiate a timeSlot with a neighbor, but in general that
2423 flow was not designed and no protocol was selected and it is expected
2424 that DetNet will determine the appropriate end-to-end protocols to be
2425 used in that case.
2427 Stream Management Entity
2429 Operational System and HMI
2431 -+-+-+-+-+-+-+ Northbound -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
2433 PCE PCE PCE PCE
2435 -+-+-+-+-+-+-+ Southbound -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
2437 --- 6TiSCH------6TiSCH------6TiSCH------6TiSCH--
2438 6TiSCH / Device Device Device Device \
2439 Device- - 6TiSCH
2440 \ 6TiSCH 6TiSCH 6TiSCH 6TiSCH / Device
2441 ----Device------Device------Device------Device--
2443 Figure 8
2445 5.4.1. Packet Marking and Handling
2447 Section "Packet Marking and Handling" of
2448 [I-D.ietf-6tisch-architecture] describes the packet tagging and
2449 marking that is expected in 6TiSCH networks.
2451 5.4.1.1. Tagging Packets for Flow Identification
2453 For packets that are routed by a PCE along a Track, the tuple formed
2454 by the IPv6 source address and a local RPLInstanceID is tagged in the
2455 packets to identify uniquely the Track and associated transmit bundle
2456 of timeSlots.
2458 As a result, the tagging that is used for a DetNet flow outside
2459 the 6TiSCH LLN MUST be swapped into 6TiSCH formats and back as the
2460 packet enters and then leaves the 6TiSCH network.
2462 Note: The method and format used for encoding the RPLInstanceID at
2463 6lo is generalized to all 6TiSCH topological Instances, which
2464 includes Tracks.
2466 5.4.1.2. Replication, Retries and Elimination
2468 6TiSCH expects elimination and replication of packets along a complex
2469 Track, but takes no position on how the sequence numbers would be
2470 tagged in the packet.
2472 As it goes, 6TiSCH expects that timeSlots corresponding to copies of
2473 a same packet along a Track are correlated by configuration, and does
2474 not need to process the sequence numbers.
2476 The semantics of the configuration MUST enable correlated timeSlots
2477 to be grouped for transmit (and respectively receive) with an 'OR'
2478 relation, and then an 'AND' relation MUST be configurable between
2479 groups. The semantics is that if the transmit (and respectively
2480 receive) operation succeeded in one timeSlot in an 'OR' group, then
2481 all the other timeSlots in the group are ignored. Now, if there are
2482 at least two groups, the 'AND' relation between the groups indicates
2483 that one operation must succeed in each of the groups.
2485 On the transmit side, timeSlots provisioned for retries along the
2486 same branch of a Track are placed in the same 'OR' group. The 'OR'
2487 relation indicates that if a transmission is acknowledged, then
2488 further transmissions SHOULD NOT be attempted for timeSlots in that
2489 group. There are as many 'OR' groups as there are branches of the
2490 Track departing from this node. Different 'OR' groups are programmed
2491 for the purpose of replication, each group corresponding to one
2492 branch of the Track. The 'AND' relation between the groups indicates
2493 that transmission over any of the branches MUST be attempted
2494 regardless of whether a transmission succeeded in another branch. It
2495 is also possible to place cells to different next-hop routers in the
2496 same 'OR' group. This makes it possible to route along multi-path
2497 Tracks, trying one next hop and then another only if sending to the
2498 first fails.
2499 On the receive side, all timeSlots are programmed in a same 'OR'
2500 group. Retries of the same copy, as well as converging branches for
2501 elimination, are thus merged, meaning that the first successful
2502 reception is enough and that all the other timeSlots can be ignored.
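The 'OR'-within-group / 'AND'-between-groups semantics can be written down directly as a predicate (a sketch; each inner list holds the success/failure outcomes of one group's correlated timeSlots):

```python
def iteration_succeeded(groups):
    """Evaluate one schedule iteration under the 'OR'/'AND' semantics.

    Each inner list holds the per-timeSlot outcomes (True meaning an
    acknowledged transmission, or a successful reception) of one 'OR'
    group; the iteration succeeds only if every group ('AND' relation)
    had at least one successful timeSlot ('OR' relation).
    """
    return all(any(group) for group in groups)

# Replication over two branches: each branch is one 'OR' group holding
# the first attempt and its retry.
branch_a = [False, True]    # first attempt lost, retry acknowledged
branch_b = [True, False]    # first attempt acknowledged, retry skipped
print(iteration_succeeded([branch_a, branch_b]))
print(iteration_succeeded([[False, False], branch_b]))
```

On the receive side, all timeSlots form a single 'OR' group, so the same predicate applies with one inner list.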
2504 5.4.1.3. Differentiated Services Per-Hop-Behavior
2506 Additionally, an IP packet that is sent along a Track uses the
2507 Differentiated Services Per-Hop-Behavior Group called Deterministic
2508 Forwarding, as described in
2509 [I-D.svshah-tsvwg-deterministic-forwarding].
2511 5.4.2. Topology and capabilities
2513 6TiSCH nodes are usually IoT devices, characterized by a very
2514 limited amount of memory, just enough buffers to store one or a few
2515 IPv6 packets, and limited bandwidth between peers. As a result, a
2516 node maintains only a small amount of peering information, and will
2517 not be able to store many packets waiting to be forwarded. Peers can
2518 be identified through MAC or IPv6 addresses, but a Cryptographically
2519 Generated Address (CGA) [RFC3972] may also be used.
2521 Neighbors can be discovered over the radio using mechanisms such as
2522 beacons, but, though the neighbor information is available in the
2523 6TiSCH interface data model, 6TiSCH does not describe a protocol to
2524 pro-actively push the neighborhood information to a PCE. This
2525 protocol should be described and should operate over CoAP. The
2526 protocol should be able to carry multiple metrics, in particular the
2527 same metrics as used for RPL operations [RFC6551].
2529 The energy that the device consumes in sleep, transmit and receive
2530 modes can be evaluated and reported. So can the amount of energy
2531 that is stored in the device and the power that can be scavenged
2532 from the environment. The PCE SHOULD be able to compute Tracks that
2533 implement policies on how the energy is consumed, for instance to
2534 balance the load between nodes, or to ensure that the energy spent
2535 does not exceed the energy scavenged over a period of time, etc.
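Such an energy-neutrality policy reduces to a simple per-period feasibility check that the PCE could apply while placing a Track (a sketch; the per-slot energy figures are hypothetical):

```python
def track_is_energy_neutral(slots_per_mode, energy_per_mode_uj,
                            scavenged_uj):
    """Check that the energy a node spends on a Track over a period
    does not exceed what it scavenges over the same period.

    slots_per_mode: slots spent in each mode ('sleep', 'tx', 'rx');
    energy_per_mode_uj: assumed per-slot energy cost in microjoules.
    """
    spent = sum(slots_per_mode[mode] * energy_per_mode_uj[mode]
                for mode in slots_per_mode)
    return spent <= scavenged_uj

# Illustrative per-slot costs in microjoules; not measured values.
cost = {"sleep": 1, "tx": 120, "rx": 90}
print(track_is_energy_neutral({"sleep": 900, "tx": 60, "rx": 40},
                              cost, scavenged_uj=12000))
```

A PCE balancing load between nodes would evaluate this constraint for every node a candidate Track traverses, not just the endpoints.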
2537 5.5. Security Considerations
2539 On top of the classical protection of control signaling that can be
2540 expected to support DetNet, it must be noted that 6TiSCH networks
2541 operate on limited resources that can be depleted rapidly if an
2542 attacker manages to mount a DoS attack on the system, for instance
2543 by placing a rogue device in the network, or by obtaining management
2544 control and setting up extra paths.
2546 5.6. Acknowledgments
2548 This specification derives from the 6TiSCH architecture, which is the
2549 result of multiple interactions, in particular during the 6TiSCH
2550 (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at
2551 the IETF.
2553 The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier
2554 Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael
2555 Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon,
2556 Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey,
2557 Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria
2558 Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation
2559 and various contributions.
2561 6. Cellular Radio Use Cases
2563 (This section was derived from draft-korhonen-detnet-telreq-00)
2565 6.1. Introduction and background
2567 The recent developments in telecommunication networks, especially in
2568 the cellular domain, are heading towards transport networks where
2569 precise time synchronization support has to be one of the basic
2570 building blocks. While the transport networks themselves have
2571 practically transitioned to all-IP packet-based networks to meet the
2572 bandwidth and cost requirements, a highly accurate clock distribution
2573 has become a challenge. Earlier, the transport networks in the
2574 cellular domain were typically time division multiplexing (TDM)
2575 based and provided frequency synchronization capabilities as a part
2576 of the transport media. Alternatively other technologies such as
2577 Global Positioning System (GPS) or Synchronous Ethernet (SyncE)
2578 [SyncE] were used. New radio access network deployment models and
2579 architectures may require time sensitive networking services with
2580 strict requirements on other parts of the network that previously
2581 were not considered to be packetized at all. The time and
2582 synchronization support are already topical for backhaul and midhaul
2583 packet networks [MEF], and becoming a real issue for fronthaul
2584 networks. Specifically in the fronthaul networks the timing and
2585 synchronization requirements can be extreme for packet based
2586 technologies, for example, in order of sub +-20 ns packet delay
2587 variation (PDV) and frequency accuracy of +0.002 PPM [Fronthaul].
2589 Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985]
2590 for legacy transport support) have become popular tools to build and
2591 manage new all-IP radio access networks (RAN)
2592 [I-D.kh-spring-ip-ran-use-case].  Although various timing and
2593 synchronization optimizations have already been proposed and
2594 implemented, including 1588 PTP enhancements
2595 [I-D.ietf-tictoc-1588overmpls][I-D.mirsky-mpls-residence-time], these
2596 solutions are not necessarily sufficient for the forthcoming RAN
2597 architectures nor able to guarantee the tighter time-synchronization
2598 requirements [CPRI].  There are also existing solutions for TDM over
2599 IP [RFC5087] [RFC4553] and for TDM over Ethernet transports
2600 [RFC5086].  The really interesting and important existing work for
2601 time-sensitive networking has been done for Ethernet [TSNTG], which
2602 specifies the use of the IEEE 1588 precision time protocol (PTP)
2603 [IEEE1588] in the context of IEEE 802.1D and IEEE 802.1Q.  While
2604 IEEE 802.1AS [IEEE8021AS] specifies a Layer-2 time synchronization
2605 service, other specifications, such as IEEE 1722 [IEEE1722], specify
2606 Ethernet-based Layer-2 transport for time-sensitive streams.  New
2607 promising work seeks to enable the transport of time-sensitive
2608 fronthaul streams in Ethernet bridged networks [IEEE8021CM].
2609 Similarly to IEEE 1722, there is an ongoing standardization effort
2610 to define a Layer-2 transport encapsulation format for transporting
2611 radio over Ethernet (RoE) in the IEEE 1904.3 Task Force [IEEE19043].
2613 As already mentioned, all-IP RANs and various "haul" networks would
2614 benefit from time synchronization and time-sensitive transport
2615 services.  Although Ethernet appears to be the unifying technology
2616 for the transport, there is still a disconnect in providing Layer-3
2617 services.  The protocol stack typically has a number of layers below
2618 the Ethernet Layer-2 that is visible to the Layer-3 IP transport.
2619 It is not uncommon that on top of the lowest-layer (optical)
2620 transport there is a first layer of Ethernet, followed by one or
2621 more layers of MPLS, PseudoWires and/or other tunneling protocols,
2622 finally carrying the Ethernet layer visible to the user plane IP
2623 traffic.  While there are existing technologies, especially in the
2624 MPLS/PWE space, to establish circuits through routed and switched
2625 networks, there is no way to signal the time synchronization and
2626 time-sensitive stream requirements/reservations for Layer-3 flows
2627 such that the entire transport stack, including the Ethernet layers
2628 that need to be configured, is addressed.  Furthermore, not all
2629 "user plane" traffic will be IP.  Therefore, the same solution also
2630 needs to address use cases where the user plane traffic is yet
2631 another layer of Ethernet frames.  There is existing work describing
2632 the problem statement [I-D.finn-detnet-problem-statement] and the
2633 architecture [I-D.finn-detnet-architecture] for deterministic
2634 networking (DetNet), which eventually targets providing solutions
2635 for time-sensitive (IP/transport) streams with deterministic
2636 properties over Ethernet-based switched networks.
2638 This document describes requirements for deterministic networking in
2639 a cellular telecom transport networks context. The requirements
2640 include time synchronization, clock distribution and ways of
2641 establishing time-sensitive streams for both Layer-2 and Layer-3 user
2642 plane traffic using IETF protocol solutions.
2722 6.2. Network architecture
2724 Figure 9 illustrates a typical, 3GPP-defined, cellular network
2725 architecture, which also has fronthaul and midhaul network segments.
2726 The fronthaul refers to the network connecting base stations (base
2727 band processing units) to the remote radio heads (antennas). The
2728 midhaul network typically refers to the network inter-connecting base
2729 stations (or small/pico cells).
2731 Fronthaul networks build on the available excess time after the base
2732 band processing of the radio frame has completed.  Therefore, the
2733 available time for networking is actually very limited, which in
2734 practice determines how far the remote radio heads can be from the
2735 base band processing units (i.e., base stations).  For example, in
2736 the case of LTE radio, the Hybrid ARQ processing of a radio frame
2737 is allocated 3 ms.  Typically the processing completes much earlier
2738 (leaving, say, up to 400 us, though possibly much less), allowing
2739 the remaining time to be used, e.g., for the fronthaul network.
2740 200 us equals roughly 40 km of optical-fiber-based transport
2741 (assuming a total round trip time of 2*200 us).  The base band
2742 processing time and the available "delay budget" for the fronthaul
2743 are subject to change, possibly dramatically, in the forthcoming
2744 "5G" to meet, for example, the envisioned reduced radio round trip
2745 times, and other architectural and service requirements [NGMN].
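The fiber-distance arithmetic above can be sketched as follows (an
illustrative calculation, assuming light propagates in fiber at roughly
200,000 km/s, i.e. about 5 us per km; the budget values are examples,
not normative figures):

```python
# Illustrative fronthaul delay-budget arithmetic; the propagation
# speed and budget values are assumptions, not normative figures.
FIBER_KM_PER_US = 0.2            # ~200,000 km/s in fiber -> 0.2 km per us

def max_fiber_km(one_way_budget_us):
    """Longest one-way fiber run that fits a one-way delay budget."""
    return one_way_budget_us * FIBER_KM_PER_US

# A 2*200 us round trip leaves 200 us each way, i.e. roughly 40 km,
# matching the figure quoted in the text.
print(max_fiber_km(200))
```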
2747 The maximum "delay budget" is then consumed by all nodes and required
2748 buffering between the remote radio head and the base band processing
2749 in addition to the distance-incurred delay.  Packet delay variation
2750 (PDV) is problematic for fronthaul networks and must be minimized.
2751 If the transport network cannot guarantee a low enough PDV,
2752 additional buffering has to be introduced at the edges of the
2753 network to buffer out the jitter.  Any buffering, though, eats into
2754 the total available delay budget.  Section 6.3 discusses the PDV
2755 requirements in more detail.
2757 Y (remote radios)
2758 \
2759 Y__ \.--. .--. +------+
2760 \_( `. +---+ _(Back`. | 3GPP |
2761 Y------( Front )----|eNB|----( Haul )----| core |
2762 ( ` .Haul ) +---+ ( ` . ) ) | netw |
2763 /`--(___.-' \ `--(___.-' +------+
2764 Y_/ / \.--. \
2765 Y_/ _( Mid`. \
2766 ( Haul ) \
2767 ( ` . ) ) \
2768 `--(___.-'\_____+---+ (small cells)
2769 \ |SCe|__Y
2770 +---+ +---+
2771 Y__|eNB|__Y
2772 +---+
2773 Y_/ \_Y ("local" radios)
2775 Figure 9: Generic 3GPP-based cellular network architecture with
2776 Front/Mid/Backhaul networks
2778 6.3. Time synchronization requirements
2780 In cellular networks, starting from Long Term Evolution (LTE)
2781 [TS36300] [TS23401] radio, phase synchronization is needed in
2782 addition to frequency synchronization.  The commonly referenced
2783 fronthaul network synchronization requirements are typically drawn
2784 from the Common Public Radio Interface (CPRI) [CPRI] specification,
2785 which defines the transport protocol between the base band
2786 processing part - the radio equipment controller (REC) - and the
2787 remote antenna part - the radio equipment (RE).  However, the
2788 fundamental requirements still originate from the respective
2789 cellular system and radio specifications such as the 3GPP ones
2790 [TS25104][TS36104][TS36211] [TS36133].
2792 The fronthaul time synchronization requirements for the current 3GPP
2793 LTE-based networks are listed below:
2795 Transport link contribution to radio frequency error:
2797 +-2 PPB. The given value is considered to be "available" for the
2798 fronthaul link out of the total 50 PPB budget reserved for the
2799 radio interface.
2801 Delay accuracy:
2803 +-8.138 ns, i.e., +-1/32 Tc (UMTS chip time, Tc = 1/3.84 MHz), in
2804 the downlink direction and excluding the (optical) cable length in
2805 one direction.  Round trip accuracy is then +-16.276 ns.  The value
2806 is this small in order to meet the 3GPP timing alignment error
2807 (TAE) measurement requirements.
2809 Packet delay variation (PDV):
2811 * For multiple input multiple output (MIMO) or TX diversity
2812 transmissions, at each carrier frequency, TAE shall not exceed
2813 65 ns (i.e. 1/4 Tc).
2815 * For intra-band contiguous carrier aggregation, with or without
2816 MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2
2817 Tc).
2819 * For intra-band non-contiguous carrier aggregation, with or
2820 without MIMO or TX diversity, TAE shall not exceed 260 ns (i.e.
2821 one Tc).
2823 * For inter-band carrier aggregation, with or without MIMO or TX
2824 diversity, TAE shall not exceed 260 ns.
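The Tc-derived figures above can be checked with a few lines of
arithmetic (a sketch; only the UMTS chip rate of 3.84 MHz is taken
from the requirements):

```python
# Reproduce the Tc-based timing figures listed above.
TC_NS = 1e9 / 3.84e6             # UMTS chip time Tc in ns, ~260.42 ns

delay_accuracy_ns = TC_NS / 32   # +-1/32 Tc -> ~8.138 ns one way
tae_mimo_ns = TC_NS / 4          # 1/4 Tc -> ~65 ns (MIMO/TX diversity)
tae_intra_contig_ns = TC_NS / 2  # 1/2 Tc -> ~130 ns (intra-band contig. CA)
tae_one_tc_ns = TC_NS            # one Tc -> ~260 ns (non-contig./inter-band)

print(round(delay_accuracy_ns, 3), round(tae_mimo_ns), round(tae_one_tc_ns))
```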
2826 The above-listed time synchronization requirements are hard to meet
2827 even with point-to-point connected networks, not to mention cases
2828 where the underlying transport network actually consists of
2829 multiple hops.  It is expected that network deployments will have
2830 to deal with the jitter requirements by buffering at the very ends
2831 of the connections, since trying to meet the jitter requirements in
2832 every intermediate node is likely to be too costly.  However, every
2833 measure to reduce jitter and delay on the path is valuable for
2834 making it easier to meet the end-to-end requirements.
2836 In order to meet the timing requirements, both senders and
2837 receivers must be in perfect sync.  This calls for a very accurate
2838 clock distribution solution.  Basically, every means of, and all
2839 hardware support for, guaranteeing accurate time synchronization in
2840 the network is needed.  As an example, support for IEEE 1588
2841 transparent clocks (TC) in every intermediate node would be helpful.
2843 6.4. Time-sensitive stream requirements
2845 In addition to the time synchronization requirements listed in
2846 Section 6.3, fronthaul networks assume practically error-free
2847 transport.  The maximum bit error rate (BER) has been defined to be
2848 10^-12.  When packetized, that roughly equals a packet error rate
2849 (PER) of 2.4*10^-9 (assuming ~300-byte packets).  Retransmitting
2850 lost packets and/or using forward error correction (FEC) to
2851 circumvent bit errors is practically impossible due to the
2852 additional incurred delay.  Using redundant streams for better
2853 delivery guarantees is also practically impossible due to the high
2854 bandwidth requirements fronthaul networks have.  For instance, the
2855 current uncompressed CPRI bandwidth expansion ratio is roughly 20:1
2856 compared to the IP-layer user payload carried in "radio sample form".
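The BER-to-PER conversion above can be sketched as follows (assuming
independent bit errors and the ~300-byte packet size mentioned in the
text):

```python
# Back-of-the-envelope BER -> PER conversion for ~300-byte packets.
BER = 1e-12                      # maximum fronthaul bit error rate
PACKET_BITS = 300 * 8            # ~300-byte packets

# Probability that at least one bit in the packet is in error;
# for small BER this is approximately PACKET_BITS * BER.
per = 1 - (1 - BER) ** PACKET_BITS
print(per)                       # roughly 2.4e-9
```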
2858 The other fundamental assumption is that fronthaul links are
2859 symmetric.  Last, all fronthaul streams (carrying radio data) have
2860 equal priority and cannot delay or pre-empt each other.  This
2861 implies that the network must always be sufficiently undersubscribed
2862 to guarantee that each time-sensitive flow meets its schedule.
2864 Mapping the fronthaul requirements to [I-D.finn-detnet-architecture]
2865 Section 3, "Providing the DetNet Quality of Service", the features
2866 that seem usable are:
2868 (a) Zero congestion loss.
2870 (b) Pinned-down paths.
2872 The current time-sensitive networking features may still not be
2873 sufficient for fronthaul traffic.  Therefore, having specific
2874 profiles that take the requirements of fronthaul into account is
2875 deemed to be useful [IEEE8021CM].
2877 The actual transport protocols and/or solutions to establish required
2878 transport "circuits" (pinned-down paths) for fronthaul traffic are
2879 still undefined.  Those are likely to include, but are not limited
2880 to, solutions directly over Ethernet, over IP, and over MPLS/
2881 PseudoWire transport.
2883 6.5. Security considerations
2885 Establishing time-sensitive streams in the network entails
2886 reserving networking resources, sometimes for a considerably long
2887 time.  It is important that these reservation requests be
2888 authenticated to prevent malicious reservation attempts from
2889 hostile nodes, or even accidental misconfiguration.  This is
2890 especially important where the reservation requests span
2891 administrative domains.  Furthermore, the reservation information
2892 itself should be digitally signed to reduce the risk of a
2893 legitimate node pushing a stale or hostile configuration.
2895 7. Other Use Cases
2897 (This section was derived from draft-zha-detnet-use-case-00)
2899 7.1. Introduction
2901 The rapid growth of today's communication systems and their reach
2902 into almost all aspects of daily life have led to great dependency
2903 on the services they provide.  The communication network, as it is
2904 today, has applications such as multimedia and peer-to-peer file
2905 sharing that require Quality of Service (QoS) guarantees in terms
2906 of delay and jitter to maintain a certain level of performance.
2907 Meanwhile, mobile wireless communication has become an important
2908 part of supporting modern society, with increasing importance over
2909 the last years.  A communication network with hard real-time
2910 behavior and high reliability is essential for current and next-
2911 generation mobile wireless networks, as well as their bearer
2912 networks, to meet end-to-end (E2E) performance requirements.
2914 Conventional transport networks are IP-based because of the
2915 bandwidth and cost requirements.  However, guaranteeing delay and
2916 jitter becomes a challenge in the case of contention, since the
2917 service here is not deterministic but best effort.  With more and
2918 more rigid demands on latency control in the future network
2919 [METIS], deterministic networking [I-D.finn-detnet-architecture] is
2920 a promising solution to meet ultra-low-delay applications and use
2921 cases.  There are already typical delay-sensitive networking
2922 requirements in midhaul and backhaul networks to support LTE and
2923 future 5G networks [net5G].  And not only the telecom industry but
2924 also other vertical industries have increasing demand for delay-
2925 sensitive communications as automation becomes critical.
2927 More specifically, CoMP techniques, D2D, industrial automation, and
2928 gaming/media services all depend greatly on low-delay communication
2929 as well as high reliability to guarantee service performance.  Note
2930 that deterministic networking is not the same as low latency, as it
2931 focuses on the worst-case delay bound for the duration of a certain
2932 application or service.  It can be argued that without high
2933 certainty and an absolute delay guarantee, low-delay provisioning
2934 is only relative [rfc3393], which is insufficient for some delay-
2935 critical services, since even a single delay violation cannot be
2936 tolerated.  Overall, the requirements from vertical industries seem
2937 well aligned with the expected low-latency and highly deterministic
2938 performance of future networks.
2940 This document describes several use cases and scenarios with
2941 requirements for deterministic delay guarantees within the scope of
2942 deterministic networking [I-D.finn-detnet-problem-statement].
2944 7.2. Critical Delay Requirements
2946 Delay and jitter requirements have been taken into account as a major
2947 component of QoS provisioning since the birth of the Internet.
2948 Delay-sensitive networking, with its increasing importance, has
2949 become fundamental to mobile wireless communications and to other
2950 application areas that rely heavily on low-delay communication.  Due
2951 to the best-effort nature of IP networking, mitigating contention and
2952 buffering is the main means of serving delay-sensitive services:
2953 more bandwidth is assigned to keep links lightly loaded, in other
2954 words, to reduce the probability of congestion.  However, keeping
2955 links lightly loaded not only lacks determinism but is also limited
2956 in serving the applications of future communication systems; it
2957 cannot provide a deterministic delay guarantee.  Take as a starting
2958 point [METIS], which documents the fundamental challenges as well as
2959 the overall technical goals of the 5G mobile and wireless system.
2960 5G should support: 1000 times higher mobile data volume per area, 10
2961 to 100 times higher typical user data rate, 10 to 100 times more
2962 connected devices, 10 times longer battery life for low-power
2963 devices, and 5 times reduced End-to-End (E2E) latency, at cost and
2964 energy consumption levels similar to today's systems.  Taking the
2965 latency-related part of these requirements, the current LTE system
2966 has an E2E latency of less than 20 ms [LTE-Latency], which leads to
2967 around 5 ms E2E latency for 5G networks.  It has been argued that
2968 fulfilling such a rigid latency demand at similar cost will be most
2969 challenging, as the system also requires 100 times the bandwidth and
2970 100 times the number of connected devices.  As a result, simply
2971 adding redundant bandwidth provisioning is no longer an efficient
2972 solution, given bandwidth requirements higher than ever before.  In
2973 addition to bandwidth provisioning, a critical flow within its
2974 reserved resources should not be affected by other flows, regardless
2975 of network load.  Robust protection of critical flows must also not
2976 depend on redundant bandwidth allocation.  Deterministic networking
2977 techniques at both Layer-2 and Layer-3, using IETF protocol
2978 solutions, are promising for serving these scenarios.
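As a quick sanity check of the latency target above (a sketch; the
20 ms figure and the 5x reduction goal are the ones cited from
[LTE-Latency] and [METIS]):

```python
# Derive the rough 5G E2E latency target from the cited figures.
lte_e2e_ms = 20        # current LTE E2E latency, < 20 ms [LTE-Latency]
reduction = 5          # METIS goal: 5x reduced E2E latency

target_ms = lte_e2e_ms / reduction
print(target_ms)       # 4.0, i.e. "around 5 ms" as an upper-end target
```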
2980 7.3. Coordinated multipoint processing (CoMP)
2982 In wireless communication systems, coordinated multipoint
2983 processing (CoMP) is considered an effective technique for solving
2984 the inter-cell interference problem and improving cell-edge user
2985 throughput [CoMP].
2987 7.3.1. CoMP Architecture
2988 +--------------------------+
2989 | CoMP |
2990 +--+--------------------+--+
2991 | |
2992 +----------+ +------------+
2993 | Uplink | | Downlink |
2994 +-----+----+ +--------+---+
2995 | |
2996 ------------------- -----------------------
2997 | | | | | |
2998 +---------+ +----+ +-----+ +------------+ +-----+ +-----+
2999 | Joint | | CS | | DPS | | Joint | | CS/ | | DPS |
3000 |Reception| | | | | |Transmission| | CB | | |
3001 +---------+ +----+ +-----+ +------------+ +-----+ +-----+
3002 | |
3003 |----------- |-------------
3004 | | | |
3005 +------------+ +---------+ +----------+ +------------+
3006 | Joint | | Soft | | Coherent | | Non- |
3007 |Equalization| |Combining| | JT | | Coherent JT|
3008 +------------+ +---------+ +----------+ +------------+
3010 Figure 10: Framework of CoMP Technology
3012 As shown in Figure 10, CoMP reception and transmission is a
3013 framework in which multiple geographically distributed antenna
3014 nodes cooperate to improve the performance of the users served in
3015 the common cooperation area.  The design principle of CoMP is to
3016 extend the current single-cell-to-multi-UE transmission to multi-
3017 cell-to-multi-UE transmission through base station cooperation.  In
3018 contrast to the single-cell scenario, CoMP has critical issues such
3019 as backhaul latency, CSI (Channel State Information) reporting and
3020 accuracy, and network complexity.  Clearly the first two are very
3021 delay sensitive and will be discussed in the next section.
3023 7.3.2. Delay Sensitivity in CoMP
3025 Since the essential feature of CoMP is signaling exchanged between
3026 eNBs, backhaul latency is the dominating limitation on CoMP
3027 performance.  Generally, JT and JP may benefit from coordinating
3028 the scheduling (distributed or centralized) of different cells,
3029 provided that the signaling exchange between eNBs is limited to
3030 4-10 ms.  For C-RAN the backhaul latency requirement is 250 us,
3031 while for D-RAN it is 4-15 ms.  And this delay requirement is not
3032 only rigid but also absolute, since any uncertainty in delay will
3033 degrade the performance significantly.  Note that some operators'
3034 transport networks are not built to support Layer-3 transfer in the
3035 aggregation layer.  In such a case, the signaling is exchanged
3036 through the EPC, which means delay is expected to be larger.  CoMP
3037 thus has high delay and reliability requirements, which current
3038 mobile networks lack and which may impact their architecture.
3040 7.4. Industrial Automation
3042 Traditional "industrial automation" terminology usually refers to
3043 automation of manufacturing, quality control and material
3044 processing.  The "industrial internet" and "Industry 4.0" [EA12]
3045 are becoming hot topics, based on the Internet of Things.  This
3046 highly flexible and dynamic engineering and manufacturing will
3047 result in many so-called smart approaches such as Smart Factory,
3048 Smart Products, Smart Mobility, and Smart Home/Buildings.  Without
3049 doubt, ultra-high reliability and robustness are a must in data
3050 transmission, especially in closed-loop automation control
3051 applications where the delay requirement is below 1 ms and packet
3052 loss less than 10E-9.  These critical requirements on both latency
3053 and loss cannot be fulfilled by current 4G communication networks.
3054 Moreover, collaboration of industrial automation across remote
3055 campuses over cellular and fixed networks has to be built on an
3056 integrated, cloud-based platform.  Here, deterministic flows should
3057 be guaranteed regardless of the number of other flows in the network.
3058 The lack of this mechanism is the main obstacle to such deployment.
3060 7.5. Vehicle to Vehicle
3062 V2V communication has gained more and more attention in the last
3063 few years and will grow further in the future.  Beyond short-range
3064 direct communication systems, V2V communication also requires
3065 wireless cellular networks for wide coverage and more sophisticated
3066 services.  V2V applications in the area of autonomous driving have
3067 very stringent latency and reliability requirements: the timely
3068 arrival of safety-related information is critical.  In addition,
3069 due to the limited processing capability of an individual vehicle,
3070 passing information to the cloud can provide more functions such as
3071 video processing, audio recognition or navigation systems.  All of
3072 those requirements call for highly reliable connectivity to the
3073 cloud.  On the other hand, provisioning low-latency communication
3074 is naturally one of the main challenges to be overcome, as a result
3075 of high mobility and the high penetration losses caused by the
3076 vehicle itself.  As a result, data transmission with latency below
3077 5 ms and high reliability, with a PER below 10E-6, is demanded.
3078 V2V can benefit from the deployment of deterministic networking
3079 with high reliability.
3081 7.6. Gaming, Media and Virtual Reality
3083 Online gaming and cloud gaming are dominating the gaming market,
3084 since they allow multiple players to play together, with more
3085 challenge and competition.  Connected via the current Internet,
3086 latency can be a big issue that degrades the end users' experience.
3087 There are different types of games, and FPS (First Person Shooter)
3088 gaming has been considered the most latency-sensitive online gaming
3089 due to its high requirements on timing precision and computation of
3090 moving targets.  Virtual reality is also receiving more interest
3091 than ever before as a novel gaming experience.  Delay here can be
3092 very critical to interaction in the virtual world: disagreement
3093 between what is seen and what is felt can cause motion sickness and
3094 affect what happens in the game.  Supporting fast, real-time and
3095 reliable communication in the PHY/MAC layers, the network layer and
3096 the application layer is the main bottleneck for such use cases.
3097 Media content delivery has been, and will become, an even more
3098 important use of the Internet.  Not only high bandwidth demands but
3099 also critical delay and jitter requirements have to be taken into
3100 account to meet user demand.  To keep video and audio smooth, delay
3101 and jitter have to be guaranteed to avoid interruptions, which are
3102 the killer of any online media-on-demand service.  With 4K, and 8K
3103 video in the near future, the delay guarantee becomes more
3104 challenging than ever before.  4K/8K UHD video service requires
3105 6 Gbps-100 Gbps for uncompressed video, with compressed video
3106 starting from 60 Mbps.  The delay requirement is 100 ms, while some
3107 specific interactive applications may require 10 ms [UHD-video].
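For a sense of where the cited 6 Gbps-100 Gbps range comes from, raw
uncompressed video bandwidth can be estimated as width x height x
frame rate x bits per pixel (the 60 fps and 24 bits/pixel below are
illustrative assumptions, not values from the text):

```python
# Rough uncompressed video bandwidth estimate; frame rate and pixel
# depth are assumed, so real deployments will vary widely.
def uncompressed_gbps(width, height, fps=60, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

print(round(uncompressed_gbps(3840, 2160), 1))   # 4K at 60 fps: ~11.9 Gbps
print(round(uncompressed_gbps(7680, 4320), 1))   # 8K at 60 fps: ~47.8 Gbps
```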
3109 8. Use Case Common Elements
3111 Looking at the use cases collectively, the following common desires
3112 for the DetNet-based networks of the future emerge:
3114 o Open standards-based network (replace various proprietary
3115 networks, reduce cost, create multi-vendor market)
3117 o Centrally administered (though such administration may be
3118 distributed for scale and resiliency)
3120 o Integrates L2 (bridged) and L3 (routed) environments (independent
3121 of the Link layer, e.g. can be used with Ethernet, 6TiSCH, etc.)
3123 o Carries both deterministic and best-effort traffic (guaranteed
3124 end-to-end delivery of deterministic flows, deterministic flows
3125 isolated from each other and from best-effort traffic congestion,
3126 unused deterministic BW available to best-effort traffic)
3128 o Ability to add or remove systems from the network with minimal,
3129 bounded service interruption (applications include replacement of
3130 failed devices as well as plug and play)
3132 o Uses standardized data flow information models capable of
3133 expressing deterministic properties (models express device
3134 capabilities, flow properties. Protocols for pushing models from
3135 controller to devices, devices to controller)
3137 o Scalable size (long distances (many km) and short distances
3138 (within a single machine), many hops (radio repeaters, microwave
3139 links, fiber links...) and short hops (single machine))
3141 o Scalable timing parameters and accuracy (bounded latency,
3142 guaranteed worst case maximum, minimum. Low latency, e.g. control
3143 loops may be less than 1ms, but larger for wide area networks)
3145 o High availability (99.9999 percent up time requested, but may be
3146 up to twelve 9s)
3148 o Reliability, redundancy (lives at stake)
3150 o Security (from failures, attackers, misbehaving devices -
3151 sensitive to both packet content and arrival time)
3153 9. Acknowledgments
3155 This document has benefited from reviews, suggestions, comments and
3156 proposed text provided by the following members, listed in
3157 alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oliver
3158 Huang.
3160 10. Informative References
3162 [ACE] IETF, "Authentication and Authorization for Constrained
3163 Environments", .
3166 [bacnetip]
3167 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP",
3168 January 1999.
3170 [CCAMP] IETF, "Common Control and Measurement Plane",
3171 .
3173 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND
3174 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_
3175 and_Enhancement_v2.0, March 2015,
3176 .
3179 [CONTENT_PROTECTION]
3180 Olsen, D., "1722a Content Protection", 2012,
3181 .
3184 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI);
3185 Interface Specification", CPRI Specification V6.1, July
3186 2014, .
3189 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification,
3190 Version 1.2", 2012, .
3192 [DICE] IETF, "DTLS In Constrained Environments",
3193 .
3195 [EA12] Evans, P. and M. Annunziata, "Industrial Internet: Pushing
3196 the Boundaries of Minds and Machines", November 2012.
3198 [ESPN_DC2]
3199 Daley, D., "ESPN's DC2 Scales AVB Large", 2014,
3200 .
3203 [flnet] Japan Electrical Manufacturers' Association, "JEMA 1479 -
3204 English Edition", September 2012.
3206 [Fronthaul]
3207 Chen, D. and T. Mustala, "Ethernet Fronthaul
3208 Considerations", IEEE 1904.3, February 2015,
3209 .
3212 [HART] www.hartcomm.org, "Highway Addressable Remote Transducer,
3213 a group of specifications for industrial process and control
3214 devices administered by the HART Communication Foundation".
3216 [I-D.finn-detnet-architecture]
3217 Finn, N., Thubert, P., and M. Teener, "Deterministic
3218 Networking Architecture", draft-finn-detnet-
3219 architecture-02 (work in progress), November 2015.
3221 [I-D.finn-detnet-problem-statement]
3222 Finn, N. and P. Thubert, "Deterministic Networking Problem
3223 Statement", draft-finn-detnet-problem-statement-04 (work
3224 in progress), October 2015.
3226 [I-D.ietf-6tisch-6top-interface]
3227 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
3228 (6top) Interface", draft-ietf-6tisch-6top-interface-04
3229 (work in progress), July 2015.
3231 [I-D.ietf-6tisch-architecture]
3232 Thubert, P., "An Architecture for IPv6 over the TSCH mode
3233 of IEEE 802.15.4", draft-ietf-6tisch-architecture-09 (work
3234 in progress), November 2015.
3236 [I-D.ietf-6tisch-coap]
3237 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and
3238 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work
3239 in progress), March 2015.
3241 [I-D.ietf-6tisch-terminology]
3242 Palattella, M., Thubert, P., Watteyne, T., and Q. Wang,
3243 "Terminology in IPv6 over the TSCH mode of IEEE
3244 802.15.4e", draft-ietf-6tisch-terminology-06 (work in
3245 progress), November 2015.
3247 [I-D.ietf-ipv6-multilink-subnets]
3248 Thaler, D. and C. Huitema, "Multi-link Subnet Support in
3249 IPv6", draft-ietf-ipv6-multilink-subnets-00 (work in
3250 progress), July 2002.
3252 [I-D.ietf-roll-rpl-industrial-applicability]
3253 Phinney, T., Thubert, P., and R. Assimiti, "RPL
3254 applicability in industrial networks", draft-ietf-roll-
3255 rpl-industrial-applicability-02 (work in progress),
3256 October 2013.
3258 [I-D.ietf-tictoc-1588overmpls]
3259 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L.
3260 Montini, "Transporting Timing messages over MPLS
3261 Networks", draft-ietf-tictoc-1588overmpls-07 (work in
3262 progress), October 2015.
3264 [I-D.kh-spring-ip-ran-use-case]
3265 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing
3266 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02
3267 (work in progress), November 2014.
3269 [I-D.mirsky-mpls-residence-time]
3270 Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S.,
3271 and S. Vainshtein, "Residence Time Measurement in MPLS
3272 network", draft-mirsky-mpls-residence-time-07 (work in
3273 progress), July 2015.
3275 [I-D.svshah-tsvwg-deterministic-forwarding]
3276 Shah, S. and P. Thubert, "Deterministic Forwarding PHB",
3277 draft-svshah-tsvwg-deterministic-forwarding-04 (work in
3278 progress), August 2015.
3280 [I-D.thubert-6lowpan-backbone-router]
3281 Thubert, P., "6LoWPAN Backbone Router", draft-thubert-
3282 6lowpan-backbone-router-03 (work in progress), February
3283 2013.
3285 [I-D.wang-6tisch-6top-sublayer]
3286 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer
3287 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in
3288 progress), November 2015.
3290 [IEC61850-90-12]
3291 TC57 WG10, IEC., "IEC 61850-90-12 TR: Communication
3292 networks and systems for power utility automation - Part
3293 90-12: Wide area network engineering guidelines", 2015.
3295 [IEC62439-3:2012]
3296 TC65, IEC., "IEC 62439-3: Industrial communication
3297 networks - High availability automation networks - Part 3:
3298 Parallel Redundancy Protocol (PRP) and High-availability
3299 Seamless Redundancy (HSR)", 2012.
3301 [IEEE1588]
3302 IEEE, "IEEE Standard for a Precision Clock Synchronization
3303 Protocol for Networked Measurement and Control Systems",
3304 IEEE Std 1588-2008, 2008,
3305 .
3308 [IEEE1722]
3309 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport
3310 Protocol for Time Sensitive Applications in a Bridged
3311 Local Area Network", IEEE Std 1722-2011, 2011,
3312 .
3315 [IEEE19043]
3316 IEEE Standards Association, "IEEE 1904.3 TF", IEEE 1904.3,
3317 2015, .
3319 [IEEE802.1TSNTG]
3320 IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3321 Networks Task Group", March 2013,
3322 .
3324 [IEEE802154]
3325 IEEE standard for Information Technology, "IEEE std.
3326 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC)
3327 and Physical Layer (PHY) Specifications for Low-Rate
3328 Wireless Personal Area Networks".
3330 [IEEE802154e]
3331 IEEE standard for Information Technology, "IEEE std.
3332 802.15.4, Part. 15.4: Wireless Medium Access Control
3333 (MAC) and Physical Layer (PHY) Specifications for
3334 Low-Rate Wireless Personal Area Networks, June 2011 as
3335 amended by IEEE std. 802.15.4e, Part. 15.4: Low-Rate
3336 Wireless Personal Area Networks (LR-WPANs) Amendment 1:
3337 MAC sublayer", April 2012.
3340 [IEEE8021AS]
3341 IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)",
3342 IEEE 802.1AS-2011, 2011,
3343 .
3346 [IEEE8021CM]
3347 Farkas, J., "Time-Sensitive Networking for Fronthaul",
3348 Unapproved PAR, PAR for a New IEEE Standard;
3349 IEEE P802.1CM, April 2015,
3350 .
3353 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation",
3354 .
3356 [ISA100.11a]
3357 ISA/ANSI, "Wireless Systems for Industrial Automation:
3358 Process Control and Related Applications - ISA100.11a-2011
3359 - IEC 62734", 2011, .
3362 [ISO7240-16]
3363 ISO, "ISO 7240-16:2007 Fire detection and alarm systems --
3364 Part 16: Sound system control and indicating equipment",
3365 2007, .
3368 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006.
3370 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0",
3371 1994.
3373 [LTE-Latency]
3374 Johnston, S., "LTE Latency: How does it compare to other
3375 technologies", March 2014,
3376 .
3379 [MEF] MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells",
3380 MEF 22.1.1, July 2014,
3381 .
3384 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and
3385 wireless system", ICT-317669-METIS/D1.1, April 2013, .
3389 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL
3390 SPECIFICATION V1.1b", December 2006.
3392 [net5G] Ericsson, "5G Radio Access, Challenges for 2020 and
3393 Beyond", Ericsson white paper wp-5g, June 2013,
3394 .
3396 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0,
3397 February 2015, .
3400 [PCE] IETF, "Path Computation Element",
3401 .
3403 [profibus]
3404 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001.
3406 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
3407 Requirement Levels", BCP 14, RFC 2119,
3408 DOI 10.17487/RFC2119, March 1997,
3409 .
3411 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6
3412 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460,
3413 December 1998, .
3415 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
3416 "Definition of the Differentiated Services Field (DS
3417 Field) in the IPv4 and IPv6 Headers", RFC 2474,
3418 DOI 10.17487/RFC2474, December 1998,
3419 .
3421 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
3422 Label Switching Architecture", RFC 3031,
3423 DOI 10.17487/RFC3031, January 2001,
3424 .
3426 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
3427 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
3428 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
3429 .
3431 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
3432 Metric for IP Performance Metrics (IPPM)", RFC 3393,
3433 DOI 10.17487/RFC3393, November 2002,
3434 .
3436 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between
3437 Information Models and Data Models", RFC 3444,
3438 DOI 10.17487/RFC3444, January 2003,
3439 .
3441 [RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)",
3442 RFC 3972, DOI 10.17487/RFC3972, March 2005,
3443 .
3445 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation
3446 Edge-to-Edge (PWE3) Architecture", RFC 3985,
3447 DOI 10.17487/RFC3985, March 2005,
3448 .
3450 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing
3451 Architecture", RFC 4291, DOI 10.17487/RFC4291, February
3452 2006, .
3454 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure-
3455 Agnostic Time Division Multiplexing (TDM) over Packet
3456 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006,
3457 .
3459 [RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903,
3460 DOI 10.17487/RFC4903, June 2007,
3461 .
3463 [RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6
3464 over Low-Power Wireless Personal Area Networks (6LoWPANs):
3465 Overview, Assumptions, Problem Statement, and Goals",
3466 RFC 4919, DOI 10.17487/RFC4919, August 2007,
3467 .
3469 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and
3470 P. Pate, "Structure-Aware Time Division Multiplexed (TDM)
3471 Circuit Emulation Service over Packet Switched Network
3472 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007,
3473 .
3475 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi,
3476 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087,
3477 DOI 10.17487/RFC5087, December 2007,
3478 .
3480 [RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6
3481 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282,
3482 DOI 10.17487/RFC6282, September 2011,
3483 .
3485 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J.,
3486 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur,
3487 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for
3488 Low-Power and Lossy Networks", RFC 6550,
3489 DOI 10.17487/RFC6550, March 2012,
3490 .
3492 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N.,
3493 and D. Barthel, "Routing Metrics Used for Path Calculation
3494 in Low-Power and Lossy Networks", RFC 6551,
3495 DOI 10.17487/RFC6551, March 2012,
3496 .
3498 [RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C.
3499 Bormann, "Neighbor Discovery Optimization for IPv6 over
3500 Low-Power Wireless Personal Area Networks (6LoWPANs)",
3501 RFC 6775, DOI 10.17487/RFC6775, November 2012,
3502 .
3504 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using
3505 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the
3506 Internet of Things (IoT): Problem Statement", RFC 7554,
3507 DOI 10.17487/RFC7554, May 2015,
3508 .
3510 [SRP_LATENCY]
3511 Gunther, C., "Specifying SRP Latency", 2014,
3512 .
3515 [STUDIO_IP]
3516 Mace, G., "IP Networked Studio Infrastructure for
3517 Synchronized & Real-Time Multimedia Transmissions", 2007,
3518 .
3521 [SyncE] ITU-T, "G.8261: Timing and synchronization aspects in
3522 packet networks", Recommendation G.8261, August 2013,
3523 .
3525 [TEAS] IETF, "Traffic Engineering Architecture and Signaling",
3526 .
3528 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements
3529 for Evolved Universal Terrestrial Radio Access Network
3530 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013.
3532 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception
3533 (FDD)", 3GPP TS 25.104 3.14.0, March 2007.
3535 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access
3536 (E-UTRA); Base Station (BS) radio transmission and
3537 reception", 3GPP TS 36.104 10.11.0, July 2013.
3539 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access
3540 (E-UTRA); Requirements for support of radio resource
3541 management", 3GPP TS 36.133 12.7.0, April 2015.
3543 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access
3544 (E-UTRA); Physical channels and modulation", 3GPP
3545 TS 36.211 10.7.0, March 2013.
3547 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA)
3548 and Evolved Universal Terrestrial Radio Access Network
3549 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300
3550 10.11.0, September 2013.
3552 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3553 Networks Task Group", 2013,
3554 .
3556 [UHD-video]
3557 Holub, P., "Ultra-High Definition Videos and Their
3558 Applications over the Network", The 7th International
3559 Symposium on VICTORIES Project PetrHolub_presentation,
3560 October 2014, .
3563 [WirelessHART]
3564 www.hartcomm.org, "Industrial Communication Networks -
3565 Wireless Communication Network and Communication Profiles
3566 - WirelessHART - IEC 62591", 2010.
3568 Authors' Addresses
3570 Ethan Grossman (editor)
3571 Dolby Laboratories, Inc.
3572 1275 Market Street
3573 San Francisco, CA 94103
3574 USA
3576 Phone: +1 415 645 4726
3577 Email: ethan.grossman@dolby.com
3578 URI: http://www.dolby.com
3580 Craig Gunther
3581 Harman International
3582 10653 South River Front Parkway
3583 South Jordan, UT 84095
3584 USA
3586 Phone: +1 801 568-7675
3587 Email: craig.gunther@harman.com
3588 URI: http://www.harman.com
3589 Pascal Thubert
3590 Cisco Systems, Inc
3591 Building D
3592 45 Allee des Ormes - BP1200
3593 MOUGINS - Sophia Antipolis 06254
3594 FRANCE
3596 Phone: +33 497 23 26 34
3597 Email: pthubert@cisco.com
3599 Patrick Wetterwald
3600 Cisco Systems
3601 45 Allees des Ormes
3602 Mougins 06250
3603 FRANCE
3605 Phone: +33 4 97 23 26 36
3606 Email: pwetterw@cisco.com
3608 Jean Raymond
3609 Hydro-Quebec
3610 1500 University
3611 Montreal H3A3S7
3612 Canada
3614 Phone: +1 514 840 3000
3615 Email: raymond.jean@hydro.qc.ca
3617 Jouni Korhonen
3618 Broadcom Corporation
3619 3151 Zanker Road
3620 San Jose, CA 95134
3621 USA
3623 Email: jouni.nospam@gmail.com
3625 Yu Kaneko
3626 Toshiba
3627 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi
3628 Kanagawa, Japan
3630 Email: yu1.kaneko@toshiba.co.jp
3631 Subir Das
3632 Applied Communication Sciences
3633 150 Mount Airy Road, Basking Ridge
3634 New Jersey, 07920, USA
3636 Email: sdas@appcomsci.com
3638 Yiyong Zha
3639 Huawei Technologies
3641 Email: zhayiyong@huawei.com