Internet Engineering Task Force                        E. Grossman, Ed.
Internet-Draft                                                    DOLBY
Intended status: Informational                               C. Gunther
Expires: August 19, 2016                                         HARMAN
                                                             P. Thubert
                                                          P. Wetterwald
                                                                  CISCO
                                                             J. Raymond
                                                           HYDRO-QUEBEC
                                                            J. Korhonen
                                                               BROADCOM
                                                              Y. Kaneko
                                                                Toshiba
                                                                 S. Das
                                         Applied Communication Sciences
                                                                 Y. Zha
                                                                 HUAWEI
                                                               B. Varga
                                                              J. Farkas
                                                               Ericsson
                                                               F. Goetz
                                                             J. Schmitt
                                                                Siemens
                                                      February 16, 2016

                  Deterministic Networking Use Cases
                    draft-ietf-detnet-use-cases-03

Abstract

This draft documents requirements in several diverse industries to establish multi-hop paths for characterized flows with deterministic properties. In this context, deterministic implies that streams providing guaranteed bandwidth and latency can be established from either a Layer 2 or a Layer 3 (IP) interface, and that such streams can co-exist on an IP network with best-effort traffic.

Additional requirements include optional redundant paths, very high reliability paths, time synchronization, and clock distribution. Industries considered include wireless for industrial applications, professional audio, electrical utilities, building automation systems, radio/mobile access networks, automotive, and gaming.

For each case, this document identifies the application, representative solutions used today, and the new uses that an IETF DetNet solution may enable.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on August 19, 2016.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Pro Audio Use Cases
      2.1. Introduction
      2.2. Fundamental Stream Requirements
         2.2.1. Guaranteed Bandwidth
         2.2.2. Bounded and Consistent Latency
            2.2.2.1. Optimizations
      2.3. Additional Stream Requirements
         2.3.1. Deterministic Time to Establish Streaming
         2.3.2. Use of Unused Reservations by Best-Effort Traffic
         2.3.3. Layer 3 Interconnecting Layer 2 Islands
         2.3.4. Secure Transmission
         2.3.5. Redundant Paths
         2.3.6. Link Aggregation
         2.3.7. Traffic Segregation
            2.3.7.1. Packet Forwarding Rules, VLANs and Subnets
            2.3.7.2. Multicast Addressing (IPv4 and IPv6)
      2.4. Integration of Reserved Streams into IT Networks
      2.5. Security Considerations
         2.5.1. Denial of Service
         2.5.2. Control Protocols
      2.6. A State-of-the-Art Broadcast Installation Hits Technology Limits
   3. Utility Telecom Use Cases
      3.1. Overview
      3.2. Telecommunications Trends and General Telecommunications Requirements
         3.2.1. General Telecommunications Requirements
            3.2.1.1. Migration to Packet-Switched Network
         3.2.2. Applications, Use cases and traffic patterns
            3.2.2.1. Transmission use cases
            3.2.2.2. Distribution use case
            3.2.2.3. Generation use case
         3.2.3. Specific Network topologies of Smart Grid Applications
         3.2.4. Precision Time Protocol
      3.3. IANA Considerations
      3.4. Security Considerations
         3.4.1. Current Practices and Their Limitations
         3.4.2. Security Trends in Utility Networks
   4. Building Automation Systems
      4.1. Use Case Description
      4.2. Building Automation Systems Today
         4.2.1. BAS Architecture
         4.2.2. BAS Deployment Model
         4.2.3. Use Cases for Field Networks
            4.2.3.1. Environmental Monitoring
            4.2.3.2. Fire Detection
            4.2.3.3. Feedback Control
         4.2.4. Security Considerations
      4.3. BAS Future
      4.4. BAS Asks
   5. Wireless for Industrial Use Cases
      5.1. Introduction
      5.2. Terminology
      5.3. 6TiSCH Overview
         5.3.1. TSCH and 6top
         5.3.2. SlotFrames and Priorities
         5.3.3. Schedule Management by a PCE
         5.3.4. Track Forwarding
            5.3.4.1. Transport Mode
            5.3.4.2. Tunnel Mode
            5.3.4.3. Tunnel Metadata
      5.4. Operations of Interest for DetNet and PCE
         5.4.1. Packet Marking and Handling
            5.4.1.1. Tagging Packets for Flow Identification
            5.4.1.2. Replication, Retries and Elimination
            5.4.1.3. Differentiated Services Per-Hop-Behavior
         5.4.2. Topology and capabilities
      5.5. Security Considerations
   6. Cellular Radio Use Cases
      6.1. Use Case Description
         6.1.1. Network Architecture
         6.1.2. Time Synchronization Requirements
         6.1.3. Time-Sensitive Stream Requirements
         6.1.4. Security Considerations
      6.2. Cellular Radio Networks Today
      6.3. Cellular Radio Networks Future
      6.4. Cellular Radio Networks Asks
   7. Industrial M2M
      7.1. Use Case Description
      7.2. Industrial M2M Communication Today
         7.2.1. Transport Parameters
         7.2.2. Stream Creation and Destruction
      7.3. Industrial M2M Future
      7.4. Industrial M2M Asks
   8. Other Use Cases
      8.1. Introduction
      8.2. Critical Delay Requirements
      8.3. Coordinated multipoint processing (CoMP)
         8.3.1. CoMP Architecture
         8.3.2. Delay Sensitivity in CoMP
      8.4. Industrial Automation
      8.5. Vehicle to Vehicle
      8.6. Gaming, Media and Virtual Reality
   9. Use Case Common Elements
   10. Acknowledgments
      10.1. Pro Audio
      10.2. Utility Telecom
      10.3. Building Automation Systems
      10.4. Wireless for Industrial
      10.5. Cellular Radio
      10.6. Industrial M2M
      10.7. Other
   11. Informative References
   Authors' Addresses
1. Introduction

This draft presents use cases from diverse industries which have in common a need for deterministic streams, but which also differ notably in their network topologies and specific desired behavior. Together, they provide broad industry context for DetNet and a yardstick against which proposed DetNet designs can be measured (to what extent does a proposed design satisfy these various use cases?).

For DetNet, use cases explicitly do not define requirements; the DetNet WG will consider the use cases, decide which elements are in scope for DetNet, and incorporate the results into future drafts. Similarly, the DetNet use case draft explicitly does not suggest any specific design, architecture or protocols, which will be topics of future drafts.

We present for each use case the answers to the following questions:

o  What is the use case?

o  How is it addressed today?

o  How would you like it to be addressed in the future?

o  What do you want the IETF to deliver?

The level of detail in each use case should be sufficient to express the relevant elements of the use case, but not more.

At the end we consider the use cases collectively, and examine the most significant goals they have in common.

2. Pro Audio Use Cases

2.1. Introduction

The professional audio and video industry includes music and film content creation, broadcast, cinema, and live exposition, as well as public address, media and emergency systems at large venues (airports, stadiums, churches, theme parks). These industries have already gone through the transition of audio and video signals from analog to digital; however, the interconnect systems remain primarily point-to-point, with a single signal (or a small number of signals) per link, interconnected with purpose-built hardware.

These industries are now attempting to transition to a packet-based infrastructure for distributing audio and video in order to reduce cost, increase routing flexibility, and integrate with existing IT infrastructure.

However, several requirements for making a network the primary infrastructure for audio and video are not met by today's networks, and these are our concern in this draft.

The principal requirement is that pro audio and video applications become able to establish streams that provide guaranteed (bounded) bandwidth and latency from the Layer 3 (IP) interface. Such streams can be created today within standards-based Layer 2 islands; however, these are not sufficient to enable effective distribution over wider areas (for example, broadcast events that span wide geographical areas).

Some proprietary systems have been created which enable deterministic streams at Layer 3; however, they are engineered networks in that they require careful configuration to operate, often require that the system be over-designed, and implicitly assume that all devices on the network voluntarily play by the rules of that network. Enabling these industries to successfully transition to an interoperable, multi-vendor, packet-based infrastructure requires effective open standards, and we believe that establishing relevant IETF standards is a crucial factor.
It would be highly desirable if such streams could be routed over the open Internet; however, even intermediate solutions with more limited scope (such as enterprise networks) can provide a substantial improvement over today's networks, and a solution that only provides for the enterprise network scenario is an acceptable first step.

We also present finer-grained requirements of the audio and video industries, such as safety and security, redundant paths, devices with limited computing resources on the network, and making reserved stream bandwidth available to best-effort traffic when the stream is not currently in use.

2.2. Fundamental Stream Requirements

The fundamental stream properties are guaranteed bandwidth and deterministic latency, as described in this section. Additional stream requirements are described in a subsequent section.

2.2.1. Guaranteed Bandwidth

Transmitting audio and video streams is unlike common file transfer activities because guaranteed delivery cannot be achieved by retrying the transmission; by the time the missing or corrupt packet has been identified it is too late to execute a retry, and stream playback is interrupted, which is unacceptable in, for example, a live concert. In some contexts, large amounts of buffering can be used to provide enough delay to allow time for one or more retries; however, this is not an effective solution when live interaction is involved, and it is not considered an acceptable general solution for pro audio and video. (Have you ever tried speaking into a microphone through a sound system that has an echo coming back at you? It makes it almost impossible to speak clearly.)

Providing a way to reserve a specific amount of bandwidth for a given stream is a key requirement.

2.2.2. Bounded and Consistent Latency

Latency in this context means the amount of time that passes between when a signal is sent over a stream and when it is received, for example the delay between when you speak into a microphone and when your voice emerges from the speaker. Any delay longer than about 10-15 milliseconds is noticeable by most live performers, and greater latency makes the system unusable because it prevents them from playing in time with the other players (see slide 6 of [SRP_LATENCY]).

The 15 ms latency bound is made even more challenging because network-based music production with live electric instruments often uses multiple stages of signal processing connected in series (for example, from a guitar through a chain of digital effects processors). In this case the latencies add, so the latencies of the individual stages must together remain below 15 ms.

In some situations it is acceptable for content from a live remote site to be delayed at the local location, allowing a statistically acceptable amount of latency in order to reduce jitter. However, once the content begins playing at the local location, any audio artifacts caused by the local network are unacceptable, especially where a live local performer is mixed into the feed from the remote location.
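Because per-stage latencies are additive, the end-to-end budget can be checked with simple arithmetic. The following sketch is illustrative only; the stage names and delay values are assumptions chosen for the example, not figures from any deployed system.

```python
# Illustrative check of an end-to-end latency budget for series-connected
# audio processing stages (all per-stage values are hypothetical).

BUDGET_MS = 15.0  # approximate bound noticeable by live performers

stages_ms = {
    "guitar A/D conversion": 1.0,
    "network hop to effects processor": 2.0,
    "effects processing": 3.0,
    "network hop to mixing console": 2.0,
    "mixing and D/A conversion": 4.0,
}

total_ms = sum(stages_ms.values())
headroom_ms = BUDGET_MS - total_ms

print(f"total latency: {total_ms:.1f} ms (budget {BUDGET_MS:.1f} ms)")
if headroom_ms < 0:
    print(f"budget exceeded by {-headroom_ms:.1f} ms; "
          "a stage or hop must be removed or made faster")
else:
    print(f"remaining headroom: {headroom_ms:.1f} ms")
```

The same bookkeeping applies when deciding whether an additional network hop or processing stage can be inserted into an existing stream path.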
325 In addition to being bounded to within some predictable and 326 acceptable amount of time (which may be 15 milliseconds or more or 327 less depending on the application) the latency also has to be 328 consistent. For example when playing a film consisting of a video 329 stream and audio stream over a network, those two streams must be 330 synchronized so that the voice and the picture match up. A common 331 tolerance for audio/video sync is one NTSC video frame (about 33ms) 332 and to maintain the audience perception of correct lip sync the 333 latency needs to be consistent within some reasonable tolerance, for 334 example 10%. 336 A common architecture for synchronizing multiple streams that have 337 different paths through the network (and thus potentially different 338 latencies) is to enable measurement of the latency of each path, and 339 have the data sinks (for example speakers) buffer (delay) all packets 340 on all but the slowest path. Each packet of each stream is assigned 341 a presentation time which is based on the longest required delay. 342 This implies that all sinks must maintain a common time reference of 343 sufficient accuracy, which can be achieved by any of various 344 techniques. 346 This type of architecture is commonly implemented using a central 347 controller that determines path delays and arbitrates buffering 348 delays. 350 2.2.2.1. Optimizations 352 The controller might also perform optimizations based on the 353 individual path delays, for example sinks that are closer to the 354 source can inform the controller that they can accept greater latency 355 since they will be buffering packets to match presentation times of 356 farther away sinks. The controller might then move a stream 357 reservation on a short path to a longer path in order to free up 358 bandwidth for other critical streams on that short path. See slides 359 3-5 of [SRP_LATENCY]. 361 Additional optimization can be achieved in cases where sinks have 362 differing latency requirements, for example in a live outdoor concert 363 the speaker sinks have stricter latency requirements than the 364 recording hardware sinks. See slide 7 of [SRP_LATENCY]. 366 Device cost can be reduced in a system with guaranteed reservations 367 with a small bounded latency due to the reduced requirements for 368 buffering (i.e. memory) on sink devices. For example, a theme park 369 might broadcast a live event across the globe via a layer 3 protocol; 370 in such cases the size of the buffers required is proportional to the 371 latency bounds and jitter caused by delivery, which depends on the 372 worst case segment of the end-to-end network path. For example on 373 todays open internet the latency is typically unacceptable for audio 374 and video streaming without many seconds of buffering. In such 375 scenarios a single gateway device at the local network that receives 376 the feed from the remote site would provide the expensive buffering 377 required to mask the latency and jitter issues associated with long 378 distance delivery. Sink devices in the local location would have no 379 additional buffering requirements, and thus no additional costs, 380 beyond those required for delivery of local content. The sink device 381 would be receiving the identical packets as those sent by the source 382 and would be unaware that there were any latency or jitter issues 383 along the path. 385 2.3. 
Additional Stream Requirements 387 The requirements in this section are more specific yet are common to 388 multiple audio and video industry applications. 390 2.3.1. Deterministic Time to Establish Streaming 392 Some audio systems installed in public environments (airports, 393 hospitals) have unique requirements with regards to health, safety 394 and fire concerns. One such requirement is a maximum of 3 seconds 395 for a system to respond to an emergency detection and begin sending 396 appropriate warning signals and alarms without human intervention. 397 For this requirement to be met, the system must support a bounded and 398 acceptable time from a notification signal to specific stream 399 establishment. For further details see [ISO7240-16]. 401 Similar requirements apply when the system is restarted after a power 402 cycle, cable re-connection, or system reconfiguration. 404 In many cases such re-establishment of streaming state must be 405 achieved by the peer devices themselves, i.e. without a central 406 controller (since such a controller may only be present during 407 initial network configuration). 409 Video systems introduce related requirements, for example when 410 transitioning from one camera feed to another. Such systems 411 currently use purpose-built hardware to switch feeds smoothly, 412 however there is a current initiative in the broadcast industry to 413 switch to a packet-based infrastructure (see [STUDIO_IP] and the ESPN 414 DC2 use case described below). 416 2.3.2. Use of Unused Reservations by Best-Effort Traffic 418 In cases where stream bandwidth is reserved but not currently used 419 (or is under-utilized) that bandwidth must be available to best- 420 effort (i.e. non-time-sensitive) traffic. For example a single 421 stream may be nailed up (reserved) for specific media content that 422 needs to be presented at different times of the day, ensuring timely 423 delivery of that content, yet in between those times the full 424 bandwidth of the network can be utilized for best-effort tasks such 425 as file transfers. 427 This also addresses a concern of IT network administrators that are 428 considering adding reserved bandwidth traffic to their networks that 429 users will just reserve a ton of bandwidth and then never un-reserve 430 it even though they are not using it, and soon they will have no 431 bandwidth left. 433 2.3.3. Layer 3 Interconnecting Layer 2 Islands 435 As an intermediate step (short of providing guaranteed bandwidth 436 across the open internet) it would be valuable to provide a way to 437 connect multiple Layer 2 networks. For example layer 2 techniques 438 could be used to create a LAN for a single broadcast studio, and 439 several such studios could be interconnected via layer 3 links. 441 2.3.4. Secure Transmission 443 Digital Rights Management (DRM) is very important to the audio and 444 video industries. Any time protected content is introduced into a 445 network there are DRM concerns that must be maintained (see 446 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of 447 network technology, however there are cases when a secure link 448 supporting authentication and encryption is required by content 449 owners to carry their audio or video content when it is outside their 450 own secure environment (for example see [DCI]). 452 As an example, two techniques are Digital Transmission Content 453 Protection (DTCP) and High-Bandwidth Digital Content Protection 454 (HDCP). 
HDCP content is not approved for retransmission within any other type of DRM, while DTCP may be retransmitted under HDCP. Therefore, if the source of a stream is outside of the network and uses HDCP protection, it is only allowed to be placed on the network with that same HDCP protection.

2.3.5. Redundant Paths

On-air and other live media streams must be backed up with redundant links that seamlessly act to deliver the content when the primary link fails for any reason. In point-to-point systems this is provided by an additional point-to-point link; the analogous requirement in a packet-based system is to provide an alternate path through the network such that no individual link can bring down the system.

2.3.6. Link Aggregation

For transmitting streams that require more bandwidth than a single link in the target network can support, link aggregation is a technique for combining (aggregating) the bandwidth available on multiple physical links to create a single logical link of the required bandwidth. However, if aggregation is to be used, the network controller (or equivalent) must be able to determine the maximum latency of any path through the aggregate link (see the Bounded and Consistent Latency section above).

2.3.7. Traffic Segregation

Sink devices may be low-cost devices with limited processing power. In order not to overwhelm the CPUs in these devices, it is important to limit the amount of traffic that these devices must process.

As an example, consider the use of individual seat speakers in a cinema. These speakers are typically required to be cost-reduced, since the quantities in a single theater can reach hundreds of seats. Discovery protocols alone in a one-thousand-seat theater can generate enough broadcast traffic to overwhelm a low-powered CPU. Thus an installation like this will benefit greatly from some type of traffic segregation that can define groups of seats to reduce traffic within each group. All seats in the theater must still be able to communicate with a central controller.

There are many techniques that can be used to support this requirement, including (but not limited to) the following examples.

2.3.7.1. Packet Forwarding Rules, VLANs and Subnets

Packet forwarding rules can be used to eliminate some extraneous streaming traffic from reaching potentially low-powered sink devices; however, there may be other types of broadcast traffic that should be eliminated using other means, for example VLANs or IP subnets.

2.3.7.2. Multicast Addressing (IPv4 and IPv6)

Multicast addressing is commonly used to keep bandwidth utilization of shared links to a minimum.

Because of the MAC address forwarding nature of Layer 2 bridges, it is important that a multicast MAC address is only associated with one stream. This will prevent reservations from forwarding packets from one stream down a path that has no interested sinks simply because there is another stream on that same path that shares the same multicast MAC address.

Since each multicast MAC address can represent 32 different IPv4 multicast addresses, a process must be put in place to make sure such sharing does not occur. Requiring the use of IPv6 addresses can achieve this; however, due to the continued prevalence of IPv4, solutions that are effective for IPv4 installations are also required.
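The 32-to-1 ambiguity arises because the standard mapping places only the low-order 23 bits of an IPv4 multicast group address into a MAC address under the 01:00:5e prefix, discarding 5 of the 28 group bits. A minimal sketch of that mapping follows; the example group addresses are chosen purely for illustration.

```python
import ipaddress

def ipv4_multicast_mac(addr: str) -> str:
    """Map an IPv4 multicast address to its Ethernet multicast MAC address.

    Only the low-order 23 bits of the group address are carried in the MAC
    (prefix 01:00:5e), so 32 distinct IPv4 groups share each MAC address.
    """
    ip = ipaddress.IPv4Address(addr)
    if not ip.is_multicast:
        raise ValueError(f"{addr} is not an IPv4 multicast address")
    low23 = int(ip) & 0x7FFFFF
    octets = [0x01, 0x00, 0x5E, low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)

# Three different multicast groups collide on the same MAC address, so a
# bridge forwarding on the MAC address alone cannot keep the streams apart.
for group in ("224.1.1.1", "225.1.1.1", "224.129.1.1"):
    print(group, "->", ipv4_multicast_mac(group))
```

An IPv6 multicast address, by contrast, maps its low-order 32 bits under the 33:33 MAC prefix, which makes such overlaps far less likely in practice; this is why requiring IPv6 can address the problem, while IPv4 installations need an address coordination process instead.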
2.4. Integration of Reserved Streams into IT Networks

A commonly cited goal of moving to a packet-based media infrastructure is that costs can be reduced by using off-the-shelf, commodity network hardware. In addition, economies of scale can be realized by combining media infrastructure with IT infrastructure. In keeping with these goals, stream reservation technology should be compatible with existing protocols, and should not compromise use of the network for best-effort (non-time-sensitive) traffic.

2.5. Security Considerations

Many industries that are moving from the point-to-point world to the digital network world have little understanding of the pitfalls that they can create for themselves with improperly implemented network infrastructure. DetNet should consider ways to provide security against DoS attacks in solutions directed at these markets. Some considerations are given here as examples of ways that we can help new users avoid common pitfalls.

2.5.1. Denial of Service

One security pitfall that this author is aware of involves the use of technology that allows a presenter to throw content from their tablet or smartphone onto the A/V system, which is then viewed by all those in attendance. The facility introducing this technology was quite excited to offer such modern flexibility to those who came to speak. What they had not realized was that, since no security was put in place around this technology, it left a hole in the system that allowed other attendees to "throw" their own content onto the A/V system.

2.5.2. Control Protocols

Professional audio systems can include amplifiers that are capable of generating hundreds or thousands of watts of audio power which, if used incorrectly, can cause hearing damage to those in the vicinity. Apart from the usual care required of system operators to prevent such incidents, the network traffic that controls these devices must be secured (as with any sensitive application traffic). In addition, it would be desirable if the configuration protocols that are used to create the network paths used by the professional audio traffic could be designed to protect devices that are not meant to receive high-amplitude content from having such potentially damaging signals routed to them.

2.6. A State-of-the-Art Broadcast Installation Hits Technology Limits

ESPN recently constructed a state-of-the-art 194,000 sq ft, $125 million broadcast studio called DC2. The DC2 network is capable of handling 46 Tbps of throughput with 60,000 simultaneous signals. Inside the facility are 1,100 miles of fiber feeding four audio control rooms. (See details at [ESPN_DC2].)

In designing DC2, ESPN replaced as much point-to-point technology as possible with packet-based technology. Seven individual studios were constructed using Layer 2 LANs (based on IEEE 802.1 AVB) that were entirely effective at routing audio within each LAN, and the results were very satisfactory; however, to interconnect these Layer 2 LAN islands they ended up using dedicated links, because no standards-based routing solution was available.

This is precisely the motivation for developing these standards: customers are ready and able to use them.

3. Utility Telecom Use Cases
3.1. Overview

[I-D.finn-detnet-problem-statement] defines the characteristics of a deterministic flow as a data communication flow with a bounded latency, extraordinarily low frame loss, and a very narrow jitter. This document intends to define the utility requirements for deterministic networking.

Utility Telecom Networks

The business and technology trends that are sweeping the utility industry will drastically transform the utility business from the way it has been for many decades. At the core of many of these changes is a drive to modernize the electrical grid with an integrated telecommunications infrastructure. However, interoperability concerns, legacy networks, disparate tools, and stringent security requirements all add complexity to the grid transformation. Given the range and diversity of the requirements that should be addressed by the next-generation telecommunications infrastructure, utilities need to adopt a holistic architectural approach to integrate the electrical grid with digital telecommunications across the entire power delivery chain.

Many utilities still rely on complex environments formed of multiple application-specific, proprietary networks. Information is siloed between operational areas. This prevents utility operations from realizing the operational efficiency benefits, visibility, and functional integration of operational information across grid applications and data networks. The key to modernizing grid telecommunications is to provide a common, adaptable, multi-service network infrastructure for the entire utility organization. Such a network serves as the platform for current capabilities while enabling future expansion of the network to accommodate new applications and services.

To meet this diverse set of requirements, both today and in the future, the next-generation utility telecommunications network will be based on an open-standards-based IP architecture. An end-to-end IP architecture takes advantage of nearly three decades of IP technology development, facilitating interoperability across disparate networks and devices, as has already been demonstrated in many mission-critical and highly secure networks.

The IEC (International Electrotechnical Commission) and various National Committees have mandated a specific ad hoc group (AHG8) to define the migration strategy to IPv6 for all the IEC TC57 power automation standards. IPv6 is seen as the obvious future telecommunications technology for the Smart Grid. The ad hoc group disclosed its conclusions to the IEC coordination group at the end of 2014.

It is imperative that utilities participate in standards development bodies to influence the development of future solutions and to benefit from the shared experiences of other utilities and vendors.

3.2. Telecommunications Trends and General Telecommunications Requirements

These general telecommunications requirements are over and above the specific requirements of the use cases that have been addressed so far. They include both current and future telecommunications-related requirements that should be factored into the network architecture and design.
3.2.1. General Telecommunications Requirements

o  IP Connectivity everywhere

o  Monitoring services everywhere and from different remote centers

o  Move services to a virtual data center

o  Unify access to applications / information from the corporate network

o  Unify services

o  Unified Communications Solutions

o  Mix of fiber and microwave technologies - obsolescence of SONET/SDH or TDM

o  Standardize grid telecommunications protocols to open standards to ensure interoperability

o  Reliable Telecommunications for Transmission and Distribution Substations

o  IEEE 1588 time synchronization Client / Server Capabilities

o  Integration of Multicast Design

o  QoS Requirements Mapping

o  Enable Future Network Expansion

o  Substation Network Resilience

o  Fast Convergence Design

o  Scalable Headend Design

o  Define Service Level Agreements (SLA) and Enable SLA Monitoring

o  Integration of 3G/4G Technologies and future technologies

o  Ethernet Connectivity for Station Bus Architecture

o  Ethernet Connectivity for Process Bus Architecture

o  Protection, teleprotection and PMU (Phasor Measurement Unit) on IP

3.2.1.1. Migration to Packet-Switched Network

Throughout the world, utilities are increasingly planning for a future based on smart grid applications requiring advanced telecommunications systems. Many of these applications utilize packet connectivity for communicating information and control signals across the utility's Wide Area Network (WAN), made possible by technologies such as multiprotocol label switching (MPLS). The data that traverses the utility WAN includes:

o  Grid monitoring, control, and protection data

o  Non-control grid data (e.g. asset data for condition-based monitoring)

o  Physical safety and security data (e.g. voice and video)

o  Remote worker access to corporate applications (voice, maps, schematics, etc.)

o  Field area network backhaul for smart metering and distribution grid management

o  Enterprise traffic (email, collaboration tools, business applications)

WANs support this wide variety of traffic to and from substations, the transmission and distribution grid, generation sites, between control centers, and between work locations and data centers. To maintain this rapidly expanding set of applications, many utilities are taking steps to evolve their present time-division multiplexing (TDM) and frame relay based infrastructures to packet systems. Packet-based networks are designed to provide greater functionality and higher levels of service for applications, while continuing to deliver reliability and deterministic (real-time) traffic support.

3.2.2. Applications, Use cases and traffic patterns

Among the numerous applications and use cases that a utility deploys today, many rely on high availability and deterministic behaviour of the telecommunications networks. Protection use cases and generation control are the most demanding and cannot rely on a best-effort approach.

3.2.2.1. Transmission use cases

Protection means not only the protection of the human operator but also the protection of the electrical equipment and the preservation of the stability and frequency of the grid.
If a fault occurs during the transmission or distribution of electricity, serious harm could be caused to human operators as well as to very costly electrical equipment, and the grid could be perturbed, leading to blackouts. The time and reliability requirements are therefore very stringent in order to avoid dramatic impacts on the electrical infrastructure.

3.2.2.1.1. Tele Protection

The key criteria for measuring Teleprotection performance are command transmission time, dependability and security. These criteria are defined by the IEC standard 60834 as follows:

o  Transmission time (Speed): The time between the moment where the state changes at the transmitter input and the moment of the corresponding change at the receiver output, including propagation delay. The overall operating time for a Teleprotection system includes the time for initiating the command at the transmitting end, the propagation delay over the network (including equipment), and the selection and decision time at the receiving end, including any additional delay due to a noisy environment.

o  Dependability: The ability to issue and receive valid commands in the presence of interference and/or noise, by minimizing the probability of missing command (PMC). Dependability targets are typically set for a specific bit error rate (BER) level.

o  Security: The ability to prevent false tripping due to a noisy environment, by minimizing the probability of unwanted commands (PUC). Security targets are also set for a specific bit error rate (BER) level.

Additional key elements that may impact Teleprotection performance include the bandwidth rate of the Teleprotection system and its resiliency or failure recovery capacity. Transmission time, bandwidth utilization and resiliency are directly linked to the telecommunications equipment and the connections that are used to transfer the commands between relays.

3.2.2.1.1.1. Latency Budget Consideration

Delay requirements for utility networks may vary depending upon a number of parameters, such as the specific protection equipment used. Most power line equipment can tolerate short circuits or faults for up to approximately five power cycles before sustaining irreversible damage or affecting other segments in the network. This translates to a total fault clearance time of 100 ms. As a safety precaution, however, the actual operation time of protection systems is limited to 70-80 percent of this period, including fault recognition time, command transmission time and line breaker switching time. Some system components, such as large electromechanical switches, require a particularly long time to operate and take up the majority of the total clearance time, leaving only a 10 ms window for the telecommunications part of the protection scheme, independent of the distance to travel. Given the sensitivity of the issue, new networks impose requirements that are even more stringent: IEC standard 61850 limits the transfer time for protection messages to 1/4 - 1/2 cycle or 4 - 8 ms (for 60 Hz lines) for the most critical messages.

3.2.2.1.1.2. Asymmetric Delay

In addition to minimal transmission delay, a differential protection telecommunications channel must be synchronous, i.e., it must experience symmetrical channel delay in the transmit and receive paths. This requires special attention in jitter-prone packet networks.
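To make these budgets concrete, the sketch below converts the power-cycle figures above into milliseconds and checks a candidate packet path for both one-way delay and delay asymmetry. Only the five-cycle clearance window and the IEC 61850 quarter-to-half-cycle guidance come from the text above; the measured delay values are hypothetical.

```python
# Illustrative latency-budget arithmetic for a teleprotection channel.
# Thresholds follow the discussion above; the measured delays are made up.

LINE_FREQ_HZ = 60.0
CYCLE_MS = 1000.0 / LINE_FREQ_HZ               # ~16.7 ms per power cycle

clearance_budget_ms = 5 * CYCLE_MS             # ~83 ms, ~100 ms in round figures
transfer_budget_ms = (0.25 * CYCLE_MS, 0.5 * CYCLE_MS)   # ~4.2 to ~8.3 ms

# Hypothetical measured one-way delays for a candidate packet path.
forward_delay_ms = 6.1
reverse_delay_ms = 6.4
asymmetry_us = abs(forward_delay_ms - reverse_delay_ms) * 1000.0

print(f"fault clearance budget:      {clearance_budget_ms:.1f} ms")
print(f"protection transfer budget:  {transfer_budget_ms[0]:.1f} to "
      f"{transfer_budget_ms[1]:.1f} ms")
print(f"one-way delay within budget: {forward_delay_ms <= transfer_budget_ms[1]}")
print(f"channel delay asymmetry:     {asymmetry_us:.0f} us")
```

The asymmetry figure is what the jitter buffers, traffic management and packet-based synchronization tools discussed below are intended to keep within the tolerance of the relays.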
While 816 optimally Teleprotection systems should support zero asymmetric 817 delay, typical legacy relays can tolerate discrepancies of up to 818 750us. 820 The main tools available for lowering delay variation below this 821 threshold are: 823 o A jitter buffer at the multiplexers on each end of the line can be 824 used to offset delay variation by queuing sent and received 825 packets. The length of the queues must balance the need to 826 regulate the rate of transmission with the need to limit overall 827 delay, as larger buffers result in increased latency. This is the 828 old TDM traditional way to fulfill this requirement. 830 o Traffic management tools ensure that the Teleprotection signals 831 receive the highest transmission priority and minimize the number 832 of jitter addition during the path. This is one way to meet the 833 requirement in IP networks. 835 o Standard Packet-Based synchronization technologies, such as 836 1588-2008 Precision Time Protocol (PTP) and Synchronous Ethernet 837 (Sync-E), can help maintain stable networks by keeping a highly 838 accurate clock source on the different network devices involved. 840 3.2.2.1.1.2.1. Other traffic characteristics 842 o Redundancy: The existence in a system of more than one means of 843 accomplishing a given function. 845 o Recovery time : The duration of time within which a business 846 process must be restored after any type of disruption in order to 847 avoid unacceptable consequences associated with a break in 848 business continuity. 850 o performance management : In networking, a management function 851 defined for controlling and analyzing different parameters/metrics 852 such as the throughput, error rate. 854 o packet loss : One or more packets of data travelling across 855 network fail to reach their destination. 857 3.2.2.1.1.2.2. Teleprotection network requirements 859 The following table captures the main network requirements (this is 860 based on IEC 61850 standard) 862 +-----------------------------+-------------------------------------+ 863 | Teleprotection Requirement | Attribute | 864 +-----------------------------+-------------------------------------+ 865 | One way maximum delay | 4-10 ms | 866 | Asymetric delay required | Yes | 867 | Maximum jitter | less than 250 us (750 us for legacy | 868 | | IED) | 869 | Topology | Point to point, point to Multi- | 870 | | point | 871 | Availability | 99.9999 | 872 | precise timing required | Yes | 873 | Recovery time on node | less than 50ms - hitless | 874 | failure | | 875 | performance management | Yes, Mandatory | 876 | Redundancy | Yes | 877 | Packet loss | 0.1% to 1% | 878 +-----------------------------+-------------------------------------+ 880 Table 1: Teleprotection network requirements 882 3.2.2.1.2. Inter-Trip Protection scheme 884 Inter-tripping is the controlled tripping of a circuit breaker to 885 complete the isolation of a circuit or piece of apparatus in concert 886 with the tripping of other circuit breakers. The main use of such 887 schemes is to ensure that protection at both ends of a faulted 888 circuit will operate to isolate the equipment concerned. Inter- 889 tripping schemes use signaling to convey a trip command to remote 890 circuit breakers to isolate circuits. 
892 +--------------------------------+----------------------------------+ 893 | Inter-Trip protection | Attribute | 894 | Requirement | | 895 +--------------------------------+----------------------------------+ 896 | One way maximum delay | 5 ms | 897 | Asymetric delay required | No | 898 | Maximum jitter | Not critical | 899 | Topology | Point to point, point to Multi- | 900 | | point | 901 | Bandwidth | 64 Kbps | 902 | Availability | 99.9999 | 903 | precise timing required | Yes | 904 | Recovery time on node failure | less than 50ms - hitless | 905 | performance management | Yes, Mandatory | 906 | Redundancy | Yes | 907 | Packet loss | 0.1% | 908 +--------------------------------+----------------------------------+ 910 Table 2: Inter-Trip protection network requirements 912 3.2.2.1.3. Current Differential Protection Scheme 914 Current differential protection is commonly used for line protection, 915 and is typical for protecting parallel circuits. A main advantage 916 for differential protection is that, compared to overcurrent 917 protection, it allows only the faulted circuit to be de-energized in 918 case of a fault. At both end of the lines, the current is measured 919 by the differential relays, and based on Kirchhoff's law, both relays 920 will trip the circuit breaker if the current going into the line does 921 not equal the current going out of the line. This type of protection 922 scheme assumes some form of communications being present between the 923 relays at both end of the line, to allow both relays to compare 924 measured current values. A fault in line 1 will cause overcurrent to 925 be flowing in both lines, but because the current in line 2 is a 926 through following current, this current is measured equal at both 927 ends of the line, therefore the differential relays on line 2 will 928 not trip line 2. Line 1 will be tripped, as the relays will not 929 measure the same currents at both ends of the line. Line 930 differential protection schemes assume a very low telecommunications 931 delay between both relays, often as low as 5ms. Moreover, as those 932 systems are often not time-synchronized, they also assume symmetric 933 telecommunications paths with constant delay, which allows comparing 934 current measurement values taken at the exact same time. 936 +----------------------------------+--------------------------------+ 937 | Current Differential protection | Attribute | 938 | Requirement | | 939 +----------------------------------+--------------------------------+ 940 | One way maximum delay | 5 ms | 941 | Asymetric delay Required | Yes | 942 | Maximum jitter | less than 250 us (750us for | 943 | | legacy IED) | 944 | Topology | Point to point, point to | 945 | | Multi-point | 946 | Bandwidth | 64 Kbps | 947 | Availability | 99.9999 | 948 | precise timing required | Yes | 949 | Recovery time on node failure | less than 50ms - hitless | 950 | performance management | Yes, Mandatory | 951 | Redundancy | Yes | 952 | Packet loss | 0.1% | 953 +----------------------------------+--------------------------------+ 955 Table 3: Current Differential Protection requirements 957 3.2.2.1.4. Distance Protection Scheme 959 Distance (Impedance Relay) protection scheme is based on voltage and 960 current measurements. A fault on a circuit will generally create a 961 sag in the voltage level. 
If the ratio of voltage to current 962 measured at the protection relay terminals, which equates to an 963 impedance element, falls within a set threshold the circuit breaker 964 will operate. The operating characteristics of this protection are 965 based on the line characteristics. This means that when a fault 966 appears on the line, the impedance setting in the relay is compared 967 to the apparent impedance of the line from the relay terminals to the 968 fault. If the relay setting is determined to be below the apparent 969 impedance it is determined that the fault is within the zone of 970 protection. When the transmission line length is under a minimum 971 length, distance protection becomes more difficult to coordinate. In 972 these instances the best choice of protection is current differential 973 protection. 975 +-------------------------------+-----------------------------------+ 976 | Distance protection | Attribute | 977 | Requirement | | 978 +-------------------------------+-----------------------------------+ 979 | One way maximum delay | 5 ms | 980 | Asymetric delay Required | No | 981 | Maximum jitter | Not critical | 982 | Topology | Point to point, point to Multi- | 983 | | point | 984 | Bandwidth | 64 Kbps | 985 | Availability | 99.9999 | 986 | precise timing required | Yes | 987 | Recovery time on node failure | less than 50ms - hitless | 988 | performance management | Yes, Mandatory | 989 | Redundancy | Yes | 990 | Packet loss | 0.1% | 991 +-------------------------------+-----------------------------------+ 993 Table 4: Distance Protection requirements 995 3.2.2.1.5. Inter-Substation Protection Signaling 997 This use case describes the exchange of Sampled Value and/or GOOSE 998 (Generic Object Oriented Substation Events) message between 999 Intelligent Electronic Devices (IED) in two substations for 1000 protection and tripping coordination. The two IEDs are in a master- 1001 slave mode. 1003 The Current Transformer or Voltage Transformer (CT/VT) in one 1004 substation sends the sampled analog voltage or current value to the 1005 Merging Unit (MU) over hard wire. The merging unit sends the time- 1006 synchronized 61850-9-2 sampled values to the slave IED. The slave 1007 IED forwards the information to the Master IED in the other 1008 substation. The master IED makes the determination (for example 1009 based on sampled value differentials) to send a trip command to the 1010 originating IED. Once the slave IED/Relay receives the GOOSE trip 1011 for breaker tripping, it opens the breaker. It then sends a 1012 confirmation message back to the master. All data exchanges between 1013 IEDs are either through Sampled Value and/or GOOSE messages. 1015 +----------------------------------+--------------------------------+ 1016 | Inter-Substation protection | Attribute | 1017 | Requirement | | 1018 +----------------------------------+--------------------------------+ 1019 | One way maximum delay | 5 ms | 1020 | Asymetric delay Required | No | 1021 | Maximum jitter | Not critical | 1022 | Topology | Point to point, point to | 1023 | | Multi-point | 1024 | Bandwidth | 64 Kbps | 1025 | Availability | 99.9999 | 1026 | precise timing required | Yes | 1027 | Recovery time on node failure | less than 50ms - hitless | 1028 | performance management | Yes, Mandatory | 1029 | Redundancy | Yes | 1030 | Packet loss | 1% | 1031 +----------------------------------+--------------------------------+ 1033 Table 5: Inter-Substation Protection requirements 1035 3.2.2.1.6. 
Intra-Substation Process Bus Communications 1037 This use case describes the data flow from the CT/VT to the IEDs in 1038 the substation via the merging unit (MU). The CT/VT in the 1039 substation send the sampled value (analog voltage or current) to the 1040 Merging Unit (MU) over hard wire. The merging unit sends the time- 1041 synchronized 61850-9-2 sampled values to the IEDs in the substation 1042 in GOOSE message format. The GPS Master Clock can send 1PPS or 1043 IRIG-B format to MU through serial port, or IEEE 1588 protocol via 1044 network. Process bus communication using 61850 simplifies 1045 connectivity within the substation and removes the requirement for 1046 multiple serial connections and removes the slow serial bus 1047 architectures that are typically used. This also ensures increased 1048 flexibility and increased speed with the use of multicast messaging 1049 between multiple devices. 1051 +----------------------------------+--------------------------------+ 1052 | Intra-Substation protection | Attribute | 1053 | Requirement | | 1054 +----------------------------------+--------------------------------+ 1055 | One way maximum delay | 5 ms | 1056 | Asymetric delay Required | No | 1057 | Maximum jitter | Not critical | 1058 | Topology | Point to point, point to | 1059 | | Multi-point | 1060 | Bandwidth | 64 Kbps | 1061 | Availability | 99.9999 | 1062 | precise timing required | Yes | 1063 | Recovery time on Node failure | less than 50ms - hitless | 1064 | performance management | Yes, Mandatory | 1065 | Redundancy | Yes - No | 1066 | Packet loss | 0.1% | 1067 +----------------------------------+--------------------------------+ 1069 Table 6: Intra-Substation Protection requirements 1071 3.2.2.1.7. Wide Area Monitoring and Control Systems 1073 The application of synchrophasor measurement data from Phasor 1074 Measurement Units (PMU) to Wide Area Monitoring and Control Systems 1075 promises to provide important new capabilities for improving system 1076 stability. Access to PMU data enables more timely situational 1077 awareness over larger portions of the grid than what has been 1078 possible historically with normal SCADA (Supervisory Control and Data 1079 Acquisition) data. Handling the volume and real-time nature of 1080 synchrophasor data presents unique challenges for existing 1081 application architectures. Wide Area management System (WAMS) makes 1082 it possible for the condition of the bulk power system to be observed 1083 and understood in real-time so that protective, preventative, or 1084 corrective action can be taken. 
Because of the very high sampling 1085 rate of measurements and the strict requirement for time 1086 synchronization of the samples, WAMS has stringent telecommunications 1087 requirements in an IP network that are captured in the following 1088 table: 1090 +----------------------+--------------------------------------------+ 1091 | WAMS Requirement | Attribute | 1092 +----------------------+--------------------------------------------+ 1093 | One way maximum | 50 ms | 1094 | delay | | 1095 | Asymetric delay | No | 1096 | Required | | 1097 | Maximum jitter | Not critical | 1098 | Topology | Point to point, point to Multi-point, | 1099 | | Multi-point to Multi-point | 1100 | Bandwidth | 100 Kbps | 1101 | Availability | 99.9999 | 1102 | precise timing | Yes | 1103 | required | | 1104 | Recovery time on | less than 50ms - hitless | 1105 | Node failure | | 1106 | performance | Yes, Mandatory | 1107 | management | | 1108 | Redundancy | Yes | 1109 | Packet loss | 1% | 1110 +----------------------+--------------------------------------------+ 1112 Table 7: WAMS Special Communication Requirements 1114 3.2.2.1.8. IEC 61850 WAN engineering guidelines requirement 1115 classification 1117 The IEC (International Electrotechnical Commission) has recently 1118 published a Technical Report which offers guidelines on how to define 1119 and deploy Wide Area Networks for the interconnections of electric 1120 substations, generation plants and SCADA operation centers. The IEC 1121 61850-90-12 is providing a classification of WAN communication 1122 requirements into 4 classes. You will find herafter the table 1123 summarizing these requirements: 1125 +----------------+------------+------------+------------+-----------+ 1126 | WAN | Class WA | Class WB | Class WC | Class WD | 1127 | Requirement | | | | | 1128 +----------------+------------+------------+------------+-----------+ 1129 | Application | EHV (Extra | HV (High | MV (Medium | General | 1130 | field | High | Voltage) | Voltage) | purpose | 1131 | | Voltage) | | | | 1132 | Latency | 5 ms | 10 ms | 100 ms | > 100 ms | 1133 | Jitter | 10 us | 100 us | 1 ms | 10 ms | 1134 | Latency | 100 us | 1 ms | 10 ms | 100 ms | 1135 | Asymetry | | | | | 1136 | Time Accuracy | 1 us | 10 us | 100 us | 10 to 100 | 1137 | | | | | ms | 1138 | Bit Error rate | 10-7 to | 10-5 to | 10-3 | | 1139 | | 10-6 | 10-4 | | | 1140 | Unavailability | 10-7 to | 10-5 to | 10-3 | | 1141 | | 10-6 | 10-4 | | | 1142 | Recovery delay | Zero | 50 ms | 5 s | 50 s | 1143 | Cyber security | extremely | High | Medium | Medium | 1144 | | high | | | | 1145 +----------------+------------+------------+------------+-----------+ 1147 Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC 1149 3.2.2.2. Distribution use case 1151 3.2.2.2.1. Fault Location Isolation and Service Restoration (FLISR) 1153 As the name implies, Fault Location, Isolation, and Service 1154 Restoration (FLISR) refers to the ability to automatically locate the 1155 fault, isolate the fault, and restore service in the distribution 1156 network. It is a self-healing feature whose purpose is to minimize 1157 the impact of faults by serving portions of the loads on the affected 1158 circuit by switching to other circuits. It reduces the number of 1159 customers that experience a sustained power outage by reconfiguring 1160 distribution circuits. This will likely be the first wide spread 1161 application of distributed intelligence in the grid. Secondary 1162 substations can be connected to multiple primary substations. 
1163 Normally, static power switch statuses (open/closed) in the network 1164 dictate the power flow to secondary substations. Reconfiguring the 1165 network in the event of a fault is typically done manually on site to 1166 operate switchgear to energize/de-energize alternate paths. 1167 Automating the operation of substation switchgear allows the utility 1168 to have a more dynamic network where the flow of power can be altered 1169 under fault conditions but also during times of peak load. It allows 1170 the utility to shift peak loads around the network. Or, to be more 1171 precise, alters the configuration of the network to move loads 1172 between different primary substations. The FLISR capability can be 1173 enabled in two modes: 1175 o Managed centrally from DMS (Distribution Management System), or 1177 o Executed locally through distributed control via intelligent 1178 switches and fault sensors. 1180 There are 3 distinct sub-functions that are performed: 1182 1. Fault Location Identification 1184 This sub-function is initiated by SCADA inputs, such as lockouts, 1185 fault indications/location, and, also, by input from the Outage 1186 Management System (OMS), and in the future by inputs from fault- 1187 predicting devices. It determines the specific protective device, 1188 which has cleared the sustained fault, identifies the de-energized 1189 sections, and estimates the probable location of the actual or the 1190 expected fault. It distinguishes faults cleared by controllable 1191 protective devices from those cleared by fuses, and identifies 1192 momentary outages and inrush/cold load pick-up currents. This step 1193 is also referred to as Fault Detection Classification and Location 1194 (FDCL). This step helps to expedite the restoration of faulted 1195 sections through fast fault location identification and improved 1196 diagnostic information available for crew dispatch. Also provides 1197 visualization of fault information to design and implement a 1198 switching plan to isolate the fault. 1200 2. Fault Type Determination 1202 I. Indicates faults cleared by controllable protective devices by 1203 distinguishing between: 1205 a. Faults cleared by fuses 1207 b. Momentary outages 1209 c. Inrush/cold load current 1211 II. Determines the faulted sections based on SCADA fault indications 1212 and protection lockout signals 1214 III. Increases the accuracy of the fault location estimation based 1215 on SCADA fault current measurements and real-time fault analysis 1217 3. Fault Isolation and Service Restoration 1218 Once the location and type of the fault has been pinpointed, the 1219 systems will attempt to isolate the fault and restore the non-faulted 1220 section of the network. This can have three modes of operation: 1222 I. Closed-loop mode : This is initiated by the Fault location sub- 1223 function. It generates a switching order (i.e., sequence of 1224 switching) for the remotely controlled switching devices to isolate 1225 the faulted section, and restore service to the non-faulted sections. 1226 The switching order is automatically executed via SCADA. 1228 II. Advisory mode : This is initiated by the Fault location sub- 1229 function. It generates a switching order for remotely and manually 1230 controlled switching devices to isolate the faulted section, and 1231 restore service to the non-faulted sections. The switching order is 1232 presented to operator for approval and execution. 1234 III. Study mode : the operator initiates this function. 
It analyzes 1235 a saved case modified by the operator, and generates a switching 1236 order under the operating conditions specified by the operator. 1238 With the increasing volume of data that are collected through fault 1239 sensors, utilities will use Big Data query and analysis tools to 1240 study outage information to anticipate and prevent outages by 1241 detecting failure patterns and their correlation with asset age, 1242 type, load profiles, time of day, weather conditions, and other 1243 conditions to discover conditions that lead to faults and take the 1244 necessary preventive and corrective measures. 1246 +----------------------+--------------------------------------------+ 1247 | FLISR Requirement | Attribute | 1248 +----------------------+--------------------------------------------+ 1249 | One way maximum | 80 ms | 1250 | delay | | 1251 | Asymetric delay | No | 1252 | Required | | 1253 | Maximum jitter | 40 ms | 1254 | Topology | Point to point, point to Multi-point, | 1255 | | Multi-point to Multi-point | 1256 | Bandwidth | 64 Kbps | 1257 | Availability | 99.9999 | 1258 | precise timing | Yes | 1259 | required | | 1260 | Recovery time on | Depends on customer impact | 1261 | Node failure | | 1262 | performance | Yes, Mandatory | 1263 | management | | 1264 | Redundancy | Yes | 1265 | Packet loss | 0.1% | 1266 +----------------------+--------------------------------------------+ 1268 Table 9: FLISR Communication Requirements 1270 3.2.2.3. Generation use case 1272 3.2.2.3.1. Frequency Control / Automatic Generation Control (AGC) 1274 The system frequency should be maintained within a very narrow band. 1275 Deviations from the acceptable frequency range are detected and 1276 forwarded to the Load Frequency Control (LFC) system so that required 1277 up or down generation increase / decrease pulses can be sent to the 1278 power plants for frequency regulation. The trend in system frequency 1279 is a measure of mismatch between demand and generation, and is a 1280 necessary parameter for load control in interconnected systems. 1282 Automatic generation control (AGC) is a system for adjusting the 1283 power output of generators at different power plants, in response to 1284 changes in the load. Since a power grid requires that generation and 1285 load closely balance moment by moment, frequent adjustments to the 1286 output of generators are necessary. The balance can be judged by 1287 measuring the system frequency; if it is increasing, more power is 1288 being generated than used, and all machines in the system are 1289 accelerating. If the system frequency is decreasing, more demand is 1290 on the system than the instantaneous generation can provide, and all 1291 generators are slowing down. 1293 Where the grid has tie lines to adjacent control areas, automatic 1294 generation control helps maintain the power interchanges over the tie 1295 lines at the scheduled levels. The AGC takes into account various 1296 parameters including the most economical units to adjust, the 1297 coordination of thermal, hydroelectric, and other generation types, 1298 and even constraints related to the stability of the system and 1299 capacity of interconnections to other power grids. 1301 For the purpose of AGC we use static frequency measurements and 1302 averaging methods are used to get a more precise measure of system 1303 frequency in steady-state conditions. 
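To make the steady-state measurement concrete, the following minimal sketch (illustrative only; the nominal frequency, the dead-band, and the function names are assumptions, not values taken from any standard) averages frequency samples and derives the kind of raise/lower indication that an LFC system forwards to the power plants:

   # Illustrative sketch of steady-state frequency averaging for AGC.
   # Nominal frequency and dead-band are assumed values.

   NOMINAL_HZ = 50.0          # or 60.0 depending on the interconnection
   DEAD_BAND_HZ = 0.02        # assumed regulation dead-band

   def average_frequency(samples):
       """Average raw frequency samples to estimate steady-state frequency."""
       return sum(samples) / len(samples)

   def regulation_signal(samples):
       """Return 'raise', 'lower', or 'hold' based on the averaged deviation."""
       deviation = average_frequency(samples) - NOMINAL_HZ
       if deviation < -DEAD_BAND_HZ:
           return "raise"     # under-frequency: demand exceeds generation
       if deviation > DEAD_BAND_HZ:
           return "lower"     # over-frequency: generation exceeds demand
       return "hold"

   # Example: ten one-second samples slightly below nominal
   print(regulation_signal([49.97, 49.98, 49.97, 49.96, 49.98,
                            49.97, 49.98, 49.97, 49.96, 49.97]))  # -> 'raise'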
1305 During disturbances, more real-time dynamic measurements of system 1306 frequency are taken using PMUs, especially when different areas of 1307 the system exhibit different frequencies. But that is outside the 1308 scope of this use case. 1310 +---------------------------------------------------+---------------+ 1311 | FCAG (Frequency Control Automatic Generation) | Attribute | 1312 | Requirement | | 1313 +---------------------------------------------------+---------------+ 1314 | One way maximum delay | 500 ms | 1315 | Asymmetric delay Required | No | 1316 | Maximum jitter | Not critical | 1317 | Topology | Point to | 1318 | | point | 1319 | Bandwidth | 20 Kbps | 1320 | Availability | 99.999 | 1321 | precise timing required | Yes | 1322 | Recovery time on Node failure | N/A | 1323 | performance management | Yes, | 1324 | | Mandatory | 1325 | Redundancy | Yes | 1326 | Packet loss | 1% | 1327 +---------------------------------------------------+---------------+ 1329 Table 10: FCAG Communication Requirements 1331 3.2.3. Specific Network Topologies of Smart Grid Applications 1333 Utilities often have very large private telecommunications networks 1334 covering an entire territory or country. The main purpose of the 1335 network, until now, has been to support transmission network 1336 monitoring, control, and automation, remote control of generation 1337 sites, and providing FCAPS (Fault, Configuration, Accounting, 1338 Performance, Security) services from centralized network operation 1339 centers. 1341 Going forward, one network will support operation and maintenance of 1342 electrical networks (generation, transmission, and distribution), 1343 voice and data services for tens of thousands of employees and for 1344 exchange with neighboring interconnections, and administrative 1345 services. To meet those requirements, a utility may deploy several 1346 physical networks leveraging different technologies across the 1347 country, for instance an optical network and a microwave network. 1348 Each protection and automation system between two points has two 1349 telecommunications circuits, one on each network. Path diversity 1350 between two substations is key. Regardless of the event type 1351 (hurricane, ice storm, etc.), one path shall stay available so the 1352 SPS can still operate. 1354 In the optical network, signals are transmitted over more than tens 1355 of thousands of circuits using fiber optic links, microwave links, and 1356 telephone cables. This network is the nervous system of the 1357 utility's power transmission operations. The optical network 1358 represents tens of thousands of km of cable deployed along the power 1359 lines. 1361 Because transmission substations can be located as far as 280 km 1362 apart, the optical signal is amplified so that it can span the 1363 full distance between substations. 1365 3.2.4. Precision Time Protocol 1367 Some utilities do not use GPS clocks in generation substations. One 1368 of the main reasons is that some of the generation plants are 30 to 1369 50 meters underground, where the GPS signal can be weak and 1370 unreliable. Instead, atomic clocks are used. Clocks are 1371 synchronized amongst each other. Rubidium clocks provide the clock 1372 signal and 1 ms timestamps for IRIG-B. Some companies plan to transition to the 1373 Precision Time Protocol (IEEE 1588), distributing the synchronization 1374 signal over the IP/MPLS network. 1376 The Precision Time Protocol (PTP) is defined in IEEE standard 1588.
1377 PTP is applicable to distributed systems consisting of one or more 1378 nodes, communicating over a network. Nodes are modeled as containing 1379 a real-time clock that may be used by applications within the node 1380 for various purposes such as generating time-stamps for data or 1381 ordering events managed by the node. The protocol provides a 1382 mechanism for synchronizing the clocks of participating nodes to a 1383 high degree of accuracy and precision. 1385 PTP operates based on the following assumptions: 1387 It is assumed that the network eliminates cyclic forwarding of PTP 1388 messages within each communication path (e.g., by using a spanning 1389 tree protocol). PTP eliminates cyclic forwarding of PTP messages 1390 between communication paths. 1392 PTP is tolerant of an occasional missed message, duplicated 1393 message, or message that arrives out of order. However, PTP 1394 assumes that such impairments are relatively rare. 1396 PTP was designed assuming a multicast communication model. PTP 1397 also supports a unicast communication model as long as the 1398 behavior of the protocol is preserved. 1400 Like all message-based time transfer protocols, PTP time accuracy 1401 is degraded by asymmetry in the paths taken by event messages. 1402 Asymmetry is not detectable by PTP; however, if it is known, PTP 1403 can correct for it. 1405 A time-stamp event is generated at the time of transmission and 1406 reception of any event message. The time-stamp event occurs when the 1407 message's timestamp point crosses the boundary between the node and 1408 the network. 1410 IEC 61850 will recommend the use of the IEEE 1588 PTP Utility Profile 1411 (as defined in IEC 62439-3 Annex B), which supports the 1412 redundant attachment of clocks to Parallel Redundancy Protocol (PRP) 1413 and High-availability Seamless Redundancy (HSR) networks. 1415 3.3. IANA Considerations 1417 This memo includes no request to IANA. 1419 3.4. Security Considerations 1421 3.4.1. Current Practices and Their Limitations 1423 Grid monitoring and control devices are already targets for cyber 1424 attacks, and legacy telecommunications protocols have many intrinsic 1425 network-related vulnerabilities. DNP3, Modbus, PROFIBUS/PROFINET, 1426 and other protocols are designed around a common paradigm of request 1427 and response. Each protocol is designed for a master device such as 1428 an HMI (Human Machine Interface) system to send commands to 1429 subordinate slave devices to retrieve data (reading inputs) or 1430 control (writing to outputs). Because many of these protocols lack 1431 authentication, encryption, or other basic security measures, they 1432 are prone to network-based attacks, allowing a malicious actor to 1433 use the request-and-response exchange as a mechanism for 1434 command-and-control-like functionality. Specific security concerns 1435 common to most industrial control protocols, including utility 1436 telecommunication protocols, include the following: 1438 o Network or transport errors (e.g., malformed packets or excessive 1439 latency) can cause protocol failure. 1441 o Protocol commands may be available that are capable of forcing 1442 slave devices into inoperable states, including powering-off 1443 devices, forcing them into a listen-only state, or disabling 1444 alarming. 1446 o Protocol commands may be available that are capable of restarting 1447 communications and otherwise interrupting processes.
1449 o Protocol commands may be available that are capable of clearing, 1450 erasing, or resetting diagnostic information such as counters and 1451 diagnostic registers. 1453 o Protocol commands may be available that are capable of requesting 1454 sensitive information about the controllers, their configurations, 1455 or other need-to-know information. 1457 o Most protocols are application layer protocols transported over 1458 TCP; therefore it is easy to transport commands over non-standard 1459 ports or inject commands into authorized traffic flows. 1461 o Protocol commands may be available that are capable of 1462 broadcasting messages to many devices at once (i.e., a potential 1463 DoS). 1465 o Protocol commands may be available to query the device network to 1466 obtain defined points and their values (i.e., a configuration 1467 scan). 1469 o Protocol commands may be available that will list all available 1470 function codes (i.e., a function scan). 1472 o Bump-in-the-wire (BITW) solutions: a hardware device is added to 1473 provide IPsec services between two routers that are not capable of 1474 IPsec functions. This special IPsec device intercepts 1475 outgoing datagrams, adds IPsec protection to them, and 1476 strips it off incoming datagrams. BITW can add IPsec to legacy 1477 hosts and can retrofit non-IPsec routers to provide security 1478 benefits. The disadvantages are complexity and cost. 1480 These inherent vulnerabilities, along with increasing connectivity 1481 between IT and OT networks, make network-based attacks very feasible. 1482 Simple injection of malicious protocol commands provides control over 1483 the target process. Altering legitimate protocol traffic can also 1484 alter information about a process and disrupt the legitimate controls 1485 that are in place over that process. A man-in-the-middle attack 1486 could provide both control over a process and misrepresentation of 1487 data back to operator consoles. 1489 3.4.2. Security Trends in Utility Networks 1491 Although advanced telecommunications networks can assist in 1492 transforming the energy industry, playing a critical role in 1493 maintaining high levels of reliability, performance, and 1494 manageability, they also introduce the need for an integrated 1495 security infrastructure. Many of the technologies being deployed to 1496 support smart grid projects, such as smart meters and sensors, can 1497 increase the vulnerability of the grid to attack. Top security 1498 concerns for utilities migrating to an intelligent smart grid 1499 telecommunications platform center on the following trends: 1501 o Integration of distributed energy resources 1503 o Proliferation of digital devices to enable management, automation, 1504 protection, and control 1506 o Regulatory mandates to comply with standards for critical 1507 infrastructure protection 1509 o Migration to new systems for outage management, distribution 1510 automation, condition-based maintenance, load forecasting, and 1511 smart metering 1513 o Demand for new levels of customer service and energy management 1515 This development of a diverse set of networks to support the 1516 integration of microgrids, open-access energy competition, and the 1517 use of network-controlled devices is driving the need for a converged 1518 security infrastructure for all participants in the smart grid, 1519 including utilities, energy service providers, and large commercial, 1520 industrial, and residential customers.
Securing the assets of 1521 electric power delivery systems, from the control center to the 1522 substation, to the feeders and down to customer meters, requires an 1523 end-to-end security infrastructure that protects the myriad of 1524 telecommunications assets used to operate, monitor, and control power 1525 flow and measurement. Cyber security refers to all the security 1526 issues in automation and telecommunications that affect any functions 1527 related to the operation of the electric power systems. 1528 Specifically, it involves the concepts of: 1530 o Integrity : data cannot be altered undetectably 1532 o Authenticity : the telecommunications parties involved must be 1533 validated as genuine 1535 o Authorization : only requests and commands from the authorized 1536 users can be accepted by the system 1538 o Confidentiality : data must not be accessible to any 1539 unauthenticated users 1541 When designing and deploying new smart grid devices and 1542 telecommunications systems, it's imperative to understand the various 1543 impacts of these new components under a variety of attack situations 1544 on the power grid. Consequences of a cyber attack on the grid 1545 telecommunications network can be catastrophic. This is why security 1546 for smart grid is not just an ad hoc feature or product, it's a 1547 complete framework integrating both physical and Cyber security 1548 requirements and covering the entire smart grid networks from 1549 generation to distribution. Security has therefore become one of the 1550 main foundations of the utility telecom network architecture and must 1551 be considered at every layer with a defense-in-depth approach. 1552 Migrating to IP based protocols is key to address these challenges 1553 for two reasons: 1555 1. IP enables a rich set of features and capabilities to enhance the 1556 security posture 1558 2. IP is based on open standards, which allows interoperability 1559 between different vendors and products, driving down the costs 1560 associated with implementing security solutions in OT networks. 1562 Securing OT (Operation technology) telecommunications over packet- 1563 switched IP networks follow the same principles that are foundational 1564 for securing the IT infrastructure, i.e., consideration must be given 1565 to enforcing electronic access control for both person-to-machine and 1566 machine-to-machine communications, and providing the appropriate 1567 levels of data privacy, device and platform integrity, and threat 1568 detection and mitigation. 1570 4. Building Automation Systems 1572 4.1. Use Case Description 1574 A Building Automation System (BAS) manages equipment and sensors in a 1575 building for improving residents' comfort, reducing energy 1576 consumption, and responding to failures and emergencies. For 1577 example, the BAS measures the temperature of a room using sensors and 1578 then controls the HVAC (heating, ventilating, and air conditioning) 1579 to maintain a set temperature and minimize energy consumption. 1581 A BAS primarily performs the following functions: 1583 o Periodically measures states of devices, for example humidity and 1584 illuminance of rooms, open/close state of doors, FAN speed, etc. 1586 o Stores the measured data. 1588 o Provides the measured data to BAS systems and operators. 1590 o Generates alarms for abnormal state of devices. 1592 o Controls devices (e.g. turn off room lights at 10:00 PM). 1594 4.2. Building Automation Systems Today 1596 4.2.1. 
BAS Architecture 1598 A typical BAS architecture of today is shown in Figure 1. 1600 +----------------------------+ 1601 | | 1602 | BMS HMI | 1603 | | | | 1604 | +----------------------+ | 1605 | | Management Network | | 1606 | +----------------------+ | 1607 | | | | 1608 | LC LC | 1609 | | | | 1610 | +----------------------+ | 1611 | | Field Network | | 1612 | +----------------------+ | 1613 | | | | | | 1614 | Dev Dev Dev Dev | 1615 | | 1616 +----------------------------+ 1618 BMS := Building Management Server 1619 HMI := Human Machine Interface 1620 LC := Local Controller 1622 Figure 1: BAS architecture 1624 There are typically two layers of network in a BAS. The upper one is 1625 called the Management Network and the lower one is called the Field 1626 Network. In management networks an IP-based communication protocol 1627 is used, while in field networks non-IP based communication protocols 1628 ("field protocols") are mainly used. Field networks have specific 1629 timing requirements, whereas management networks can be best-effort. 1631 A Human Machine Interface (HMI) is typically a desktop PC used by 1632 operators to monitor and display device states, send device control 1633 commands to Local Controllers (LCs), and configure building schedules 1634 (for example "turn off all room lights in the building at 10:00 PM"). 1636 A Building Management Server (BMS) performs the following operations. 1638 o Collect and store device states from LCs at regular intervals. 1640 o Send control values to LCs according to a building schedule. 1642 o Send an alarm signal to operators if it detects abnormal devices 1643 states. 1645 The BMS and HMI communicate with LCs via IP-based "management 1646 protocols" (see standards [bacnetip], [knx]). 1648 A LC is typically a Programmable Logic Controller (PLC) which is 1649 connected to several tens or hundreds of devices using "field 1650 protocols". An LC performs the following kinds of operations: 1652 o Measure device states and provide the information to BMS or HMI. 1654 o Send control values to devices, unilaterally or as part of a 1655 feedback control loop. 1657 There are many field protocols used today; some are standards-based 1658 and others are proprietary (see standards [lontalk], [modbus], 1659 [profibus] and [flnet]). The result is that BASs have multiple MAC/ 1660 PHY modules and interfaces. This makes BASs more expensive, slower 1661 to develop, and can result in "vendor lock-in" with multiple types of 1662 management applications. 1664 4.2.2. BAS Deployment Model 1666 An example BAS for medium or large buildings is shown in Figure 2. 1667 The physical layout spans multiple floors, and there is a monitoring 1668 room where the BAS management entities are located. Each floor will 1669 have one or more LCs depending upon the number of devices connected 1670 to the field network. 
1672 +--------------------------------------------------+ 1673 | Floor 3 | 1674 | +----LC~~~~+~~~~~+~~~~~+ | 1675 | | | | | | 1676 | | Dev Dev Dev | 1677 | | | 1678 |--- | ------------------------------------------| 1679 | | Floor 2 | 1680 | +----LC~~~~+~~~~~+~~~~~+ Field Network | 1681 | | | | | | 1682 | | Dev Dev Dev | 1683 | | | 1684 |--- | ------------------------------------------| 1685 | | Floor 1 | 1686 | +----LC~~~~+~~~~~+~~~~~+ +-----------------| 1687 | | | | | | Monitoring Room | 1688 | | Dev Dev Dev | | 1689 | | | BMS HMI | 1690 | | Management Network | | | | 1691 | +--------------------------------+-----+ | 1692 | | | 1693 +--------------------------------------------------+ 1695 Figure 2: BAS Deployment model for Medium/Large Buildings 1697 Each LC is connected to the monitoring room via the Management 1698 network, and the management functions are performed within the 1699 building. In most cases, fast Ethernet (e.g. 100BASE-T) is used for 1700 the management network. Since the management network is non- 1701 realtime, use of Ethernet without quality of service is sufficient 1702 for today's deployment. 1704 In the field network a variety of physical interfaces such as RS232C 1705 and RS485 are used, which have specific timing requirements. Thus if 1706 a field network is to be replaced with an Ethernet or wireless 1707 network, such networks must support time-critical deterministic 1708 flows. 1710 In Figure 3, another deployment model is presented in which the 1711 management system is hosted remotely. This is becoming popular for 1712 small office and residential buildings in which a standalone 1713 monitoring system is not cost-effective. 1715 +---------------+ 1716 | Remote Center | 1717 | | 1718 | BMS HMI | 1719 +------------------------------------+ | | | | 1720 | Floor 2 | | +---+---+ | 1721 | +----LC~~~~+~~~~~+ Field Network| | | | 1722 | | | | | | Router | 1723 | | Dev Dev | +-------|-------+ 1724 | | | | 1725 |--- | ------------------------------| | 1726 | | Floor 1 | | 1727 | +----LC~~~~+~~~~~+ | | 1728 | | | | | | 1729 | | Dev Dev | | 1730 | | | | 1731 | | Management Network | WAN | 1732 | +------------------------Router-------------+ 1733 | | 1734 +------------------------------------+ 1736 Figure 3: Deployment model for Small Buildings 1738 Some interoperability is possible today in the Management Network, 1739 but not in today's field networks due to their non-IP-based design. 1741 4.2.3. Use Cases for Field Networks 1743 Below are use cases for Environmental Monitoring, Fire Detection, and 1744 Feedback Control, and their implications for field network 1745 performance. 1747 4.2.3.1. Environmental Monitoring 1749 The BMS polls each LC at a maximum measurement interval of 100ms (for 1750 example to draw a historical chart of 1 second granularity with a 10x 1751 sampling interval) and then performs the operations as specified by 1752 the operator. Each LC needs to measure each of its several hundred 1753 sensors once per measurement interval. Latency is not critical in 1754 this scenario as long as all sensor values are completed in the 1755 measurement interval. Availability is expected to be 99.999 %. 1757 4.2.3.2. Fire Detection 1759 On detection of a fire, the BMS must stop the HVAC, close the fire 1760 shutters, turn on the fire sprinklers, send an alarm, etc. There are 1761 typically ~10s of sensors per LC that BMS needs to manage. 
In this 1762 scenario the measurement interval is 10-50ms, the communication delay 1763 is 10ms, and the availability must be 99.9999%. 1765 4.2.3.3. Feedback Control 1767 BAS systems utilize feedback control in various ways; the most 1768 time-critical is control of DC motors, which require a short feedback 1769 interval (1-5ms) with low communication delay (10ms) and jitter 1770 (1ms). The feedback interval depends on the characteristics of the 1771 device and a target quality of control value. There are typically 1772 ~10s of such devices per LC. 1774 Communication delay is expected to be less than 10 ms and jitter less 1775 than 1 ms, while the availability must be 99.9999%. 1777 4.2.4. Security Considerations 1779 When BAS field networks were developed it was assumed that the field 1780 networks would always be physically isolated from external networks 1781 and therefore security was not a concern. In today's world many BASs 1782 are managed remotely and are thus connected to shared IP networks, so 1783 security is definitely a concern, yet security features are not 1784 available in the majority of BAS field network deployments. 1786 The management network, being an IP-based network, has the protocols 1787 available to enable network security, but in practice many BAS 1788 systems do not implement even the available security features such as 1789 device authentication or encryption for data in transit. 1791 4.3. BAS Future 1793 In the future we expect more fine-grained environmental monitoring 1794 and lower energy consumption, which will require more sensors and 1795 devices, thus requiring larger and more complex building networks. 1797 We expect building networks to be connected to or converged with 1798 other networks (enterprise networks, home networks, and the Internet). 1800 Therefore better facilities for network management, control, 1801 reliability and security are critical in order to improve resident 1802 and operator convenience and comfort. For example, the ability to 1803 monitor and control building devices via the Internet would enable 1804 control of room lights or HVAC from a resident's 1805 desktop PC or phone application. 1807 4.4. BAS Asks 1809 The community would like to see an interoperable protocol 1810 specification that can satisfy the timing, security, availability and 1811 QoS constraints described above, such that the resulting converged 1812 network can replace the disparate field networks. Ideally this 1813 connectivity could extend to the open Internet. 1815 This would imply an architecture that can guarantee: 1817 o Low communication delays (from <10ms to 100ms in a network of 1818 several hundred devices) 1820 o Low jitter (< 1 ms) 1822 o Tight feedback intervals (1ms - 10ms) 1824 o High network availability (up to 99.9999%) 1826 o Availability of network data in disaster scenarios 1828 o Authentication between management and field devices (both local 1829 and remote) 1831 o Integrity and data origin authentication of communication data 1832 between field and management devices 1834 o Confidentiality of data when communicated to a remote device 1836 5. Wireless for Industrial Use Cases 1838 (This section was derived from draft-thubert-6tisch-4detnet-01) 1840 5.1.
Introduction 1842 The emergence of wireless technology has enabled a variety of new 1843 devices to be interconnected, at a very low marginal cost per 1844 device, at any distance ranging from Near Field to interplanetary, 1845 and in circumstances where wiring may not be practical, for instance 1846 on fast-moving or rotating devices. 1848 At the same time, a new breed of Time Sensitive Networks is being 1849 developed to enable traffic that is highly sensitive to jitter, quite 1850 sensitive to latency, and with a high degree of operational 1851 criticality so that loss should be minimized at all times. Such 1852 traffic is not limited to professional Audio/Video networks, but is 1853 also found in command and control operations such as industrial 1854 automation and vehicular sensors and actuators. 1856 At IEEE802.1, the Audio/Video Task Group [IEEE802.1TSNTG] was renamed 1857 the Time Sensitive Networking (TSN) Task Group to address Deterministic Ethernet. The 1858 Medium Access Control (MAC) of IEEE802.15.4 [IEEE802154] has evolved 1859 with the new TimeSlotted Channel Hopping (TSCH) [RFC7554] mode for 1860 deterministic industrial-type applications. TSCH was introduced with 1861 the IEEE802.15.4e [IEEE802154e] amendment and will be wrapped up in 1862 the next revision of the IEEE802.15.4 standard. For all practical 1863 purposes, this document is expected to be insensitive to future 1864 versions of the IEEE802.15.4 standard, which is thus referenced 1865 undated. 1867 Though at a different time scale, both the TSN and TSCH standards provide 1868 Deterministic capabilities to the point that a packet that pertains 1869 to a certain flow crosses the network from node to node following a 1870 very precise schedule, like a train that leaves intermediate stations 1871 at precise times along its path. With TSCH, time is formatted into 1872 timeSlots, and an individual cell is allocated to unicast or 1873 broadcast communication at the MAC level. The time-slotted operation 1874 reduces collisions, saves energy, and makes it possible to more closely 1875 engineer the network for deterministic properties. The channel 1876 hopping aspect is a simple and efficient technique to combat 1877 multi-path fading and co-channel interference (for example from Wi-Fi 1878 emitters). 1880 The 6TiSCH Architecture [I-D.ietf-6tisch-architecture] defines the 1881 remote monitoring and scheduling management of a TSCH network by a 1882 Path Computation Element (PCE), which cooperates with an abstract 1883 Network Management Entity (NME) to manage timeSlots and device 1884 resources in a manner that minimizes the interaction with and the 1885 load placed on the constrained devices. 1887 This Architecture applies the concepts of Deterministic Networking on 1888 a TSCH network to enable the switching of timeSlots in a G-MPLS 1889 manner. This document details the dependencies that 6TiSCH has on 1890 PCE [PCE] and DetNet [I-D.finn-detnet-architecture] to provide the 1891 necessary capabilities that may be specific to such networks. In 1892 turn, DetNet is expected to integrate and maintain consistency with 1893 the work that has taken place and is continuing at IEEE802.1TSN and 1894 AVnu. 1896 5.2. Terminology 1898 Readers are expected to be familiar with all the terms and concepts 1899 that are discussed in "Multi-link Subnet Support in IPv6" 1900 [I-D.ietf-ipv6-multilink-subnets]. 1902 The draft uses terminology defined or referenced in 1903 [I-D.ietf-6tisch-terminology] and 1904 [I-D.ietf-roll-rpl-industrial-applicability].
1906 The draft also conforms to the terms and models described in 1907 [RFC3444] and uses the vocabulary and the concepts defined in 1908 [RFC4291] for the IPv6 Architecture. 1910 5.3. 6TiSCH Overview 1912 The scope of the present work is a subnet that, in its basic 1913 configuration, is made of a TSCH [RFC7554] MAC Low Power Lossy 1914 Network (LLN). 1916 ---+-------- ............ ------------ 1917 | External Network | 1918 | +-----+ 1919 +-----+ | NME | 1920 | | LLN Border | | 1921 | | router +-----+ 1922 +-----+ 1923 o o o 1924 o o o o 1925 o o LLN o o o 1926 o o o o 1927 o 1929 Figure 4: Basic Configuration of a 6TiSCH Network 1931 In the extended configuration, a Backbone Router (6BBR) federates 1932 multiple 6TiSCH in a single subnet over a backbone. 6TiSCH 6BBRs 1933 synchronize with one another over the backbone, so as to ensure that 1934 the multiple LLNs that form the IPv6 subnet stay tightly 1935 synchronized. 1937 ---+-------- ............ ------------ 1938 | External Network | 1939 | +-----+ 1940 | +-----+ | NME | 1941 +-----+ | +-----+ | | 1942 | | Router | | PCE | +-----+ 1943 | | +--| | 1944 +-----+ +-----+ 1945 | | 1946 | Subnet Backbone | 1947 +--------------------+------------------+ 1948 | | | 1949 +-----+ +-----+ +-----+ 1950 | | Backbone | | Backbone | | Backbone 1951 o | | router | | router | | router 1952 +-----+ +-----+ +-----+ 1953 o o o o o 1954 o o o o o o o o o o o 1955 o o o LLN o o o o 1956 o o o o o o o o o o o o 1958 Figure 5: Extended Configuration of a 6TiSCH Network 1960 If the Backbone is Deterministic, then the Backbone Router ensures 1961 that the end-to-end deterministic behavior is maintained between the 1962 LLN and the backbone. This SHOULD be done in conformance to the 1963 DetNet Architecture [I-D.finn-detnet-architecture] which studies 1964 Layer-3 aspects of Deterministic Networks, and covers networks that 1965 span multiple Layer-2 domains. One particular requirement is that 1966 the PCE MUST be able to compute a deterministic path and to end 1967 across the TSCH network and an IEEE802.1 TSN Ethernet backbone, and 1968 DetNet MUST enable end-to-end deterministic forwarding. 1970 6TiSCH defines the concept of a Track, which is a complex form of a 1971 uni-directional Circuit ([I-D.ietf-6tisch-terminology]). As opposed 1972 to a simple circuit that is a sequence of nodes and links, a Track is 1973 shaped as a directed acyclic graph towards a destination to support 1974 multi-path forwarding and route around failures. A Track may also 1975 branch off and rejoin, for the purpose of the so-called Packet 1976 Replication and Elimination (PRE), over non congruent branches. PRE 1977 may be used to complement layer-2 Automatic Repeat reQuest (ARQ) to 1978 meet industrial expectations in Packet Delivery Ratio (PDR), in 1979 particular when the Track extends beyond the 6TiSCH network. 1981 +-----+ 1982 | IoT | 1983 | G/W | 1984 +-----+ 1985 ^ <---- Elimination 1986 | | 1987 Track branch | | 1988 +-------+ +--------+ Subnet Backbone 1989 | | 1990 +--|--+ +--|--+ 1991 | | | Backbone | | | Backbone 1992 o | | | router | | | router 1993 +--/--+ +--|--+ 1994 o / o o---o----/ o 1995 o o---o--/ o o o o o 1996 o \ / o o LLN o 1997 o v <---- Replication 1998 o 2000 Figure 6: End-to-End deterministic Track 2002 In the example above, a Track is laid out from a field device in a 2003 6TiSCH network to an IoT gateway that is located on a IEEE802.1 TSN 2004 backbone. 
2006 The Replication function in the field device sends a copy of each 2007 packet over two different branches, and the PCE schedules each hop of 2008 both branches so that the two copies arrive in due time at the 2009 gateway. In case of a loss on one branch, the other copy 2010 of the packet can still make it in due time. If two copies make it to 2011 the IoT gateway, the Elimination function in the gateway ignores the 2012 extra packet and presents only one copy to upper layers. 2014 At each 6TiSCH hop along the Track, the PCE may schedule more than 2015 one timeSlot for a packet, so as to support Layer-2 retries (ARQ). 2016 It is also possible that the field device only uses the second branch 2017 if sending over the first branch fails. 2019 In current deployments, a TSCH Track does not necessarily support PRE 2020 but is systematically multi-path. This means that a Track is 2021 scheduled so as to ensure that each hop has at least two forwarding 2022 solutions, and the forwarding decision is to try the preferred one 2023 and use the other in case of Layer-2 transmission failure as detected 2024 by ARQ. 2026 5.3.1. TSCH and 6top 2028 6top is a logical link control sitting between the IP layer and the 2029 TSCH MAC layer, which provides the link abstraction that is required 2030 for IP operations. The 6top operations are specified in 2031 [I-D.wang-6tisch-6top-sublayer]. 2033 The 6top data model and management interfaces are further discussed 2034 in [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap]. 2036 The architecture defines "soft" cells and "hard" cells. "Hard" cells 2037 are owned and managed by a separate scheduling entity (e.g., a PCE) 2038 that specifies the slotOffset/channelOffset of the cells to be 2039 added/moved/deleted, in which case 6top can only act as instructed, 2040 and may not move hard cells in the TSCH schedule on its own. 2042 5.3.2. SlotFrames and Priorities 2044 A slotFrame is the base object that the PCE needs to manipulate to 2045 program a schedule into an LLN node. Elaboration on that concept can 2046 be found in section "SlotFrames and Priorities" of the 6TiSCH 2047 architecture [I-D.ietf-6tisch-architecture]. The architecture also 2048 details how the schedule is constructed and how transmission 2049 resources called cells can be allocated to particular transmissions 2050 so as to avoid collisions. 2052 5.3.3. Schedule Management by a PCE 2054 6TiSCH supports a mixed model of centralized routes and distributed 2055 routes. Centralized routes can, for example, be computed by an entity 2056 such as a PCE. Distributed routes are computed by RPL. 2058 Both methods may inject routes into the Routing Tables of the 6TiSCH 2059 routers. In either case, each route is associated with a 6TiSCH 2060 topology that can be a RPL Instance topology or a Track. The 6TiSCH 2061 topology is indexed by an Instance ID, in a format that reuses the 2062 RPLInstanceID as defined in RPL [RFC6550]. 2064 Both RPL and PCE rely on shared sources such as policies to define 2065 Global and Local RPLInstanceIDs that can be used by either method. 2066 It is possible for centralized and distributed routing to share the 2067 same topology. Generally they will operate in different slotFrames, 2068 and centralized routes will be used for scheduled traffic and will 2069 have precedence over distributed routes in case of conflict between 2070 the slotFrames.
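To illustrate the kind of state a PCE manipulates when it programs "hard" cells into a node's slotFrame, the following sketch models a schedule as a set of (slotOffset, channelOffset) cells. It is purely illustrative: the class and method names are assumptions made for this sketch and do not correspond to the 6top data model.

   # Illustrative model of a PCE-managed TSCH slotFrame; the names are
   # invented for this sketch and are not the 6top/6TiSCH data model.

   class SlotFrame:
       def __init__(self, length):
           self.length = length      # number of timeSlots per iteration
           self.cells = {}           # (slotOffset, channelOffset) -> cell info

       def add_hard_cell(self, slot_offset, channel_offset, track_id):
           """Install a PCE-owned ('hard') cell; 6top may not move it."""
           key = (slot_offset % self.length, channel_offset)
           if key in self.cells:
               raise ValueError("cell already allocated: %s" % (key,))
           self.cells[key] = {"track": track_id, "hard": True}

       def delete_hard_cell(self, slot_offset, channel_offset):
           self.cells.pop((slot_offset % self.length, channel_offset), None)

   # A PCE could program two cells of Track 7 in a 101-slot slotFrame:
   frame = SlotFrame(length=101)
   frame.add_hard_cell(slot_offset=3, channel_offset=5, track_id=7)
   frame.add_hard_cell(slot_offset=45, channel_offset=11, track_id=7)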
2072 Section "Schedule Management Mechanisms" of the 6TiSCH architecture 2073 describes four paradigms to manage the TSCH schedule of the LLN nodes: 2075 Static Scheduling, neighbor-to-neighbor Scheduling, remote monitoring 2076 and scheduling management, and Hop-by-hop scheduling. The Track 2077 operation for DetNet corresponds to remote monitoring and 2078 scheduling management by a PCE. 2080 The 6top interface document [I-D.ietf-6tisch-6top-interface] 2081 specifies the generic data model that can be used to monitor and 2082 manage resources of the 6top sublayer. Abstract methods are 2083 suggested for use by a management entity in the device. The data 2084 model also enables remote control operations on the 6top sublayer. 2086 [I-D.ietf-6tisch-coap] defines a mapping of the 6top set of 2087 commands, which is described in [I-D.ietf-6tisch-6top-interface], to 2088 CoAP resources. This allows an entity to interact with the 6top 2089 layer of a node that is multiple hops away in a RESTful fashion. 2091 [I-D.ietf-6tisch-coap] also defines a basic set of CoAP resources and 2092 associated RESTful access methods (GET/PUT/POST/DELETE). The payload 2093 (body) of the CoAP messages is encoded using the CBOR format. The 2094 PCE commands are expected to be issued directly as CoAP requests or 2095 to be mapped back and forth into CoAP by a gateway function at the 2096 edge of the 6TiSCH network. For instance, it is possible that a 2097 mapping entity on the backbone transforms a non-CoAP protocol such as 2098 PCEP into the RESTful interfaces that the 6TiSCH devices support. 2099 This architecture will be refined to comply with DetNet 2100 [I-D.finn-detnet-architecture] when the work is formalized. 2102 5.3.4. Track Forwarding 2104 By forwarding, this specification means the per-packet operation that 2105 delivers a packet to a next hop or an upper layer in this 2106 node. Forwarding is based on pre-existing state that was installed 2107 as a result of the routing computation of a Track by a PCE. The 2108 6TiSCH architecture supports three different forwarding models: G-MPLS 2109 Track Forwarding (TF), 6LoWPAN Fragment Forwarding (FF), and IPv6 2110 Forwarding (6F), which is the classical IP operation. The DetNet case 2111 relates to the Track Forwarding operation under the control of a PCE. 2113 A Track is a unidirectional path between a source and a destination. 2114 In a Track cell, the normal operation of IEEE802.15.4 Automatic 2115 Repeat-reQuest (ARQ) usually happens, though the acknowledgment may 2116 be omitted in some cases, for instance if there is no scheduled cell 2117 for a retry. 2119 Track Forwarding is the simplest and fastest of these models. A bundle of cells set 2120 to receive (RX-cells) is uniquely paired to a bundle of cells that 2121 are set to transmit (TX-cells), representing a layer-2 forwarding 2122 state that can be used regardless of the network layer protocol. 2124 This model can effectively be seen as a Generalized Multi-protocol 2125 Label Switching (G-MPLS) operation in that the information used to 2126 switch a frame is not an explicit label, but rather related to other 2127 properties of the way the packet was received, a particular cell in 2128 the case of 6TiSCH. As a result, as long as the TSCH MAC (and 2129 Layer-2 security) accepts a frame, that frame can be switched 2130 regardless of the protocol, whether this is an IPv6 packet, a 6LoWPAN 2131 fragment, or a frame from an alternate protocol such as WirelessHART 2132 or ISA100.11a.
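The paired-bundle behavior described above can be pictured as a label-switching table keyed by the cell on which a frame arrives. The sketch below is a simplified illustration under assumed data structures (the dictionary layout and function name are inventions of this sketch, not the 6top implementation): the RX cell selects the TX bundle, independently of the network-layer protocol carried in the frame.

   # Simplified illustration of Track Forwarding: the RX cell a frame
   # arrives on selects the TX bundle, regardless of the payload protocol.

   # Forwarding state installed by the PCE: RX cell -> list of TX cells.
   track_forwarding_state = {
       # (rx_slotOffset, rx_channelOffset): [(tx_slotOffset, tx_channelOffset), ...]
       (3, 5):   [(10, 2), (11, 2)],   # two TX cells allow one Layer-2 retry
       (45, 11): [(60, 7)],
   }

   def switch_frame(rx_cell, frame):
       """Return the TX bundle for a frame received on rx_cell, or None to
       deliver the frame to the upper layer (no Track state for that cell)."""
       tx_bundle = track_forwarding_state.get(rx_cell)
       if tx_bundle is None:
           return None                # not on a Track: hand to IPv6/6LoWPAN
       return tx_bundle               # queue the frame on these TX cells unchanged

   # A frame received in cell (3, 5) is switched to cells (10, 2) / (11, 2).
   print(switch_frame((3, 5), b"example-frame"))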
2134 A data frame that is forwarded along a Track normally has a 2135 destination MAC address that is set to broadcast - or a multicast 2136 address depending on MAC support. This way, the MAC layer in the 2137 intermediate nodes accepts the incoming frame and 6top switches it 2138 without incurring a change in the MAC header. In the case of 2139 IEEE802.15.4, this means effectively broadcast, so that along the 2140 Track the short address for the destination of the frame is set to 2141 0xFFFF. 2143 A Track is thus formed end-to-end as a succession of paired bundles, 2144 a receive bundle from the previous hop and a transmit bundle to the 2145 next hop along the Track, and a cell in such a bundle belongs to at 2146 most one Track. For a given iteration of the device schedule, the 2147 effective channel of the cell is obtained by adding a pseudo-random 2148 number to the channelOffset of the cell, which results in a rotation 2149 of the frequency that used for transmission. The bundles may be 2150 computed so as to accommodate both variable rates and 2151 retransmissions, so they might not be fully used at a given iteration 2152 of the schedule. The 6TiSCH architecture provides additional means 2153 to avoid waste of cells as well as overflows in the transmit bundle, 2154 as follows: 2156 In one hand, a TX-cell that is not needed for the current iteration 2157 may be reused opportunistically on a per-hop basis for routed 2158 packets. When all of the frame that were received for a given Track 2159 are effectively transmitted, any available TX-cell for that Track can 2160 be reused for upper layer traffic for which the next-hop router 2161 matches the next hop along the Track. In that case, the cell that is 2162 being used is effectively a TX-cell from the Track, but the short 2163 address for the destination is that of the next-hop router. It 2164 results that a frame that is received in a RX-cell of a Track with a 2165 destination MAC address set to this node as opposed to broadcast must 2166 be extracted from the Track and delivered to the upper layer (a frame 2167 with an unrecognized MAC address is dropped at the lower MAC layer 2168 and thus is not received at the 6top sublayer). 2170 On the other hand, it might happen that there are not enough TX-cells 2171 in the transmit bundle to accommodate the Track traffic, for instance 2172 if more retransmissions are needed than provisioned. In that case, 2173 the frame can be placed for transmission in the bundle that is used 2174 for layer-3 traffic towards the next hop along the track as long as 2175 it can be routed by the upper layer, that is, typically, if the frame 2176 transports an IPv6 packet. The MAC address should be set to the 2177 next-hop MAC address to avoid confusion. It results that a frame 2178 that is received over a layer-3 bundle may be in fact associated to a 2179 Track. In a classical IP link such as an Ethernet, off-track traffic 2180 is typically in excess over reservation to be routed along the non- 2181 reserved path based on its QoS setting. But with 6TiSCH, since the 2182 use of the layer-3 bundle may be due to transmission failures, it 2183 makes sense for the receiver to recognize a frame that should be re- 2184 tracked, and to place it back on the appropriate bundle if possible. 2185 A frame should be re-tracked if the Per-Hop-Behavior group indicated 2186 in the Differentiated Services Field in the IPv6 header is set to 2187 Deterministic Forwarding, as discussed in Section 5.4.1. 
A frame is 2188 re-tracked by scheduling it for transmission over the transmit bundle 2189 associated to the Track, with the destination MAC address set to 2190 broadcast. 2192 There are 2 modes for a Track, transport mode and tunnel mode. 2194 5.3.4.1. Transport Mode 2196 In transport mode, the Protocol Data Unit (PDU) is associated with 2197 flow-dependant meta-data that refers uniquely to the Track, so the 2198 6top sublayer can place the frame in the appropriate cell without 2199 ambiguity. In the case of IPv6 traffic, this flow identification is 2200 transported in the Flow Label of the IPv6 header. Associated with 2201 the source IPv6 address, the Flow Label forms a globally unique 2202 identifier for that particular Track that is validated at egress 2203 before restoring the destination MAC address (DMAC) and punting to 2204 the upper layer. 2206 | ^ 2207 +--------------+ | | 2208 | IPv6 | | | 2209 +--------------+ | | 2210 | 6LoWPAN HC | | | 2211 +--------------+ ingress egress 2212 | 6top | sets +----+ +----+ restores 2213 +--------------+ dmac to | | | | dmac to 2214 | TSCH MAC | brdcst | | | | self 2215 +--------------+ | | | | | | 2216 | LLN PHY | +-------+ +--...-----+ +-------+ 2217 +--------------+ 2219 Track Forwarding, Transport Mode 2221 5.3.4.2. Tunnel Mode 2223 In tunnel mode, the frames originate from an arbitrary protocol over 2224 a compatible MAC that may or may not be synchronized with the 6TiSCH 2225 network. An example of this would be a router with a dual radio that 2226 is capable of receiving and sending WirelessHART or ISA100.11a frames 2227 with the second radio, by presenting itself as an access Point or a 2228 Backbone Router, respectively. 2230 In that mode, some entity (e.g. PCE) can coordinate with a 2231 WirelessHART Network Manager or an ISA100.11a System Manager to 2232 specify the flows that are to be transported transparently over the 2233 Track. 2235 +--------------+ 2236 | IPv6 | 2237 +--------------+ 2238 | 6LoWPAN HC | 2239 +--------------+ set restore 2240 | 6top | +dmac+ +dmac+ 2241 +--------------+ to|brdcst to|nexthop 2242 | TSCH MAC | | | | | 2243 +--------------+ | | | | 2244 | LLN PHY | +-------+ +--...-----+ +-------+ 2245 +--------------+ | ingress egress | 2246 | | 2247 +--------------+ | | 2248 | LLN PHY | | | 2249 +--------------+ | | 2250 | TSCH MAC | | | 2251 +--------------+ | dmac = | dmac = 2252 |ISA100/WiHART | | nexthop v nexthop 2253 +--------------+ 2255 Figure 7: Track Forwarding, Tunnel Mode 2257 In that case, the flow information that identifies the Track at the 2258 ingress 6TiSCH router is derived from the RX-cell. The dmac is set 2259 to this node but the flow information indicates that the frame must 2260 be tunneled over a particular Track so the frame is not passed to the 2261 upper layer. Instead, the dmac is forced to broadcast and the frame 2262 is passed to the 6top sublayer for switching. 2264 At the egress 6TiSCH router, the reverse operation occurs. Based on 2265 metadata associated to the Track, the frame is passed to the 2266 appropriate link layer with the destination MAC restored. 2268 5.3.4.3. Tunnel Metadata 2270 Metadata coming with the Track configuration is expected to provide 2271 the destination MAC address of the egress endpoint as well as the 2272 tunnel mode and specific data depending on the mode, for instance a 2273 service access point for frame delivery at egress. If the tunnel 2274 egress point does not have a MAC address that matches the 2275 configuration, the Track installation fails. 
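A minimal sketch of this installation-time check follows; the metadata fields and the function name are assumptions made for illustration, not a defined 6TiSCH interface.

   # Illustrative check at Track installation time: the egress endpoint's
   # MAC address must match the MAC address carried in the tunnel metadata.

   def install_track(track_metadata, egress_mac):
       """Install a Track only if the configured egress MAC matches the
       actual egress endpoint; otherwise the installation fails."""
       if track_metadata["egress_mac"] != egress_mac:
           raise ValueError("Track installation failed: egress MAC mismatch")
       return {
           "mode": track_metadata["mode"],          # "transport" or "tunnel"
           "egress_mac": egress_mac,
           "egress_sap": track_metadata.get("sap"), # delivery SAP for tunnel mode
       }

   metadata = {"mode": "tunnel", "egress_mac": "00:17:0d:00:00:30:4a:11", "sap": 2}
   state = install_track(metadata, egress_mac="00:17:0d:00:00:30:4a:11")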
2277 In transport mode, if the final layer-3 destination is the tunnel 2278 termination, then it is possible that the IPv6 address of the 2279 destination is compressed at the 6LoWPAN sublayer based on the MAC 2280 address. It is thus mandatory at the ingress point to validate that 2281 the MAC address that was used at the 6LoWPAN sublayer for compression 2282 matches that of the tunnel egress point. For that reason, the node 2283 that injects a packet on a Track checks that the destination is 2284 effectively that of the tunnel egress point before it overwrites it 2285 to broadcast. The 6top sublayer at the tunnel egress point reverts 2286 that operation to the MAC address obtained from the tunnel metadata. 2288 5.4. Operations of Interest for DetNet and PCE 2290 In a classical system, the 6TiSCH device does not place the request 2291 for bandwidth between self and another device in the network. 2292 Rather, an Operation Control System invoked through an Human/Machine 2293 Interface (HMI) indicates the Traffic Specification, in particular in 2294 terms of latency and reliability, and the end nodes. With this, the 2295 PCE must compute a Track between the end nodes and provision the 2296 network with per-flow state that describes the per-hop operation for 2297 a given packet, the corresponding timeSlots, and the flow 2298 identification that enables to recognize when a certain packet 2299 belongs to a certain Track, sort out duplicates, etc... 2301 For a static configuration that serves a certain purpose for a long 2302 period of time, it is expected that a node will be provisioned in one 2303 shot with a full schedule, which incorporates the aggregation of its 2304 behavior for multiple Tracks. 6TiSCH expects that the programing of 2305 the schedule will be done over COAP as discussed in 6TiSCH Resource 2306 Management and Interaction using CoAP [I-D.ietf-6tisch-coap]. 2308 But an Hybrid mode may be required as well whereby a single Track is 2309 added, modified, or removed, for instance if it appears that a Track 2310 does not perform as expected for, say, PDR. For that case, the 2311 expectation is that a protocol that flows along a Track (to be), in a 2312 fashion similar to classical Traffic Engineering (TE) [CCAMP], may be 2313 used to update the state in the devices. 6TiSCH provides means for a 2314 device to negotiate a timeSlot with a neighbor, but in general that 2315 flow was not designed and no protocol was selected and it is expected 2316 that DetNet will determine the appropriate end-to-end protocols to be 2317 used in that case. 2319 Operational System and HMI 2321 -+-+-+-+-+-+-+ Northbound -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- 2323 PCE PCE PCE PCE 2325 -+-+-+-+-+-+-+ Southbound -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- 2327 --- 6TiSCH------6TiSCH------6TiSCH------6TiSCH-- 2328 6TiSCH / Device Device Device Device \ 2329 Device- - 6TiSCH 2330 \ 6TiSCH 6TiSCH 6TiSCH 6TiSCH / Device 2331 ----Device------Device------Device------Device-- 2333 Figure 8: Stream Management Entity 2335 5.4.1. Packet Marking and Handling 2337 Section "Packet Marking and Handling" of 2338 [I-D.ietf-6tisch-architecture] describes the packet tagging and 2339 marking that is expected in 6TiSCH networks. 2341 5.4.1.1. Tagging Packets for Flow Identification 2343 For packets that are routed by a PCE along a Track, the tuple formed 2344 by the IPv6 source address and a local RPLInstanceID is tagged in the 2345 packets to identify uniquely the Track and associated transmit bundle 2346 of timeSlots. 
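This flow identification can be pictured as a lookup keyed by the (IPv6 source address, local RPLInstanceID) tuple. The sketch below is illustrative only; the table layout is an assumption of this sketch and the values shown are not a normative packet format.

   # Illustrative lookup of a Track (and its transmit bundle) from the
   # tuple carried in the packet: (IPv6 source address, local RPLInstanceID).

   import ipaddress

   # State installed by the PCE; addresses and IDs are made up for the example.
   track_table = {
       (ipaddress.IPv6Address("2001:db8::a"), 131): {
           "track": 7,
           "tx_bundle": [(10, 2), (11, 2)],
       },
   }

   def classify(src_addr, rpl_instance_id):
       """Return the Track state for a tagged packet, or None if untracked."""
       return track_table.get((ipaddress.IPv6Address(src_addr), rpl_instance_id))

   print(classify("2001:db8::a", 131))   # -> Track 7 and its TX bundle
   print(classify("2001:db8::b", 131))   # -> None (not a Track flow)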
As a result, the tagging that is used for a DetNet flow outside the 6TiSCH LLN MUST be swapped into 6TiSCH formats and back as the packet enters and then leaves the 6TiSCH network.

Note: The method and format used for encoding the RPLInstanceID at 6lo is generalized to all 6TiSCH topological Instances, which includes Tracks.

5.4.1.2.  Replication, Retries and Elimination

6TiSCH expects elimination and replication of packets along a complex Track, but takes no position on how the sequence numbers would be tagged in the packet.

As it stands, 6TiSCH expects that timeSlots corresponding to copies of the same packet along a Track are correlated by configuration, and does not need to process the sequence numbers.

The semantics of the configuration MUST enable correlated timeSlots to be grouped for transmit (and respectively receive) with an 'OR' relation, and an 'AND' relation MUST be configurable between groups.  The semantics are that if the transmit (respectively receive) operation succeeded in one timeSlot of an 'OR' group, then all the other timeSlots in that group are ignored.  If there are at least two groups, the 'AND' relation between the groups indicates that one operation must succeed in each of the groups.

On the transmit side, timeSlots provisioned for retries along the same branch of a Track are placed in the same 'OR' group.  The 'OR' relation indicates that if a transmission is acknowledged, then further transmissions SHOULD NOT be attempted for timeSlots in that group.  There are as many 'OR' groups as there are branches of the Track departing from this node.  Different 'OR' groups are programmed for the purpose of replication, each group corresponding to one branch of the Track.  The 'AND' relation between the groups indicates that transmission over any of the branches MUST be attempted regardless of whether a transmission succeeded in another branch.  It is also possible to place cells to different next-hop routers in the same 'OR' group.  This makes it possible to route along multi-path Tracks, trying one next hop and then another only if sending to the first fails.

On the receive side, all timeSlots are programmed in the same 'OR' group.  Retries of the same copy, as well as converging branches for elimination, are thereby merged, meaning that the first successful reception is enough and that all the other timeSlots can be ignored.

5.4.1.3.  Differentiated Services Per-Hop-Behavior

Additionally, an IP packet that is sent along a Track uses the Differentiated Services Per-Hop-Behavior Group called Deterministic Forwarding, as described in [I-D.svshah-tsvwg-deterministic-forwarding].

5.4.2.  Topology and capabilities

6TiSCH nodes are usually IoT devices, characterized by a very limited amount of memory, just enough buffers to store one or a few IPv6 packets, and limited bandwidth between peers.  As a result, a node will maintain only a small amount of peering information, and will not be able to store many packets waiting to be forwarded.  Peers can be identified through MAC or IPv6 addresses, but a Cryptographically Generated Address (CGA) [RFC3972] may also be used.
Neighbors can be discovered over the radio using mechanisms such as beacons but, though the neighbor information is available in the 6TiSCH interface data model, 6TiSCH does not describe a protocol to proactively push the neighborhood information to a PCE.  This protocol should be described and should operate over CoAP.  The protocol should be able to carry multiple metrics, in particular the same metrics as used for RPL operations [RFC6551].

The energy that the device consumes in sleep, transmit and receive modes can be evaluated and reported.  So can the amount of energy that is stored in the device and the power that can be scavenged from the environment.  The PCE SHOULD be able to compute Tracks that will implement policies on how the energy is consumed, for instance to balance the load between nodes, or to ensure that the energy spent does not exceed the energy scavenged over a period of time, etc.

5.5.  Security Considerations

On top of the classical protection of control signaling that can be expected to support DetNet, it must be noted that 6TiSCH networks operate on limited resources that can be depleted rapidly if an attacker manages to operate a DoS attack on the system, for instance by placing a rogue device in the network, or by obtaining management control and setting up extra paths.

6.  Cellular Radio Use Cases

6.1.  Use Case Description

This use case describes the application of deterministic networking in the context of cellular telecom transport networks.  Important elements include time synchronization, clock distribution, and ways of establishing time-sensitive streams for both Layer-2 and Layer-3 user plane traffic.

6.1.1.  Network Architecture

Figure 9 illustrates a typical 3GPP-defined cellular network architecture, which includes "Fronthaul" and "Midhaul" network segments.  The "Fronthaul" is the network connecting base stations (baseband processing units) to the remote radio heads (antennas).  The "Midhaul" is the network inter-connecting base stations (or small cell sites).

      Y (remote radio heads (antennas))
       \
   Y__  \.--.                 .--.       +------+
      \_(    `.     +---+  _( Back`.     | 3GPP |
   Y-----( Front )--|eNB|--(   Haul  )---| core |
         ( ` .Haul )+---+  (  ` .   )    | netw |
        /`--(___.-'    \    `--(___.-'   +------+
     Y_/      /         \.--.        \
          Y_/         _( Mid`.        \
                     (    Haul )       \
                     (   ` .  )         \
                      `--(___.-'\_______+---+   (small cell sites)
                                    \   |SCe|__Y
                                   +---+ +---+
                               Y___|eNB|___Y
                                   +---+
                                 Y_/    \_Y   ("local" radios)

       Figure 9: Generic 3GPP-based Cellular Network Architecture

The available processing time for Fronthaul networking overhead is limited to the time left after the baseband processing of the radio frame has completed.  For example, in Long Term Evolution (LTE) radio, processing of a radio frame is allocated 3 ms, but typically the processing completes much earlier (<400 us), allowing the remaining time to be used by the Fronthaul network.  This ultimately determines the distance at which the remote radio heads can be located from the base stations (200 us equals roughly 40 km of optical fiber-based transport, so the round trip time is 2*200 us = 400 us).

The remainder of the "maximum delay budget" is consumed by all nodes and buffering between the remote radio head and the baseband processing, plus the distance-incurred delay.
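As a rough, non-normative illustration of the distance arithmetic quoted above, the following sketch reproduces the numbers from the text.  The 5 us/km figure is an assumption of this sketch (it corresponds to a propagation speed of roughly 200,000 km/s in optical fiber) and is not taken from the 3GPP specifications:

   # Rough, non-normative illustration of the Fronthaul distance
   # arithmetic quoted above.  The 5 us/km propagation figure is an
   # assumption of this sketch (~200,000 km/s in optical fiber).

   FIBER_DELAY_US_PER_KM = 5.0

   def fiber_reach_km(one_way_budget_us: float) -> float:
       """Distance reachable if the whole one-way budget were spent on
       fiber propagation alone (no queuing or node processing)."""
       return one_way_budget_us / FIBER_DELAY_US_PER_KM

   print(fiber_reach_km(200))   # ~40 km; round trip contribution 2*200 us = 400 us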
The baseband processing time and the available "delay budget" for the Fronthaul are likely to change in the forthcoming "5G" era due to reduced radio round trip times and other architectural and service requirements [NGMN].

6.1.2.  Time Synchronization Requirements

Fronthaul time synchronization requirements are given by [TS25104], [TS36104], [TS36211], and [TS36133].  These can be summarized for the current 3GPP LTE-based networks as:

Delay Accuracy:
   +-8 ns (i.e. +-1/32 Tc, where Tc is the UMTS Chip time of 1/3.84 MHz), resulting in a round trip accuracy of +-16 ns.  The value is this low in order to meet the 3GPP Timing Alignment Error (TAE) measurement requirements.

Packet Delay Variation:
   Packet Delay Variation (PDV, also known as jitter or Timing Alignment Error) is problematic for Fronthaul networks and must be minimized.  If the transport network cannot guarantee a low enough PDV, then additional buffering has to be introduced at the edges of the network to buffer out the jitter.  Buffering is not desirable as it reduces the total available delay budget.

   *  For multiple input multiple output (MIMO) or TX diversity transmissions, at each carrier frequency, TAE shall not exceed 65 ns (i.e. 1/4 Tc).

   *  For intra-band contiguous carrier aggregation, with or without MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2 Tc).

   *  For intra-band non-contiguous carrier aggregation, with or without MIMO or TX diversity, TAE shall not exceed 260 ns (i.e. one Tc).

   *  For inter-band carrier aggregation, with or without MIMO or TX diversity, TAE shall not exceed 260 ns.

Transport link contribution to radio frequency error:
   +-2 PPB.  This value is considered to be "available" for the Fronthaul link out of the total 50 PPB budget reserved for the radio interface.  Note: the reason that the transport link contributes to radio frequency error is as follows.  The current way of doing Fronthaul is from the radio unit to the remote radio head directly.  The remote radio head is essentially a passive device (without buffering, etc.).  The transport drives the antenna directly by feeding it with samples, and everything the transport adds is introduced to the radio as-is.  So if the transport causes additional frequency error, that shows up immediately on the radio as well.

The above listed time synchronization requirements are difficult to meet with point-to-point connected networks, and more difficult when the network includes multiple hops.  It is expected that networks must include buffering at the ends of the connections, as imposed by the jitter requirements, since trying to meet the jitter requirements in every intermediate node is likely to be too costly.  However, every measure to reduce jitter and delay on the path makes it easier to meet the end-to-end requirements.

In order to meet the timing requirements, both senders and receivers must remain time synchronized, demanding very accurate clock distribution, for example support for IEEE 1588 transparent clocks in every intermediate node.

In cellular networks from the LTE radio era onward, phase synchronization is needed in addition to frequency synchronization ([TS36300], [TS23401]).
6.1.3.  Time-Sensitive Stream Requirements

In addition to the time synchronization requirements listed in Section 6.1.2, the Fronthaul networks assume practically error-free transport.  The maximum bit error rate (BER) has been defined to be 10^-12.  When packetized, that would imply a packet error rate (PER) of 2.4*10^-9 (assuming ~300 byte packets, i.e. roughly 2400 bits per packet, so PER is approximately 2400 * 10^-12).  Retransmitting lost packets and/or using forward error correction (FEC) to circumvent bit errors is practically impossible due to the additional delay incurred.  Using redundant streams for better delivery guarantees is also practically impossible in many cases due to the high bandwidth requirements of Fronthaul networks.  For instance, the current uncompressed CPRI bandwidth expansion ratio is roughly 20:1 compared to the IP layer user payload it carries.  Protection switching is also a candidate, but current technologies for the path switch are too slow.  We do not currently know of a better solution for this issue.

Fronthaul links are assumed to be symmetric, and all Fronthaul streams (i.e. those carrying radio data) have equal priority and cannot delay or pre-empt each other.  This implies that the network must guarantee that each time-sensitive flow meets its schedule.

6.1.4.  Security Considerations

Establishing time-sensitive streams in the network entails reserving networking resources for long periods of time.  It is important that these reservation requests be authenticated to prevent malicious reservation attempts from hostile nodes (or accidental misconfiguration).  This is particularly important in the case where the reservation requests span administrative domains.  Furthermore, the reservation information itself should be digitally signed to reduce the risk of a legitimate node pushing a stale or hostile configuration into another networking node.

6.2.  Cellular Radio Networks Today

Today's Fronthaul networks typically consist of:

o  Dedicated point-to-point fiber connections (the most common case)

o  Proprietary protocols and framings

o  Custom equipment and no real networking

Today's Midhaul and Backhaul networks typically consist of:

o  Mostly normal IP networks, MPLS-TP, etc.

o  Clock distribution and sync using 1588 and SyncE

Telecommunication networks in the cellular domain are already heading towards transport networks where precise time synchronization support is one of the basic building blocks.  While the transport networks themselves have practically transitioned to all-IP packet-based networks to meet the bandwidth and cost requirements, highly accurate clock distribution has become a challenge.

Transport networks in the cellular domain are typically based on Time Division Multiplexing (TDM) and provide frequency synchronization capabilities as a part of the transport media.  Alternatively, other technologies such as the Global Positioning System (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].

Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985] for legacy transport support) have become popular tools to build and manage new all-IP Radio Access Networks (RAN) [I-D.kh-spring-ip-ran-use-case].
Although various timing and synchronization optimizations have already been proposed and implemented, including 1588 PTP enhancements [I-D.ietf-tictoc-1588overmpls] [I-D.mirsky-mpls-residence-time], these solutions are not necessarily sufficient for the forthcoming RAN architectures, nor do they guarantee the more demanding time-synchronization requirements [CPRI].  There are also existing solutions for TDM over IP [RFC5087] [RFC4553] and over Ethernet transports [RFC5086].

6.3.  Cellular Radio Networks Future

We would like to see the following in future Cellular Radio networks:

o  Unified standards-based transport protocols and standard networking equipment that can make use of underlying deterministic link-layer services

o  Unified and standards-based network management systems and protocols in all parts of the network (including Fronthaul)

New radio access network deployment models and architectures may require time-sensitive networking services with strict requirements on other parts of the network that previously were not considered to be packetized at all.  Time and synchronization support are already topical for Backhaul and Midhaul packet networks [MEF], and are becoming a real issue for Fronthaul networks.  Specifically, in Fronthaul networks the timing and synchronization requirements can be extreme for packet-based technologies: for example, on the order of sub +-20 ns packet delay variation (PDV) and a frequency accuracy of +0.002 PPM [Fronthaul].

The actual transport protocols and/or solutions to establish the required transport "circuits" (pinned-down paths) for Fronthaul traffic are still undefined.  Those are likely to include (but are not limited to) solutions directly over Ethernet, over IP, and MPLS/PseudoWire transport.

Even the current time-sensitive networking features may not be sufficient for Fronthaul traffic.  Therefore, having specific profiles that take the requirements of Fronthaul into account is desirable [IEEE8021CM].

The most interesting and important existing work for time-sensitive networking has been done for Ethernet [TSNTG], which specifies the use of the IEEE 1588 Precision Time Protocol (PTP) [IEEE1588] in the context of IEEE 802.1D and IEEE 802.1Q.  While IEEE 802.1AS [IEEE8021AS] specifies a Layer-2 time synchronizing service, other specifications, such as IEEE 1722 [IEEE1722], specify Ethernet-based Layer-2 transport for time-sensitive streams.  New promising work seeks to enable the transport of time-sensitive fronthaul streams in Ethernet bridged networks [IEEE8021CM].  Similarly to IEEE 1722, there is an ongoing standardization effort in the IEEE 1904.3 Task Force to define a Layer-2 transport encapsulation format for transporting radio over Ethernet (RoE) [IEEE19043].

All-IP RANs and the various "haul" networks would benefit from time synchronization and time-sensitive transport services.  Although Ethernet appears to be the unifying technology for the transport, there is still a disconnect when it comes to providing Layer-3 services.  The protocol stack typically has a number of layers below the Ethernet Layer-2 that is visible to the Layer-3 IP transport.
It is not uncommon that on top of the lowest-layer (optical) transport there is a first layer of Ethernet, followed by one or more layers of MPLS, PseudoWires and/or other tunneling protocols, finally carrying the Ethernet layer visible to the user plane IP traffic.  While there are existing technologies, especially in the MPLS/PWE space, to establish circuits through routed and switched networks, there is no way to signal the time synchronization and time-sensitive stream requirements/reservations for Layer-3 flows such that the entire transport stack, including the Ethernet layers that need to be configured, is addressed.

Furthermore, not all "user plane" traffic will be IP.  Therefore, the same solution must also address the use cases where the user plane traffic is yet another layer of Ethernet frames.  There is existing work describing the problem statement [I-D.finn-detnet-problem-statement] and the architecture [I-D.finn-detnet-architecture] for deterministic networking (DetNet) that targets solutions for time-sensitive (IP/transport) streams with deterministic properties over Ethernet-based switched networks.

6.4.  Cellular Radio Networks Asks

A standard for a data plane transport specification which is:

o  Unified among all *hauls

o  Deployed in a highly deterministic network environment

A standard for data flow information models that are:

o  Aware of the time sensitivity and constraints of the target networking environment

o  Aware of underlying deterministic networking services (e.g. on the Ethernet layer)

Mapping the Fronthaul requirements to IETF DetNet [I-D.finn-detnet-architecture] Section 3 "Providing the DetNet Quality of Service", the relevant features are:

o  Zero congestion loss.

o  Pinned-down paths.

7.  Industrial M2M

7.1.  Use Case Description

Industrial Automation in general refers to automation of manufacturing, quality control and material processing.  In this "machine to machine" (M2M) use case we consider machine units on a plant floor which periodically exchange data with upstream or downstream machine modules and/or a supervisory controller within a local area network.

The actors of M2M communication are Programmable Logic Controllers (PLCs).  Communication between PLCs, and between PLCs and the supervisory PLC (S-PLC), is achieved via critical control/data streams, as shown in Figure 10.

                  S (Sensor)
                   \                                  +-----+
           PLC__    \.--.                 .--.     ---| MES |
                \_(     `.             _(    `./      +-----+
           A------( Local )-----------(  L2    )
                  (    Net )          (   Net )       +-------+
                 /`--(___.-'           `--(___.-' ----| S-PLC |
              S_/      / PLC            .--.   /      +-------+
                    A_/              \_(    `.
             (Actuator)              (  Local )
                                     (   Net  )
                                    /`--(___.-'\
                                   /            \  A
                                  S              A

      Figure 10: Current Generic Industrial M2M Network Architecture

This use case focuses on PLC-related communications; communication to Manufacturing Execution Systems (MESs) is not addressed.

This use case covers only critical control/data streams; non-critical traffic between industrial automation applications (such as communication of state, configuration, set-up, and database communication) is adequately served by currently available prioritization techniques.  Such traffic can use up to 80% of the total bandwidth required.
There is also a subset of non-time-critical traffic that must be reliable even though it is not time sensitive.

In this use case the primary need for deterministic networking is to provide end-to-end delivery of M2M messages within specific timing constraints, for example in closed loop automation control.  Today this level of determinism is provided by proprietary networking technologies.  In addition, standard networking technologies are used to connect the local network to remote industrial automation sites, e.g. over an enterprise or metro network which also carries other types of traffic.  Therefore, flows that should be forwarded with deterministic guarantees need to be sustained regardless of the amount of other flows in those networks.

7.2.  Industrial M2M Communication Today

Today, proprietary networks fulfill the timing and availability needs of M2M networks.

The network topologies used today by industrial automation are similar to those used by telecom networks: Daisy Chain, Ring, Hub and Spoke, and Comb (a subset of Daisy Chain).

PLC-related control/data streams are transmitted periodically and carry either a pre-configured payload or a payload configured during runtime.

Some industrial applications require time synchronization at the end nodes.  For such time-coordinated PLCs, an accuracy of 1 microsecond is required.  Even in the case of "non-time-coordinated" PLCs, time synchronization may be needed, e.g. for timestamping of sensor data.

Industrial network scenarios require advanced security solutions.  Many of the current industrial production networks are physically separated.  Preventing critical flows from being leaked outside a domain is handled today by filtering policies that are typically enforced in firewalls.

7.2.1.  Transport Parameters

The Cycle Time defines the frequency of message(s) between industrial actors.  The Cycle Time is application-dependent, in the range of 1 ms - 100 ms for critical control/data streams.

Because industrial applications assume deterministic transport for critical control/data stream parameters (instead of defining latency and delay variation parameters), it is sufficient to fulfill an upper bound on latency (maximum latency).  The underlying networking infrastructure must ensure a maximum end-to-end delivery time of messages in the range of 100 microseconds to 50 milliseconds, depending on the control loop application.

The bandwidth requirements of control/data streams are usually calculated directly from the bytes-per-cycle parameter of the control loop.  For PLC-to-PLC communication one can expect 2 - 32 streams with packet sizes in the range of 100 - 700 bytes.  For S-PLC-to-PLC communication the number of streams is higher, up to 256 streams.  Usually no more than 20% of the available bandwidth is used for critical control/data streams; in today's networks, 1 Gbps links are commonly used (a back-of-the-envelope check of these figures appears at the end of this subsection).

Most PLC control loops are rather tolerant of packet loss; however, critical control/data streams accept no more than one packet loss per consecutive communication cycle (i.e. if a packet gets lost in cycle "n", then the next cycle ("n+1") must be lossless).  After two or more consecutive packet losses, the network may be considered to be "down" by the application.
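The following non-normative sketch is the back-of-the-envelope check referred to above; the worst-case combination of the quoted values is an assumption made only for illustration:

   # Non-normative back-of-the-envelope check of the figures quoted
   # above, combining the worst-case values for illustration only.

   streams         = 32       # PLC-to-PLC streams (upper end of 2 - 32)
   bytes_per_cycle = 700      # packet size (upper end of 100 - 700 bytes)
   cycle_time_s    = 0.001    # fastest quoted Cycle Time (1 ms)
   link_rate_bps   = 1e9      # commonly used 1 Gbps link

   load_bps = streams * bytes_per_cycle * 8 / cycle_time_s
   print(load_bps / 1e6)                    # ~179 Mbps of critical streams
   print(100 * load_bps / link_rate_bps)    # ~18% of the link, below the 20% figure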
As network downtime may impact the whole production system, the required network availability is rather high (99.999%).

Based on the above parameters, we expect that some form of redundancy will be required for M2M communications; however, any individual solution depends on several parameters including cycle time, delivery time, etc.

7.2.2.  Stream Creation and Destruction

In an industrial environment, critical control/data streams are created rather infrequently, on the order of ~10 times per day / week / month.  Most of these critical control/data streams get created at machine startup; however, flexibility is also needed during runtime, for example when adding or removing a machine.  Going forward, as production systems become more flexible, we expect a significant increase in the rate at which streams are created, changed and destroyed.

7.3.  Industrial M2M Future

We would like to see the various proprietary networks replaced with a converged IP-standards-based network with deterministic properties that can satisfy the timing, security and reliability constraints described above.

7.4.  Industrial M2M Asks

o  Converged IP-based network

o  Deterministic behavior (bounded latency and jitter)

o  High availability (presumably through redundancy) (99.999%)

o  Low message delivery time (100 us - 50 ms)

o  Low packet loss (burstless, 0.1-1%)

o  Precise time synchronization accuracy (1 us)

o  Security (e.g. prevent critical flows from being leaked between physically separated networks)

8.  Other Use Cases

8.1.  Introduction

The rapid growth of today's communication systems and their reach into almost all aspects of daily life have led to a great dependency on the services they provide.  Today's communication networks carry applications such as multimedia and peer-to-peer file sharing that require Quality of Service (QoS) guarantees in terms of delay and jitter in order to maintain a certain level of performance.  Meanwhile, mobile wireless communications have become an increasingly important part of modern society over the last years.  A communication network providing hard real-time behavior and high reliability is essential for current and next generation mobile wireless networks, as well as for their bearer networks, in order to meet end-to-end (E2E) performance requirements.

The conventional transport network is IP-based because of bandwidth and cost requirements.  However, guaranteeing delay and jitter becomes a challenge in case of contention, since the service is best effort rather than deterministic.  With ever more rigid demands on latency control in future networks [METIS], deterministic networking [I-D.finn-detnet-architecture] is a promising solution for ultra-low-delay applications and use cases.  Delay-sensitive networking requirements already arise in Midhaul and Backhaul networks supporting LTE and future 5G networks [net5G].  And not only the telecom industry but also other vertical industries have an increasing demand for delay-sensitive communications as automation becomes critical.
More specifically, CoMP techniques, D2D (device-to-device) communication, industrial automation and gaming/media services all depend greatly on low delay communications, as well as on high reliability, to guarantee service performance.  Note that deterministic networking is not the same as low latency; it focuses on the worst-case delay bound over the duration of a given application or service.  It can be argued that without high certainty and an absolute delay guarantee, low delay provisioning is only relative [RFC3393], which is not sufficient for some delay-critical services, since even a single delay violation cannot be tolerated.  Overall, the requirements from vertical industries seem to be well aligned with the expected low latency and highly deterministic performance of future networks.

This section describes several use cases and scenarios with requirements for deterministic delay guarantees within the scope of deterministic networking [I-D.finn-detnet-problem-statement].

8.2.  Critical Delay Requirements

Delay and jitter requirements have been taken into account as a major component of QoS provisioning since the birth of the Internet.  Delay-sensitive networking is of increasing importance for mobile wireless communications, as well as for the many application areas that rely on low delay communications.  Due to the best-effort nature of IP networking, mitigating contention and buffering is the main way to serve delay-sensitive services: more bandwidth is assigned to keep links lightly loaded, in other words to reduce the probability of congestion.  However, keeping links lightly loaded cannot provide a deterministic delay guarantee, and it also has limitations when serving the applications of future communication systems.

Take [METIS], which documents the fundamental challenges as well as the overall technical goals of the 5G mobile and wireless system, as a starting point.  A 5G system should support: 1000 times higher mobile data volume per area, 10 to 100 times higher typical user data rate, 10 to 100 times more connected devices, 10 times longer battery life for low power devices, and 5 times reduced End-to-End (E2E) latency, at cost and energy consumption levels similar to today's systems.  Considering the latency-related part of these requirements: the current LTE network has an E2E latency of less than 20 ms [LTE-Latency], which leads to an E2E latency target of around 5 ms for 5G networks.  It has been argued that fulfilling such a rigid latency demand at similar cost will be most challenging, as the system also requires 100 times the bandwidth and 100 times the number of connected devices.  As a result, simply provisioning redundant bandwidth is no longer an efficient solution, because the bandwidth requirements are higher than ever before.  In addition to bandwidth provisioning, a critical flow within its reserved resources should not be affected by other flows, no matter the load on the network; robust protection of critical flows should not depend on redundant bandwidth allocation.  Deterministic networking techniques at both Layer-2 and Layer-3, using IETF protocol solutions, are promising for serving these scenarios.
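The derivation of the "around 5 ms" figure above can be made explicit with a trivial, non-normative sketch; the numbers are those quoted in the text:

   # Non-normative arithmetic behind the "around 5 ms" E2E latency
   # target: METIS asks for a 5x E2E latency reduction, and the current
   # LTE E2E latency is below 20 ms [LTE-Latency].

   lte_e2e_latency_ms = 20
   reduction_factor   = 5
   print(lte_e2e_latency_ms / reduction_factor)   # 4.0 ms, i.e. "around 5 ms"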
8.3.  Coordinated multipoint processing (CoMP)

In wireless communication systems, Coordinated multipoint processing (CoMP) is considered an effective technique for solving the inter-cell interference problem and improving cell-edge user throughput [CoMP].

8.3.1.  CoMP Architecture

                 +--------------------------+
                 |           CoMP           |
                 +--+--------------------+--+
                    |                    |
             +------+---+       +--------+---+
             |  Uplink  |       |  Downlink  |
             +-----+----+       +--------+---+
                   |                     |
        -------------------     -----------------------
        |         |       |     |          |          |
   +---------+ +----+ +-----+ +------------+ +-----+ +-----+
   |  Joint  | | CS | | DPS | |   Joint    | | CS/ | | DPS |
   |Reception| |    | |     | |Transmission| | CB  | |     |
   +---------+ +----+ +-----+ +------------+ +-----+ +-----+
        |                           |
        |-----------                |-------------
        |          |                |            |
  +------------+ +---------+   +----------+ +------------+
  |   Joint    | |  Soft   |   | Coherent | |    Non-    |
  |Equalization| |Combining|   |    JT    | | Coherent JT|
  +------------+ +---------+   +----------+ +------------+

                Figure 11: Framework of CoMP Technology

As shown in Figure 11, CoMP reception and transmission is a framework in which multiple geographically distributed antenna nodes cooperate to improve the performance of the users served in the common cooperation area.  The design principle of CoMP is to extend the current single-cell-to-multi-UE transmission to multi-cell-to-multi-UE transmission through base station cooperation.  In contrast to the single-cell scenario, CoMP introduces critical issues such as backhaul latency, CSI (Channel State Information) reporting and accuracy, and network complexity.  The first two are very much delay sensitive and are discussed in the next section.

8.3.2.  Delay Sensitivity in CoMP

Since the exchange of signaling between eNBs is an essential feature of CoMP, the backhaul latency is the dominating limitation on CoMP performance.  Generally, Joint Transmission (JT) and Joint Processing (JP) may benefit from coordinating the scheduling (distributed or centralized) of different cells, provided that the signaling exchange between eNBs is limited to 4-10 ms.  For C-RAN the backhaul latency requirement is 250 us, while for D-RAN it is 4-15 ms.  This delay requirement is not only rigid but also absolute, since any uncertainty in delay will degrade the performance significantly.  Note that some operators' transport networks are not built to support Layer-3 transfer in the aggregation layer.  In such cases the signaling is exchanged through the EPC, which means the delay is expected to be larger.  CoMP thus has high requirements on delay and reliability that are not met by current mobile network systems, and it may impact the architecture of the mobile network.

8.4.  Industrial Automation

Traditionally, "industrial automation" refers to automation of manufacturing, quality control and material processing.  The "industrial internet" and "Industry 4.0" [EA12] are becoming hot topics, built on the Internet of Things.  This highly flexible and dynamic approach to engineering and manufacturing will result in many so-called smart approaches such as Smart Factory, Smart Products, Smart Mobility, and Smart Home/Buildings.
Ultra high reliability and robustness of data transmission are clearly a must, especially for closed loop automation control applications, where the delay requirement is below 1 ms and the packet loss requirement is below 10E-9.  These critical requirements on both latency and loss cannot be fulfilled by current 4G communication networks.  Moreover, collaboration between industrial automation at remote campuses and the cellular and fixed networks has to be built on an integrated, cloud-based platform.  Deterministic flows should then be guaranteed regardless of the amount of other traffic in the network; the lack of such a mechanism is the main obstacle to the deployment of industrial automation.

8.5.  Vehicle to Vehicle

V2V (vehicle-to-vehicle) communication has gained more and more attention in the last few years and will keep growing in the future.  Besides short-range direct communication systems, V2V communication also requires wireless cellular networks to cover wide areas and to provide more sophisticated services.  V2V applications in the area of autonomous driving have very stringent latency and reliability requirements: the timely arrival of safety-related information is critical.  In addition, due to the limited processing capability of an individual vehicle, passing information to the cloud can provide additional functions such as video processing, audio recognition or navigation.  All of these requirements call for highly reliable connectivity to the cloud.  At the same time, provisioning low latency communication is one of the main challenges to be overcome, as a result of the high mobility and the high penetration losses caused by the vehicle itself.  Data transmission with latency below 5 ms and high reliability, with a PER below 10E-6, is therefore demanded, and it can benefit from the deployment of deterministic networking with high reliability.

8.6.  Gaming, Media and Virtual Reality

Online gaming and cloud gaming dominate the gaming market, since they allow multiple players to play together, which is more challenging and competitive.  When connected via the current Internet, latency can be a big issue that degrades the end users' experience.  There are different types of games, and FPS (First Person Shooter) gaming is considered the most latency-sensitive online gaming, due to its high requirements on timing precision and on computing the position of moving targets.  Virtual reality is also receiving more interest than ever before as a novel gaming experience.  Here delay is critical to interaction in the virtual world: disagreement between what is seen and what is felt can cause motion sickness and affect what happens in the game.  Supporting fast, real-time and reliable communications at the PHY/MAC layer, the network layer and the application layer is the main bottleneck for such use cases.  Media content delivery has been, and will become an even more, important use of the Internet.  Not only the high bandwidth demand but also the critical delay and jitter requirements have to be taken into account to meet user demand.  For smooth video and audio, delay and jitter have to be guaranteed to avoid interruptions, which are the killer of any online media-on-demand service.
With 4K and 8K video arriving in the near future, the delay guarantee becomes more challenging than ever before.  4K/8K UHD video service requires 6 Gbps - 100 Gbps for uncompressed video, with compressed video starting from 60 Mbps.  The delay requirement is 100 ms, while some specific interactive applications may require a 10 ms delay [UHD-video].

9.  Use Case Common Elements

Looking at the use cases collectively, the following common desires for the DetNet-based networks of the future emerge:

o  Open standards-based network (replace various proprietary networks, reduce cost, create multi-vendor market)

o  Centrally administered (though such administration may be distributed for scale and resiliency)

o  Integrates L2 (bridged) and L3 (routed) environments (independent of the Link layer, e.g. can be used with Ethernet, 6TiSCH, etc.)

o  Carries both deterministic and best-effort traffic (guaranteed end-to-end delivery of deterministic flows, deterministic flows isolated from each other and from best-effort traffic congestion, unused deterministic BW available to best-effort traffic)

o  Ability to add or remove systems from the network with minimal, bounded service interruption (applications include replacement of failed devices as well as plug and play)

o  Uses standardized data flow information models capable of expressing deterministic properties (models express device capabilities and flow properties; protocols for pushing models from controller to devices and from devices to controller)

o  Scalable size (long distances (many km) and short distances (within a single machine), many hops (radio repeaters, microwave links, fiber links...) and short hops (single machine))

o  Scalable timing parameters and accuracy (bounded latency, guaranteed worst case maximum, minimum; low latency, e.g. control loops may be less than 1 ms, but larger for wide area networks)

o  High availability (99.9999 percent up time requested, but may be up to twelve 9s)

o  Reliability, redundancy (lives at stake)

o  Security (from failures, attackers, misbehaving devices - sensitive to both packet content and arrival time)

10.  Acknowledgments

10.1.  Pro Audio

This section was derived from draft-gunther-detnet-proaudio-req-01.

The editors would like to acknowledge the help of the following individuals and the companies they represent:

Jeff Koftinoff, Meyer Sound

Jouni Korhonen, Associate Technical Director, Broadcom

Pascal Thubert, CTAO, Cisco

Kieran Tyrrell, Sienda New Media Technologies GmbH

10.2.  Utility Telecom

This section was derived from draft-wetterwald-detnet-utilities-reqs-02.

Faramarz Maghsoodlou, Ph.D., IoT Connected Industries and Energy Practice, Cisco

Pascal Thubert, CTAO, Cisco

10.3.  Building Automation Systems

This section was derived from draft-bas-usecase-detnet-00.

10.4.  Wireless for Industrial

This section was derived from draft-thubert-6tisch-4detnet-01.

This specification derives from the 6TiSCH architecture, which is the result of multiple interactions, in particular during the 6TiSCH (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at the IETF.
3179 The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier 3180 Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael 3181 Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon, 3182 Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey, 3183 Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria 3184 Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation 3185 and various contributions. 3187 10.5. Cellular Radio 3189 This section was derived from draft-korhonen-detnet-telreq-00. 3191 10.6. Industrial M2M 3193 The authors would like to thank Feng Chen and Marcel Kiessling for 3194 their comments and suggestions. 3196 10.7. Other 3198 This section was derived from draft-zha-detnet-use-case-00. 3200 This document has benefited from reviews, suggestions, comments and 3201 proposed text provided by the following members, listed in 3202 alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oilver 3203 Huang. 3205 11. Informative References 3207 [ACE] IETF, "Authentication and Authorization for Constrained 3208 Environments", . 3211 [bacnetip] 3212 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP", 3213 January 1999. 3215 [CCAMP] IETF, "Common Control and Measurement Plane", 3216 . 3218 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND 3219 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_ 3220 and_Enhancement_v2.0, March 2015, 3221 . 3224 [CONTENT_PROTECTION] 3225 Olsen, D., "1722a Content Protection", 2012, 3226 . 3229 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI); 3230 Interface Specification", CPRI Specification V6.1, July 3231 2014, . 3234 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification, 3235 Version 1.2", 2012, . 3237 [DICE] IETF, "DTLS In Constrained Environments", 3238 . 3240 [EA12] Evans, P. and M. Annunziata, "Industrial Internet: Pushing 3241 the Boundaries of Minds and Machines", November 2012. 3243 [ESPN_DC2] 3244 Daley, D., "ESPN's DC2 Scales AVB Large", 2014, 3245 . 3248 [flnet] Japan Electrical Manufacturers' Association, "JEMA 1479 - 3249 English Edition", September 2012. 3251 [Fronthaul] 3252 Chen, D. and T. Mustala, "Ethernet Fronthaul 3253 Considerations", IEEE 1904.3, February 2015, 3254 . 3257 [HART] www.hartcomm.org, "Highway Addressable remote Transducer, 3258 a group of specifications for industrial process and 3259 control devices administered by the HART Foundation". 3261 [I-D.finn-detnet-architecture] 3262 Finn, N., Thubert, P., and M. Teener, "Deterministic 3263 Networking Architecture", draft-finn-detnet- 3264 architecture-02 (work in progress), November 2015. 3266 [I-D.finn-detnet-problem-statement] 3267 Finn, N. and P. Thubert, "Deterministic Networking Problem 3268 Statement", draft-finn-detnet-problem-statement-04 (work 3269 in progress), October 2015. 3271 [I-D.ietf-6tisch-6top-interface] 3272 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer 3273 (6top) Interface", draft-ietf-6tisch-6top-interface-04 3274 (work in progress), July 2015. 3276 [I-D.ietf-6tisch-architecture] 3277 Thubert, P., "An Architecture for IPv6 over the TSCH mode 3278 of IEEE 802.15.4", draft-ietf-6tisch-architecture-09 (work 3279 in progress), November 2015. 3281 [I-D.ietf-6tisch-coap] 3282 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and 3283 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work 3284 in progress), March 2015. 3286 [I-D.ietf-6tisch-terminology] 3287 Palattella, M., Thubert, P., Watteyne, T., and Q. 
Wang, 3288 "Terminology in IPv6 over the TSCH mode of IEEE 3289 802.15.4e", draft-ietf-6tisch-terminology-06 (work in 3290 progress), November 2015. 3292 [I-D.ietf-ipv6-multilink-subnets] 3293 Thaler, D. and C. Huitema, "Multi-link Subnet Support in 3294 IPv6", draft-ietf-ipv6-multilink-subnets-00 (work in 3295 progress), July 2002. 3297 [I-D.ietf-roll-rpl-industrial-applicability] 3298 Phinney, T., Thubert, P., and R. Assimiti, "RPL 3299 applicability in industrial networks", draft-ietf-roll- 3300 rpl-industrial-applicability-02 (work in progress), 3301 October 2013. 3303 [I-D.ietf-tictoc-1588overmpls] 3304 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L. 3305 Montini, "Transporting Timing messages over MPLS 3306 Networks", draft-ietf-tictoc-1588overmpls-07 (work in 3307 progress), October 2015. 3309 [I-D.kh-spring-ip-ran-use-case] 3310 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing 3311 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02 3312 (work in progress), November 2014. 3314 [I-D.mirsky-mpls-residence-time] 3315 Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S., 3316 and S. Vainshtein, "Residence Time Measurement in MPLS 3317 network", draft-mirsky-mpls-residence-time-07 (work in 3318 progress), July 2015. 3320 [I-D.svshah-tsvwg-deterministic-forwarding] 3321 Shah, S. and P. Thubert, "Deterministic Forwarding PHB", 3322 draft-svshah-tsvwg-deterministic-forwarding-04 (work in 3323 progress), August 2015. 3325 [I-D.thubert-6lowpan-backbone-router] 3326 Thubert, P., "6LoWPAN Backbone Router", draft-thubert- 3327 6lowpan-backbone-router-03 (work in progress), February 3328 2013. 3330 [I-D.wang-6tisch-6top-sublayer] 3331 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer 3332 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in 3333 progress), November 2015. 3335 [IEC61850-90-12] 3336 TC57 WG10, IEC., "IEC 61850-90-12 TR: Communication 3337 networks and systems for power utility automation - Part 3338 90-12: Wide area network engineering guidelines", 2015. 3340 [IEC62439-3:2012] 3341 TC65, IEC., "IEC 62439-3: Industrial communication 3342 networks - High availability automation networks - Part 3: 3343 Parallel Redundancy Protocol (PRP) and High-availability 3344 Seamless Redundancy (HSR)", 2012. 3346 [IEEE1588] 3347 IEEE, "IEEE Standard for a Precision Clock Synchronization 3348 Protocol for Networked Measurement and Control Systems", 3349 IEEE Std 1588-2008, 2008, 3350 . 3353 [IEEE1722] 3354 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport 3355 Protocol for Time Sensitive Applications in a Bridged 3356 Local Area Network", IEEE Std 1722-2011, 2011, 3357 . 3360 [IEEE19043] 3361 IEEE Standards Association, "IEEE 1904.3 TF", IEEE 1904.3, 3362 2015, . 3364 [IEEE802.1TSNTG] 3365 IEEE Standards Association, "IEEE 802.1 Time-Sensitive 3366 Networks Task Group", March 2013, 3367 . 3369 [IEEE802154] 3370 IEEE standard for Information Technology, "IEEE std. 3371 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC) 3372 and Physical Layer (PHY) Specifications for Low-Rate 3373 Wireless Personal Area Networks". 3375 [IEEE802154e] 3376 IEEE standard for Information Technology, "IEEE standard 3377 for Information Technology, IEEE std. 802.15.4, Part. 3378 15.4: Wireless Medium Access Control (MAC) and Physical 3379 Layer (PHY) Specifications for Low-Rate Wireless Personal 3380 Area Networks, June 2011 as amended by IEEE std. 3381 802.15.4e, Part. 15.4: Low-Rate Wireless Personal Area 3382 Networks (LR-WPANs) Amendment 1: MAC sublayer", April 3383 2012. 
3385 [IEEE8021AS] 3386 IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)", 3387 IEEE 802.1AS-2001, 2011, 3388 . 3391 [IEEE8021CM] 3392 Farkas, J., "Time-Sensitive Networking for Fronthaul", 3393 Unapproved PAR, PAR for a New IEEE Standard; 3394 IEEE P802.1CM, April 2015, 3395 . 3398 [IEEE8021TSN] 3399 IEEE 802.1, "The charter of the TG is to provide the 3400 specifications that will allow time-synchronized low 3401 latency streaming services through 802 networks.", 2016, 3402 . 3404 [IETFDetNet] 3405 IETF, "Charter for IETF DetNet Working Group", 2015, 3406 . 3408 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation", 3409 . 3411 [ISA100.11a] 3412 ISA/ANSI, "Wireless Systems for Industrial Automation: 3413 Process Control and Related Applications - ISA100.11a-2011 3414 - IEC 62734", 2011, . 3417 [ISO7240-16] 3418 ISO, "ISO 7240-16:2007 Fire detection and alarm systems -- 3419 Part 16: Sound system control and indicating equipment", 3420 2007, . 3423 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006. 3425 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0", 3426 1994. 3428 [LTE-Latency] 3429 Johnston, S., "LTE Latency: How does it compare to other 3430 technologies", March 2014, 3431 . 3434 [MEF] MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells", 3435 MEF 22.1.1, July 2014, 3436 . 3439 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and 3440 wireless system", ICT-317669-METIS/D1.1 ICT- 3441 317669-METIS/D1.1, April 2013, . 3444 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL 3445 SPECIFICATION V1.1b", December 2006. 3447 [net5G] Ericsson, "5G Radio Access, Challenges for 2020 and 3448 Beyond", Ericsson white paper wp-5g, June 2013, 3449 . 3451 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0, 3452 February 2015, . 3455 [PCE] IETF, "Path Computation Element", 3456 . 3458 [profibus] 3459 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001. 3461 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 3462 Requirement Levels", BCP 14, RFC 2119, 3463 DOI 10.17487/RFC2119, March 1997, 3464 . 3466 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6 3467 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460, 3468 December 1998, . 3470 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, 3471 "Definition of the Differentiated Services Field (DS 3472 Field) in the IPv4 and IPv6 Headers", RFC 2474, 3473 DOI 10.17487/RFC2474, December 1998, 3474 . 3476 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 3477 Label Switching Architecture", RFC 3031, 3478 DOI 10.17487/RFC3031, January 2001, 3479 . 3481 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 3482 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 3483 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001, 3484 . 3486 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation 3487 Metric for IP Performance Metrics (IPPM)", RFC 3393, 3488 DOI 10.17487/RFC3393, November 2002, 3489 . 3491 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between 3492 Information Models and Data Models", RFC 3444, 3493 DOI 10.17487/RFC3444, January 2003, 3494 . 3496 [RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)", 3497 RFC 3972, DOI 10.17487/RFC3972, March 2005, 3498 . 3500 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation 3501 Edge-to-Edge (PWE3) Architecture", RFC 3985, 3502 DOI 10.17487/RFC3985, March 2005, 3503 . 3505 [RFC4291] Hinden, R. and S. 
Deering, "IP Version 6 Addressing 3506 Architecture", RFC 4291, DOI 10.17487/RFC4291, February 3507 2006, . 3509 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure- 3510 Agnostic Time Division Multiplexing (TDM) over Packet 3511 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006, 3512 . 3514 [RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903, 3515 DOI 10.17487/RFC4903, June 2007, 3516 . 3518 [RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6 3519 over Low-Power Wireless Personal Area Networks (6LoWPANs): 3520 Overview, Assumptions, Problem Statement, and Goals", 3521 RFC 4919, DOI 10.17487/RFC4919, August 2007, 3522 . 3524 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and 3525 P. Pate, "Structure-Aware Time Division Multiplexed (TDM) 3526 Circuit Emulation Service over Packet Switched Network 3527 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007, 3528 . 3530 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi, 3531 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087, 3532 DOI 10.17487/RFC5087, December 2007, 3533 . 3535 [RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6 3536 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282, 3537 DOI 10.17487/RFC6282, September 2011, 3538 . 3540 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J., 3541 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur, 3542 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for 3543 Low-Power and Lossy Networks", RFC 6550, 3544 DOI 10.17487/RFC6550, March 2012, 3545 . 3547 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N., 3548 and D. Barthel, "Routing Metrics Used for Path Calculation 3549 in Low-Power and Lossy Networks", RFC 6551, 3550 DOI 10.17487/RFC6551, March 2012, 3551 . 3553 [RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C. 3554 Bormann, "Neighbor Discovery Optimization for IPv6 over 3555 Low-Power Wireless Personal Area Networks (6LoWPANs)", 3556 RFC 6775, DOI 10.17487/RFC6775, November 2012, 3557 . 3559 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using 3560 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the 3561 Internet of Things (IoT): Problem Statement", RFC 7554, 3562 DOI 10.17487/RFC7554, May 2015, 3563 . 3565 [SRP_LATENCY] 3566 Gunther, C., "Specifying SRP Latency", 2014, 3567 . 3570 [STUDIO_IP] 3571 Mace, G., "IP Networked Studio Infrastructure for 3572 Synchronized & Real-Time Multimedia Transmissions", 2007, 3573 . 3576 [SyncE] ITU-T, "G.8261 : Timing and synchronization aspects in 3577 packet networks", Recommendation G.8261, August 2013, 3578 . 3580 [TEAS] IETF, "Traffic Engineering Architecture and Signaling", 3581 . 3583 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements 3584 for Evolved Universal Terrestrial Radio Access Network 3585 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013. 3587 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception 3588 (FDD)", 3GPP TS 25.104 3.14.0, March 2007. 3590 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access 3591 (E-UTRA); Base Station (BS) radio transmission and 3592 reception", 3GPP TS 36.104 10.11.0, July 2013. 3594 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access 3595 (E-UTRA); Requirements for support of radio resource 3596 management", 3GPP TS 36.133 12.7.0, April 2015. 3598 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access 3599 (E-UTRA); Physical channels and modulation", 3GPP 3600 TS 36.211 10.7.0, March 2013. 
3602 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA) 3603 and Evolved Universal Terrestrial Radio Access Network 3604 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300 3605 10.11.0, September 2013. 3607 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive 3608 Networks Task Group", 2013, 3609 . 3611 [UHD-video] 3612 Holub, P., "Ultra-High Definition Videos and Their 3613 Applications over the Network", The 7th International 3614 Symposium on VICTORIES Project PetrHolub_presentation, 3615 October 2014, . 3618 [WirelessHART] 3619 www.hartcomm.org, "Industrial Communication Networks - 3620 Wireless Communication Network and Communication Profiles 3621 - WirelessHART - IEC 62591", 2010. 3623 Authors' Addresses 3624 Ethan Grossman (editor) 3625 Dolby Laboratories, Inc. 3626 1275 Market Street 3627 San Francisco, CA 94103 3628 USA 3630 Phone: +1 415 645 4726 3631 Email: ethan.grossman@dolby.com 3632 URI: http://www.dolby.com 3634 Craig Gunther 3635 Harman International 3636 10653 South River Front Parkway 3637 South Jordan, UT 84095 3638 USA 3640 Phone: +1 801 568-7675 3641 Email: craig.gunther@harman.com 3642 URI: http://www.harman.com 3644 Pascal Thubert 3645 Cisco Systems, Inc 3646 Building D 3647 45 Allee des Ormes - BP1200 3648 MOUGINS - Sophia Antipolis 06254 3649 FRANCE 3651 Phone: +33 497 23 26 34 3652 Email: pthubert@cisco.com 3654 Patrick Wetterwald 3655 Cisco Systems 3656 45 Allees des Ormes 3657 Mougins 06250 3658 FRANCE 3660 Phone: +33 4 97 23 26 36 3661 Email: pwetterw@cisco.com 3662 Jean Raymond 3663 Hydro-Quebec 3664 1500 University 3665 Montreal H3A3S7 3666 Canada 3668 Phone: +1 514 840 3000 3669 Email: raymond.jean@hydro.qc.ca 3671 Jouni Korhonen 3672 Broadcom Corporation 3673 3151 Zanker Road 3674 San Jose, CA 95134 3675 USA 3677 Email: jouni.nospam@gmail.com 3679 Yu Kaneko 3680 Toshiba 3681 1 Komukai-Toshiba-cho, Saiwai-ku, Kasasaki-shi 3682 Kanagawa, Japan 3684 Email: yu1.kaneko@toshiba.co.jp 3686 Subir Das 3687 Applied Communication Sciences 3688 150 Mount Airy Road, Basking Ridge 3689 New Jersey, 07920, USA 3691 Email: sdas@appcomsci.com 3693 Yiyong Zha 3694 Huawei Technologies 3696 Email: zhayiyong@huawei.com 3698 Balazs Varga 3699 Ericsson 3700 Konyves Kalman krt. 11/B 3701 Budapest 1097 3702 Hungary 3704 Email: balazs.a.varga@ericsson.com 3705 Janos Farkas 3706 Ericsson 3707 Konyves Kalman krt. 11/B 3708 Budapest 1097 3709 Hungary 3711 Email: janos.farkas@ericsson.com 3713 Franz-Josef Goetz 3714 Siemens 3715 Gleiwitzerstr. 555 3716 Nurnberg 90475 3717 Germany 3719 Email: franz-josef.goetz@siemens.com 3721 Juergen Schmitt 3722 Siemens 3723 Gleiwitzerstr. 555 3724 Nurnberg 90475 3725 Germany 3727 Email: juergen.jues.schmitt@siemens.com