Internet Engineering Task Force                        E. Grossman, Ed.
Internet-Draft                                                    DOLBY
Intended status: Informational                               C. Gunther
Expires: August 12, 2016                                         HARMAN
                                                             P. Thubert
                                                          P. Wetterwald
                                                                  CISCO
                                                             J. Raymond
                                                           HYDRO-QUEBEC
                                                            J. Korhonen
                                                               BROADCOM
                                                              Y. Kaneko
                                                                Toshiba
                                                                 S. Das
                                         Applied Communication Sciences
                                                                 Y. Zha
                                                                 HUAWEI
                                                               B. Varga
                                                              J. Farkas
                                                               Ericsson
                                                               F. Goetz
                                                             J. Schmitt
                                                                Siemens
                                                       February 9, 2016

                  Deterministic Networking Use Cases
                     draft-ietf-detnet-use-cases-01

Abstract

   This draft documents requirements in several diverse industries to
   establish multi-hop paths for characterized flows with deterministic
   properties.  In this context, deterministic implies that streams
   providing guaranteed bandwidth and latency can be established from
   either a Layer 2 or Layer 3 (IP) interface, and can co-exist on an
   IP network with best-effort traffic.

   Additional requirements include optional redundant paths, very high
   reliability paths, time synchronization, and clock distribution.
   Industries considered include wireless for industrial applications,
   professional audio, electrical utilities, building automation
   systems, radio/mobile access networks, automotive, and gaming.

   For each case, this document will identify the application, identify
   representative solutions used today, and describe what new uses an
   IETF DetNet solution may enable.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 12, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  4
   2.  Pro Audio Use Cases . . . . . . . . . . . . . . . . . . . . .  5
     2.1.  Introduction  . . . . . . . . . . . . . . . . . . . . . .  5
     2.2.  Fundamental Stream Requirements . . . . . . . . . . . . .  6
       2.2.1.  Guaranteed Bandwidth  . . . . . . . . . . . . . . . .  6
       2.2.2.  Bounded and Consistent Latency  . . . . . . . . . . .
7 89 2.2.2.1. Optimizations . . . . . . . . . . . . . . . . . . 8 90 2.3. Additional Stream Requirements . . . . . . . . . . . . . 9 91 2.3.1. Deterministic Time to Establish Streaming . . . . . . 9 92 2.3.2. Use of Unused Reservations by Best-Effort Traffic . . 9 93 2.3.3. Layer 3 Interconnecting Layer 2 Islands . . . . . . . 10 94 2.3.4. Secure Transmission . . . . . . . . . . . . . . . . . 10 95 2.3.5. Redundant Paths . . . . . . . . . . . . . . . . . . . 10 96 2.3.6. Link Aggregation . . . . . . . . . . . . . . . . . . 10 97 2.3.7. Traffic Segregation . . . . . . . . . . . . . . . . . 11 98 2.3.7.1. Packet Forwarding Rules, VLANs and Subnets . . . 11 99 2.3.7.2. Multicast Addressing (IPv4 and IPv6) . . . . . . 11 100 2.4. Integration of Reserved Streams into IT Networks . . . . 12 101 2.5. Security Considerations . . . . . . . . . . . . . . . . . 12 102 2.5.1. Denial of Service . . . . . . . . . . . . . . . . . . 12 103 2.5.2. Control Protocols . . . . . . . . . . . . . . . . . . 12 104 2.6. A State-of-the-Art Broadcast Installation Hits Technology 105 Limits . . . . . . . . . . . . . . . . . . . . . . . . . 13 106 2.7. Acknowledgements . . . . . . . . . . . . . . . . . . . . 13 107 3. Utility Telecom Use Cases . . . . . . . . . . . . . . . . . . 13 108 3.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 13 109 3.2. Telecommunications Trends and General telecommunications 110 Requirements . . . . . . . . . . . . . . . . . . . . . . 15 111 3.2.1. General Telecommunications Requirements . . . . . . . 15 112 3.2.1.1. Migration to Packet-Switched Network . . . . . . 16 113 3.2.2. Applications, Use cases and traffic patterns . . . . 17 114 3.2.2.1. Transmission use cases . . . . . . . . . . . . . 17 115 3.2.2.2. Distribution use case . . . . . . . . . . . . . . 26 116 3.2.2.3. Generation use case . . . . . . . . . . . . . . . 29 117 3.2.3. Specific Network topologies of Smart Grid 118 Applications . . . . . . . . . . . . . . . . . . . . 30 119 3.2.4. Precision Time Protocol . . . . . . . . . . . . . . . 31 120 3.3. IANA Considerations . . . . . . . . . . . . . . . . . . . 32 121 3.4. Security Considerations . . . . . . . . . . . . . . . . . 32 122 3.4.1. Current Practices and Their Limitations . . . . . . . 32 123 3.4.2. Security Trends in Utility Networks . . . . . . . . . 34 124 3.5. Acknowledgements . . . . . . . . . . . . . . . . . . . . 35 125 4. Building Automation Systems Use Cases . . . . . . . . . . . . 35 126 4.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 36 127 4.2. BAS Functionality . . . . . . . . . . . . . . . . . . . . 36 128 4.3. BAS Architecture . . . . . . . . . . . . . . . . . . . . 37 129 4.4. Deployment Model . . . . . . . . . . . . . . . . . . . . 39 130 4.5. Use cases and Field Network Requirements . . . . . . . . 40 131 4.5.1. Environmental Monitoring . . . . . . . . . . . . . . 41 132 4.5.2. Fire Detection . . . . . . . . . . . . . . . . . . . 41 133 4.5.3. Feedback Control . . . . . . . . . . . . . . . . . . 42 134 4.6. Security Considerations . . . . . . . . . . . . . . . . . 43 135 5. Wireless for Industrial Use Cases . . . . . . . . . . . . . . 44 136 5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 44 137 5.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 45 138 5.3. 6TiSCH Overview . . . . . . . . . . . . . . . . . . . . . 45 139 5.3.1. TSCH and 6top . . . . . . . . . . . . . . . . . . . . 48 140 5.3.2. SlotFrames and Priorities . . . . . . . . . . . . . . 48 141 5.3.3. Schedule Management by a PCE . . . . . . 
. . . . . . 48 142 5.3.4. Track Forwarding . . . . . . . . . . . . . . . . . . 49 143 5.3.4.1. Transport Mode . . . . . . . . . . . . . . . . . 51 144 5.3.4.2. Tunnel Mode . . . . . . . . . . . . . . . . . . . 52 145 5.3.4.3. Tunnel Metadata . . . . . . . . . . . . . . . . . 53 146 5.4. Operations of Interest for DetNet and PCE . . . . . . . . 54 147 5.4.1. Packet Marking and Handling . . . . . . . . . . . . . 55 148 5.4.1.1. Tagging Packets for Flow Identification . . . . . 55 149 5.4.1.2. Replication, Retries and Elimination . . . . . . 55 150 5.4.1.3. Differentiated Services Per-Hop-Behavior . . . . 56 151 5.4.2. Topology and capabilities . . . . . . . . . . . . . . 56 152 5.5. Security Considerations . . . . . . . . . . . . . . . . . 57 153 5.6. Acknowledgments . . . . . . . . . . . . . . . . . . . . . 57 154 6. Cellular Radio Use Cases . . . . . . . . . . . . . . . . . . 57 155 6.1. Introduction and background . . . . . . . . . . . . . . . 57 156 6.2. Network architecture . . . . . . . . . . . . . . . . . . 61 157 6.3. Time synchronization requirements . . . . . . . . . . . . 62 158 6.4. Time-sensitive stream requirements . . . . . . . . . . . 63 159 6.5. Security considerations . . . . . . . . . . . . . . . . . 64 160 7. Industrial M2M . . . . . . . . . . . . . . . . . . . . . . . 64 161 7.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 65 162 7.2. Terminology and Definitions . . . . . . . . . . . . . . . 65 163 7.3. Machine to Machine communication over IP networks . . . . 65 164 7.4. Machine to Machine communication requirements . . . . . . 66 165 7.4.1. Transport parameters . . . . . . . . . . . . . . . . 67 166 7.4.2. Flow maintenance . . . . . . . . . . . . . . . . . . 67 167 7.5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 67 168 7.6. Security Considerations . . . . . . . . . . . . . . . . . 68 169 7.7. Acknowledgements . . . . . . . . . . . . . . . . . . . . 68 170 8. Other Use Cases . . . . . . . . . . . . . . . . . . . . . . . 68 171 8.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 68 172 8.2. Critical Delay Requirements . . . . . . . . . . . . . . . 69 173 8.3. Coordinated multipoint processing (CoMP) . . . . . . . . 70 174 8.3.1. CoMP Architecture . . . . . . . . . . . . . . . . . . 70 175 8.3.2. Delay Sensitivity in CoMP . . . . . . . . . . . . . . 71 176 8.4. Industrial Automation . . . . . . . . . . . . . . . . . . 71 177 8.5. Vehicle to Vehicle . . . . . . . . . . . . . . . . . . . 71 178 8.6. Gaming, Media and Virtual Reality . . . . . . . . . . . . 72 179 9. Use Case Common Elements . . . . . . . . . . . . . . . . . . 72 180 10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 73 181 11. Informative References . . . . . . . . . . . . . . . . . . . 73 182 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 82 184 1. Introduction 186 This draft presents use cases from diverse industries which have in 187 common a need for deterministic streams, but which also differ 188 notably in their network topologies and specific desired behavior. 189 Together, they provide broad industry context for DetNet and a 190 yardstick against which proposed DetNet designs can be measured (to 191 what extent does a proposed design satisfy these various use cases?) 192 For DetNet, use cases explicitly do not define requirements; The 193 DetNet WG will consider the use cases, decide which elements are in 194 scope for DetNet, and the results will be incorporated into future 195 drafts. 
Similarly, the DetNet use case draft explicitly does not 196 suggest any specific design, architecture or protocols, which will be 197 topics of future drafts. 199 We present for each use case the answers to the following questions: 201 o What is the use case? 203 o How is it addressed today? 205 o How would you like it to be addressed in the future? 207 o What do you want the IETF to deliver? 209 The level of detail in each use case should be sufficient to express 210 the relevant elements of the use case, but not more. 212 At the end we consider the use cases collectively, and examine the 213 most significant goals they have in common. 215 2. Pro Audio Use Cases 217 (This section was derived from draft-gunther-detnet-proaudio-req-01) 219 2.1. Introduction 221 The professional audio and video industry includes music and film 222 content creation, broadcast, cinema, and live exposition as well as 223 public address, media and emergency systems at large venues 224 (airports, stadiums, churches, theme parks). These industries have 225 already gone through the transition of audio and video signals from 226 analog to digital, however the interconnect systems remain primarily 227 point-to-point with a single (or small number of) signals per link, 228 interconnected with purpose-built hardware. 230 These industries are now attempting to transition to packet based 231 infrastructure for distributing audio and video in order to reduce 232 cost, increase routing flexibility, and integrate with existing IT 233 infrastructure. 235 However, there are several requirements for making a network the 236 primary infrastructure for audio and video which are not met by 237 todays networks and these are our concern in this draft. 239 The principal requirement is that pro audio and video applications 240 become able to establish streams that provide guaranteed (bounded) 241 bandwidth and latency from the Layer 3 (IP) interface. Such streams 242 can be created today within standards-based layer 2 islands however 243 these are not sufficient to enable effective distribution over wider 244 areas (for example broadcast events that span wide geographical 245 areas). 247 Some proprietary systems have been created which enable deterministic 248 streams at layer 3 however they are engineered networks in that they 249 require careful configuration to operate, often require that the 250 system be over designed, and it is implied that all devices on the 251 network voluntarily play by the rules of that network. To enable 252 these industries to successfully transition to an interoperable 253 multi-vendor packet-based infrastructure requires effective open 254 standards, and we believe that establishing relevant IETF standards 255 is a crucial factor. 257 It would be highly desirable if such streams could be routed over the 258 open Internet, however even intermediate solutions with more limited 259 scope (such as enterprise networks) can provide a substantial 260 improvement over todays networks, and a solution that only provides 261 for the enterprise network scenario is an acceptable first step. 263 We also present more fine grained requirements of the audio and video 264 industries such as safety and security, redundant paths, devices with 265 limited computing resources on the network, and that reserved stream 266 bandwidth is available for use by other best-effort traffic when that 267 stream is not currently in use. 269 2.2. 
Fundamental Stream Requirements

The fundamental stream properties are guaranteed bandwidth and deterministic
latency, as described in this section.  Additional stream requirements are
described in a subsequent section.

2.2.1.  Guaranteed Bandwidth

Transmitting audio and video streams is unlike common file transfer
activities because guaranteed delivery cannot be achieved by retrying the
transmission; by the time the missing or corrupt packet has been identified
it is too late to execute a retry, and stream playback is interrupted, which
is unacceptable at, for example, a live concert.  In some contexts large
amounts of buffering can be used to provide enough delay to allow time for
one or more retries; however, this is not an effective solution when live
interaction is involved, and it is not considered an acceptable general
solution for pro audio and video.  (Have you ever tried speaking into a
microphone through a sound system that has an echo coming back at you?  It
makes it almost impossible to speak clearly.)

Providing a way to reserve a specific amount of bandwidth for a given stream
is a key requirement.

2.2.2.  Bounded and Consistent Latency

Latency in this context means the amount of time that passes between when a
signal is sent over a stream and when it is received, for example the delay
between when you speak into a microphone and when your voice emerges from
the speaker.  Any delay longer than about 10-15 milliseconds is noticeable
to most live performers, and greater latency makes the system unusable
because it prevents them from playing in time with the other players (see
slide 6 of [SRP_LATENCY]).

The 15ms latency bound is made even more challenging because, in
network-based music production with live electric instruments, multiple
stages of signal processing are often connected in series (for example a
guitar feeding a chain of digital effects processors).  In that case the
latencies add, so the latencies of all the individual stages together must
remain less than 15ms.

In some situations it is acceptable at the local location for content from
the live remote site to be delayed to allow for a statistically acceptable
amount of latency in order to reduce jitter.  However, once the content
begins playing in the local location, any audio artifacts caused by the
local network are unacceptable, especially in those situations where a live
local performer is mixed into the feed from the remote location.

In addition to being bounded to within some predictable and acceptable
amount of time (which may be 15 milliseconds or more or less depending on
the application), the latency also has to be consistent.  For example, when
playing a film consisting of a video stream and an audio stream over a
network, those two streams must be synchronized so that the voice and the
picture match up.  A common tolerance for audio/video sync is one NTSC video
frame (about 33ms), and to maintain the audience's perception of correct lip
sync the latency needs to be consistent within some reasonable tolerance,
for example 10%.
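As a purely illustrative aid, the following Python sketch restates the two
checks implied above: series-connected processing stages whose latencies add
against a 15 ms live-performance budget, and the skew between two related
streams against a lip-sync tolerance of roughly one NTSC frame.  The stage
names and numeric values are hypothetical examples drawn from this section,
not normative limits or a DetNet mechanism.

   # Illustrative check of the additive latency budget and A/V sync
   # tolerance discussed above.  Numbers are example values only.

   LIVE_BUDGET_MS = 15.0        # bound tolerable by live performers
   AV_SYNC_TOLERANCE_MS = 33.0  # roughly one NTSC video frame

   def chain_latency_ok(stage_latencies_ms, budget_ms=LIVE_BUDGET_MS):
       """Series-connected stages: the latencies add, and the sum must
       stay within the overall budget."""
       return sum(stage_latencies_ms) <= budget_ms

   def av_sync_ok(audio_latency_ms, video_latency_ms,
                  tolerance_ms=AV_SYNC_TOLERANCE_MS):
       """Two streams stay in lip sync if their latencies differ by no
       more than the tolerance."""
       return abs(audio_latency_ms - video_latency_ms) <= tolerance_ms

   # Hypothetical chain: guitar -> effects processor -> mixer -> speakers
   stages_ms = [2.0, 4.5, 3.0, 4.0]
   print(chain_latency_ok(stages_ms))   # True: 13.5 ms total, under 15 ms
   print(av_sync_ok(20.0, 45.0))        # True: 25 ms skew, within one frame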
331 A common architecture for synchronizing multiple streams that have 332 different paths through the network (and thus potentially different 333 latencies) is to enable measurement of the latency of each path, and 334 have the data sinks (for example speakers) buffer (delay) all packets 335 on all but the slowest path. Each packet of each stream is assigned 336 a presentation time which is based on the longest required delay. 337 This implies that all sinks must maintain a common time reference of 338 sufficient accuracy, which can be achieved by any of various 339 techniques. 341 This type of architecture is commonly implemented using a central 342 controller that determines path delays and arbitrates buffering 343 delays. 345 2.2.2.1. Optimizations 347 The controller might also perform optimizations based on the 348 individual path delays, for example sinks that are closer to the 349 source can inform the controller that they can accept greater latency 350 since they will be buffering packets to match presentation times of 351 farther away sinks. The controller might then move a stream 352 reservation on a short path to a longer path in order to free up 353 bandwidth for other critical streams on that short path. See slides 354 3-5 of [SRP_LATENCY]. 356 Additional optimization can be achieved in cases where sinks have 357 differing latency requirements, for example in a live outdoor concert 358 the speaker sinks have stricter latency requirements than the 359 recording hardware sinks. See slide 7 of [SRP_LATENCY]. 361 Device cost can be reduced in a system with guaranteed reservations 362 with a small bounded latency due to the reduced requirements for 363 buffering (i.e. memory) on sink devices. For example, a theme park 364 might broadcast a live event across the globe via a layer 3 protocol; 365 in such cases the size of the buffers required is proportional to the 366 latency bounds and jitter caused by delivery, which depends on the 367 worst case segment of the end-to-end network path. For example on 368 todays open internet the latency is typically unacceptable for audio 369 and video streaming without many seconds of buffering. In such 370 scenarios a single gateway device at the local network that receives 371 the feed from the remote site would provide the expensive buffering 372 required to mask the latency and jitter issues associated with long 373 distance delivery. Sink devices in the local location would have no 374 additional buffering requirements, and thus no additional costs, 375 beyond those required for delivery of local content. The sink device 376 would be receiving the identical packets as those sent by the source 377 and would be unaware that there were any latency or jitter issues 378 along the path. 380 2.3. Additional Stream Requirements 382 The requirements in this section are more specific yet are common to 383 multiple audio and video industry applications. 385 2.3.1. Deterministic Time to Establish Streaming 387 Some audio systems installed in public environments (airports, 388 hospitals) have unique requirements with regards to health, safety 389 and fire concerns. One such requirement is a maximum of 3 seconds 390 for a system to respond to an emergency detection and begin sending 391 appropriate warning signals and alarms without human intervention. 392 For this requirement to be met, the system must support a bounded and 393 acceptable time from a notification signal to specific stream 394 establishment. For further details see [ISO7240-16]. 
396 Similar requirements apply when the system is restarted after a power 397 cycle, cable re-connection, or system reconfiguration. 399 In many cases such re-establishment of streaming state must be 400 achieved by the peer devices themselves, i.e. without a central 401 controller (since such a controller may only be present during 402 initial network configuration). 404 Video systems introduce related requirements, for example when 405 transitioning from one camera feed to another. Such systems 406 currently use purpose-built hardware to switch feeds smoothly, 407 however there is a current initiative in the broadcast industry to 408 switch to a packet-based infrastructure (see [STUDIO_IP] and the ESPN 409 DC2 use case described below). 411 2.3.2. Use of Unused Reservations by Best-Effort Traffic 413 In cases where stream bandwidth is reserved but not currently used 414 (or is under-utilized) that bandwidth must be available to best- 415 effort (i.e. non-time-sensitive) traffic. For example a single 416 stream may be nailed up (reserved) for specific media content that 417 needs to be presented at different times of the day, ensuring timely 418 delivery of that content, yet in between those times the full 419 bandwidth of the network can be utilized for best-effort tasks such 420 as file transfers. 422 This also addresses a concern of IT network administrators that are 423 considering adding reserved bandwidth traffic to their networks that 424 users will just reserve a ton of bandwidth and then never un-reserve 425 it even though they are not using it, and soon they will have no 426 bandwidth left. 428 2.3.3. Layer 3 Interconnecting Layer 2 Islands 430 As an intermediate step (short of providing guaranteed bandwidth 431 across the open internet) it would be valuable to provide a way to 432 connect multiple Layer 2 networks. For example layer 2 techniques 433 could be used to create a LAN for a single broadcast studio, and 434 several such studios could be interconnected via layer 3 links. 436 2.3.4. Secure Transmission 438 Digital Rights Management (DRM) is very important to the audio and 439 video industries. Any time protected content is introduced into a 440 network there are DRM concerns that must be maintained (see 441 [CONTENT_PROTECTION]). Many aspects of DRM are outside the scope of 442 network technology, however there are cases when a secure link 443 supporting authentication and encryption is required by content 444 owners to carry their audio or video content when it is outside their 445 own secure environment (for example see [DCI]). 447 As an example, two techniques are Digital Transmission Content 448 Protection (DTCP) and High-Bandwidth Digital Content Protection 449 (HDCP). HDCP content is not approved for retransmission within any 450 other type of DRM, while DTCP may be retransmitted under HDCP. 451 Therefore if the source of a stream is outside of the network and it 452 uses HDCP protection it is only allowed to be placed on the network 453 with that same HDCP protection. 455 2.3.5. Redundant Paths 457 On-air and other live media streams must be backed up with redundant 458 links that seamlessly act to deliver the content when the primary 459 link fails for any reason. In point-to-point systems this is 460 provided by an additional point-to-point link; the analogous 461 requirement in a packet-based system is to provide an alternate path 462 through the network such that no individual link can bring down the 463 system. 465 2.3.6. 
Link Aggregation 467 For transmitting streams that require more bandwidth than a single 468 link in the target network can support, link aggregation is a 469 technique for combining (aggregating) the bandwidth available on 470 multiple physical links to create a single logical link of the 471 required bandwidth. However, if aggregation is to be used, the 472 network controller (or equivalent) must be able to determine the 473 maximum latency of any path through the aggregate link (see Bounded 474 and Consistent Latency section above). 476 2.3.7. Traffic Segregation 478 Sink devices may be low cost devices with limited processing power. 479 In order to not overwhelm the CPUs in these devices it is important 480 to limit the amount of traffic that these devices must process. 482 As an example, consider the use of individual seat speakers in a 483 cinema. These speakers are typically required to be cost reduced 484 since the quantities in a single theater can reach hundreds of seats. 485 Discovery protocols alone in a one thousand seat theater can generate 486 enough broadcast traffic to overwhelm a low powered CPU. Thus an 487 installation like this will benefit greatly from some type of traffic 488 segregation that can define groups of seats to reduce traffic within 489 each group. All seats in the theater must still be able to 490 communicate with a central controller. 492 There are many techniques that can be used to support this 493 requirement including (but not limited to) the following examples. 495 2.3.7.1. Packet Forwarding Rules, VLANs and Subnets 497 Packet forwarding rules can be used to eliminate some extraneous 498 streaming traffic from reaching potentially low powered sink devices, 499 however there may be other types of broadcast traffic that should be 500 eliminated using other means for example VLANs or IP subnets. 502 2.3.7.2. Multicast Addressing (IPv4 and IPv6) 504 Multicast addressing is commonly used to keep bandwidth utilization 505 of shared links to a minimum. 507 Because of the MAC Address forwarding nature of Layer 2 bridges it is 508 important that a multicast MAC address is only associated with one 509 stream. This will prevent reservations from forwarding packets from 510 one stream down a path that has no interested sinks simply because 511 there is another stream on that same path that shares the same 512 multicast MAC address. 514 Since each multicast MAC Address can represent 32 different IPv4 515 multicast addresses there must be a process put in place to make sure 516 this does not occur. Requiring use of IPv6 address can achieve this, 517 however due to their continued prevalence, solutions that are 518 effective for IPv4 installations are also required. 520 2.4. Integration of Reserved Streams into IT Networks 522 A commonly cited goal of moving to a packet based media 523 infrastructure is that costs can be reduced by using off the shelf, 524 commodity network hardware. In addition, economy of scale can be 525 realized by combining media infrastructure with IT infrastructure. 526 In keeping with these goals, stream reservation technology should be 527 compatible with existing protocols, and not compromise use of the 528 network for best effort (non-time-sensitive) traffic. 530 2.5. Security Considerations 532 Many industries that are moving from the point-to-point world to the 533 digital network world have little understanding of the pitfalls that 534 they can create for themselves with improperly implemented network 535 infrastructure. 
DetNet should consider ways to provide security 536 against DoS attacks in solutions directed at these markets. Some 537 considerations are given here as examples of ways that we can help 538 new users avoid common pitfalls. 540 2.5.1. Denial of Service 542 One security pitfall that this author is aware of involves the use of 543 technology that allows a presenter to throw the content from their 544 tablet or smart phone onto the A/V system that is then viewed by all 545 those in attendance. The facility introducing this technology was 546 quite excited to allow such modern flexibility to those who came to 547 speak. One thing they hadn't realized was that since no security was 548 put in place around this technology it left a hole in the system that 549 allowed other attendees to "throw" their own content onto the A/V 550 system. 552 2.5.2. Control Protocols 554 Professional audio systems can include amplifiers that are capable of 555 generating hundreds or thousands of watts of audio power which if 556 used incorrectly can cause hearing damage to those in the vicinity. 557 Apart from the usual care required by the systems operators to 558 prevent such incidents, the network traffic that controls these 559 devices must be secured (as with any sensitive application traffic). 560 In addition, it would be desirable if the configuration protocols 561 that are used to create the network paths used by the professional 562 audio traffic could be designed to protect devices that are not meant 563 to receive high-amplitude content from having such potentially 564 damaging signals routed to them. 566 2.6. A State-of-the-Art Broadcast Installation Hits Technology Limits 568 ESPN recently constructed a state-of-the-art 194,000 sq ft, $125 569 million broadcast studio called DC2. The DC2 network is capable of 570 handling 46 Tbps of throughput with 60,000 simultaneous signals. 571 Inside the facility are 1,100 miles of fiber feeding four audio 572 control rooms. (See details at [ESPN_DC2] ). 574 In designing DC2 they replaced as much point-to-point technology as 575 they possibly could with packet-based technology. They constructed 576 seven individual studios using layer 2 LANS (using IEEE 802.1 AVB) 577 that were entirely effective at routing audio within the LANs, and 578 they were very happy with the results, however to interconnect these 579 layer 2 LAN islands together they ended up using dedicated links 580 because there is no standards-based routing solution available. 582 This is the kind of motivation we have to develop these standards 583 because customers are ready and able to use them. 585 2.7. Acknowledgements 587 The editors would like to acknowledge the help of the following 588 individuals and the companies they represent: 590 Jeff Koftinoff, Meyer Sound 592 Jouni Korhonen, Associate Technical Director, Broadcom 594 Pascal Thubert, CTAO, Cisco 596 Kieran Tyrrell, Sienda New Media Technologies GmbH 598 3. Utility Telecom Use Cases 600 (This section was derived from draft-wetterwald-detnet-utilities- 601 reqs-02) 603 3.1. Overview 605 [I-D.finn-detnet-problem-statement] defines the characteristics of a 606 deterministic flow as a data communication flow with a bounded 607 latency, extraordinarily low frame loss, and a very narrow jitter. 608 This document intends to define the utility requirements for 609 deterministic networking. 
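As a non-normative illustration, the following Python sketch shows how the
requirement attributes quantified in the tables later in this section (one
way delay, jitter, delay asymmetry, bandwidth, availability, recovery time,
packet loss, redundancy) might be captured as a single flow specification
handed to a deterministic network.  All field names are hypothetical; this
is not a proposed DetNet data model.

   # Hypothetical container for the per-application requirements that
   # are tabulated later in this section (teleprotection, inter-trip,
   # WAMS, ...).  Field names are illustrative only.
   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class DeterministicFlowSpec:
       name: str
       max_one_way_delay_ms: float     # e.g. 4-10 ms for teleprotection
       max_jitter_us: Optional[float]  # None where jitter is "not critical"
       symmetric_delay_required: bool  # whether delay asymmetry matters
       bandwidth_kbps: float
       availability: float             # e.g. 0.999999 ("six nines")
       max_recovery_ms: float          # recovery time on node failure
       max_packet_loss: float          # fraction, e.g. 0.001 for 0.1%
       redundancy: bool

   # Example populated from the current differential protection table below.
   current_differential = DeterministicFlowSpec(
       name="current differential protection",
       max_one_way_delay_ms=5.0,
       max_jitter_us=250.0,
       symmetric_delay_required=True,
       bandwidth_kbps=64.0,
       availability=0.999999,
       max_recovery_ms=50.0,
       max_packet_loss=0.001,
       redundancy=True,
   )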
Utility Telecom Networks

The business and technology trends that are sweeping the utility industry
will drastically transform the utility business from the way it has been for
many decades.  At the core of many of these changes is a drive to modernize
the electrical grid with an integrated telecommunications infrastructure.
However, interoperability concerns, legacy networks, disparate tools, and
stringent security requirements all add complexity to the grid
transformation.  Given the range and diversity of the requirements that
should be addressed by the next-generation telecommunications
infrastructure, utilities need to adopt a holistic architectural approach to
integrate the electrical grid with digital telecommunications across the
entire power delivery chain.

Many utilities still rely on complex environments formed of multiple
application-specific, proprietary networks.  Information is siloed between
operational areas.  This prevents utility operations from realizing the
operational efficiency benefits, visibility, and functional integration of
operational information across grid applications and data networks.  The key
to modernizing grid telecommunications is to provide a common, adaptable,
multi-service network infrastructure for the entire utility organization.
Such a network serves as the platform for current capabilities while
enabling future expansion of the network to accommodate new applications and
services.

To meet this diverse set of requirements, both today and in the future, the
next-generation utility telecommunications network will be based on an
open-standards-based IP architecture.  An end-to-end IP architecture takes
advantage of nearly three decades of IP technology development, facilitating
interoperability across disparate networks and devices, as has already been
demonstrated in many mission-critical and highly secure networks.

The IEC (International Electrotechnical Commission) and different National
Committees have mandated a specific ad hoc group (AHG8) to define the
migration strategy to IPv6 for all the IEC TC57 power automation standards.
IPv6 is seen as the obvious future telecommunications technology for the
Smart Grid.  The ad hoc group disclosed its conclusions to the IEC
coordination group at the end of 2014.

It is imperative that utilities participate in standards development bodies
to influence the development of future solutions and to benefit from the
shared experiences of other utilities and vendors.

3.2.  Telecommunications Trends and General Telecommunications Requirements

These general telecommunications requirements are over and above the
specific requirements of the use cases that have been addressed so far.
These include both current and future telecommunications-related
requirements that should be factored into the network architecture and
design.
3.2.1.  General Telecommunications Requirements

o IP Connectivity everywhere

o Monitoring services everywhere and from different remote centers

o Move services to a virtual data center

o Unify access to applications / information from the corporate network

o Unify services

o Unified Communications Solutions

o Mix of fiber and microwave technologies - obsolescence of SONET/SDH or TDM

o Standardize grid telecommunications protocols on open standards to ensure
  interoperability

o Reliable Telecommunications for Transmission and Distribution Substations

o IEEE 1588 time synchronization Client / Server Capabilities

o Integration of Multicast Design

o QoS Requirements Mapping

o Enable Future Network Expansion

o Substation Network Resilience

o Fast Convergence Design

o Scalable Headend Design

o Define Service Level Agreements (SLA) and Enable SLA Monitoring

o Integration of 3G/4G Technologies and future technologies

o Ethernet Connectivity for Station Bus Architecture

o Ethernet Connectivity for Process Bus Architecture

o Protection, teleprotection and PMU (Phasor Measurement Unit) on IP

3.2.1.1.  Migration to Packet-Switched Network

Throughout the world, utilities are increasingly planning for a future based
on smart grid applications requiring advanced telecommunications systems.
Many of these applications utilize packet connectivity for communicating
information and control signals across the utility's Wide Area Network
(WAN), made possible by technologies such as multiprotocol label switching
(MPLS).  The data that traverses the utility WAN includes:

o Grid monitoring, control, and protection data

o Non-control grid data (e.g. asset data for condition-based monitoring)

o Physical safety and security data (e.g. voice and video)

o Remote worker access to corporate applications (voice, maps, schematics,
  etc.)

o Field area network backhaul for smart metering, and distribution grid
  management

o Enterprise traffic (email, collaboration tools, business applications)

WANs support this wide variety of traffic to and from substations, the
transmission and distribution grid, generation sites, between control
centers, and between work locations and data centers.  To maintain this
rapidly expanding set of applications, many utilities are taking steps to
evolve present time-division multiplexing (TDM) based and frame relay
infrastructures to packet systems.  Packet-based networks are designed to
provide greater functionality and higher levels of service for applications,
while continuing to deliver reliability and deterministic (real-time)
traffic support.

3.2.2.  Applications, Use cases and traffic patterns

Among the numerous applications and use cases that a utility deploys today,
many rely on high availability and deterministic behaviour of the
telecommunications networks.  Protection use cases and generation control
are the most demanding and cannot rely on a best-effort approach.

3.2.2.1.  Transmission use cases

Protection means not only the protection of the human operator but also the
protection of the electrical equipment and the preservation of the stability
and frequency of the grid.
If a fault occurs in the transmission or distribution of electricity,
serious harm can be caused not only to the human operator but also to very
costly electrical equipment, and the grid can be perturbed, leading to
blackouts.  The time and reliability requirements are therefore very
stringent, in order to avoid dramatic impacts on the electrical
infrastructure.

3.2.2.1.1.  Teleprotection

The key criteria for measuring Teleprotection performance are command
transmission time, dependability and security.  These criteria are defined
by the IEC standard 60834 as follows:

o Transmission time (Speed): The time between the moment when the state
  changes at the transmitter input and the moment of the corresponding
  change at the receiver output, including propagation delay.  The overall
  operating time for a Teleprotection system includes the time for
  initiating the command at the transmitting end, the propagation delay over
  the network (including equipment) and the selection and decision time at
  the receiving end, including any additional delay due to a noisy
  environment.

o Dependability: The ability to issue and receive valid commands in the
  presence of interference and/or noise, by minimizing the probability of
  missing command (PMC).  Dependability targets are typically set for a
  specific bit error rate (BER) level.

o Security: The ability to prevent false tripping due to a noisy
  environment, by minimizing the probability of unwanted commands (PUC).
  Security targets are also set for a specific bit error rate (BER) level.

Additional key elements that may impact Teleprotection performance include
the bandwidth rate of the Teleprotection system and its resiliency or
failure recovery capacity.  Transmission time, bandwidth utilization and
resiliency are directly linked to the telecommunications equipment and the
connections that are used to transfer the commands between relays.

3.2.2.1.1.1.  Latency Budget Consideration

Delay requirements for utility networks may vary depending upon a number of
parameters, such as the specific protection equipment used.  Most power line
equipment can tolerate short circuits or faults for up to approximately five
power cycles before sustaining irreversible damage or affecting other
segments in the network.  This translates to a total fault clearance time of
100ms.  As a safety precaution, however, the actual operation time of
protection systems is limited to 70-80 percent of this period, including
fault recognition time, command transmission time and line breaker switching
time.  Some system components, such as large electromechanical switches,
require a particularly long time to operate and take up the majority of the
total clearance time, leaving only a 10ms window for the telecommunications
part of the protection scheme, independent of the distance to travel.  Given
the sensitivity of the issue, new networks impose requirements that are even
more stringent: IEC standard 61850 limits the transfer time for protection
messages to 1/4 - 1/2 cycle or 4 - 8ms (for 60Hz lines) for the most
critical messages.

3.2.2.1.1.2.  Asymmetric delay

In addition to minimal transmission delay, a differential protection
telecommunications channel must be synchronous, i.e., experiencing
symmetrical channel delay in transmit and receive paths.  This requires
special attention in jitter-prone packet networks.
While 826 optimally Teleprotection systems should support zero asymmetric 827 delay, typical legacy relays can tolerate discrepancies of up to 828 750us. 830 The main tools available for lowering delay variation below this 831 threshold are: 833 o A jitter buffer at the multiplexers on each end of the line can be 834 used to offset delay variation by queuing sent and received 835 packets. The length of the queues must balance the need to 836 regulate the rate of transmission with the need to limit overall 837 delay, as larger buffers result in increased latency. This is the 838 old TDM traditional way to fulfill this requirement. 840 o Traffic management tools ensure that the Teleprotection signals 841 receive the highest transmission priority and minimize the number 842 of jitter addition during the path. This is one way to meet the 843 requirement in IP networks. 845 o Standard Packet-Based synchronization technologies, such as 846 1588-2008 Precision Time Protocol (PTP) and Synchronous Ethernet 847 (Sync-E), can help maintain stable networks by keeping a highly 848 accurate clock source on the different network devices involved. 850 3.2.2.1.1.2.1. Other traffic characteristics 852 o Redundancy: The existence in a system of more than one means of 853 accomplishing a given function. 855 o Recovery time : The duration of time within which a business 856 process must be restored after any type of disruption in order to 857 avoid unacceptable consequences associated with a break in 858 business continuity. 860 o performance management : In networking, a management function 861 defined for controlling and analyzing different parameters/metrics 862 such as the throughput, error rate. 864 o packet loss : One or more packets of data travelling across 865 network fail to reach their destination. 867 3.2.2.1.1.2.2. Teleprotection network requirements 869 The following table captures the main network requirements (this is 870 based on IEC 61850 standard) 872 +-----------------------------+-------------------------------------+ 873 | Teleprotection Requirement | Attribute | 874 +-----------------------------+-------------------------------------+ 875 | One way maximum delay | 4-10 ms | 876 | Asymetric delay required | Yes | 877 | Maximum jitter | less than 250 us (750 us for legacy | 878 | | IED) | 879 | Topology | Point to point, point to Multi- | 880 | | point | 881 | Availability | 99.9999 | 882 | precise timing required | Yes | 883 | Recovery time on node | less than 50ms - hitless | 884 | failure | | 885 | performance management | Yes, Mandatory | 886 | Redundancy | Yes | 887 | Packet loss | 0.1% to 1% | 888 +-----------------------------+-------------------------------------+ 890 Table 1: Teleprotection network requirements 892 3.2.2.1.2. Inter-Trip Protection scheme 894 Inter-tripping is the controlled tripping of a circuit breaker to 895 complete the isolation of a circuit or piece of apparatus in concert 896 with the tripping of other circuit breakers. The main use of such 897 schemes is to ensure that protection at both ends of a faulted 898 circuit will operate to isolate the equipment concerned. Inter- 899 tripping schemes use signaling to convey a trip command to remote 900 circuit breakers to isolate circuits. 
902 +--------------------------------+----------------------------------+ 903 | Inter-Trip protection | Attribute | 904 | Requirement | | 905 +--------------------------------+----------------------------------+ 906 | One way maximum delay | 5 ms | 907 | Asymetric delay required | No | 908 | Maximum jitter | Not critical | 909 | Topology | Point to point, point to Multi- | 910 | | point | 911 | Bandwidth | 64 Kbps | 912 | Availability | 99.9999 | 913 | precise timing required | Yes | 914 | Recovery time on node failure | less than 50ms - hitless | 915 | performance management | Yes, Mandatory | 916 | Redundancy | Yes | 917 | Packet loss | 0.1% | 918 +--------------------------------+----------------------------------+ 920 Table 2: Inter-Trip protection network requirements 922 3.2.2.1.3. Current Differential Protection Scheme 924 Current differential protection is commonly used for line protection, 925 and is typical for protecting parallel circuits. A main advantage 926 for differential protection is that, compared to overcurrent 927 protection, it allows only the faulted circuit to be de-energized in 928 case of a fault. At both end of the lines, the current is measured 929 by the differential relays, and based on Kirchhoff's law, both relays 930 will trip the circuit breaker if the current going into the line does 931 not equal the current going out of the line. This type of protection 932 scheme assumes some form of communications being present between the 933 relays at both end of the line, to allow both relays to compare 934 measured current values. A fault in line 1 will cause overcurrent to 935 be flowing in both lines, but because the current in line 2 is a 936 through following current, this current is measured equal at both 937 ends of the line, therefore the differential relays on line 2 will 938 not trip line 2. Line 1 will be tripped, as the relays will not 939 measure the same currents at both ends of the line. Line 940 differential protection schemes assume a very low telecommunications 941 delay between both relays, often as low as 5ms. Moreover, as those 942 systems are often not time-synchronized, they also assume symmetric 943 telecommunications paths with constant delay, which allows comparing 944 current measurement values taken at the exact same time. 946 +----------------------------------+--------------------------------+ 947 | Current Differential protection | Attribute | 948 | Requirement | | 949 +----------------------------------+--------------------------------+ 950 | One way maximum delay | 5 ms | 951 | Asymetric delay Required | Yes | 952 | Maximum jitter | less than 250 us (750us for | 953 | | legacy IED) | 954 | Topology | Point to point, point to | 955 | | Multi-point | 956 | Bandwidth | 64 Kbps | 957 | Availability | 99.9999 | 958 | precise timing required | Yes | 959 | Recovery time on node failure | less than 50ms - hitless | 960 | performance management | Yes, Mandatory | 961 | Redundancy | Yes | 962 | Packet loss | 0.1% | 963 +----------------------------------+--------------------------------+ 965 Table 3: Current Differential Protection requirements 967 3.2.2.1.4. Distance Protection Scheme 969 Distance (Impedance Relay) protection scheme is based on voltage and 970 current measurements. A fault on a circuit will generally create a 971 sag in the voltage level. 
If the ratio of voltage to current 972 measured at the protection relay terminals, which equates to an 973 impedance element, falls within a set threshold the circuit breaker 974 will operate. The operating characteristics of this protection are 975 based on the line characteristics. This means that when a fault 976 appears on the line, the impedance setting in the relay is compared 977 to the apparent impedance of the line from the relay terminals to the 978 fault. If the relay setting is determined to be below the apparent 979 impedance it is determined that the fault is within the zone of 980 protection. When the transmission line length is under a minimum 981 length, distance protection becomes more difficult to coordinate. In 982 these instances the best choice of protection is current differential 983 protection. 985 +-------------------------------+-----------------------------------+ 986 | Distance protection | Attribute | 987 | Requirement | | 988 +-------------------------------+-----------------------------------+ 989 | One way maximum delay | 5 ms | 990 | Asymetric delay Required | No | 991 | Maximum jitter | Not critical | 992 | Topology | Point to point, point to Multi- | 993 | | point | 994 | Bandwidth | 64 Kbps | 995 | Availability | 99.9999 | 996 | precise timing required | Yes | 997 | Recovery time on node failure | less than 50ms - hitless | 998 | performance management | Yes, Mandatory | 999 | Redundancy | Yes | 1000 | Packet loss | 0.1% | 1001 +-------------------------------+-----------------------------------+ 1003 Table 4: Distance Protection requirements 1005 3.2.2.1.5. Inter-Substation Protection Signaling 1007 This use case describes the exchange of Sampled Value and/or GOOSE 1008 (Generic Object Oriented Substation Events) message between 1009 Intelligent Electronic Devices (IED) in two substations for 1010 protection and tripping coordination. The two IEDs are in a master- 1011 slave mode. 1013 The Current Transformer or Voltage Transformer (CT/VT) in one 1014 substation sends the sampled analog voltage or current value to the 1015 Merging Unit (MU) over hard wire. The merging unit sends the time- 1016 synchronized 61850-9-2 sampled values to the slave IED. The slave 1017 IED forwards the information to the Master IED in the other 1018 substation. The master IED makes the determination (for example 1019 based on sampled value differentials) to send a trip command to the 1020 originating IED. Once the slave IED/Relay receives the GOOSE trip 1021 for breaker tripping, it opens the breaker. It then sends a 1022 confirmation message back to the master. All data exchanges between 1023 IEDs are either through Sampled Value and/or GOOSE messages. 1025 +----------------------------------+--------------------------------+ 1026 | Inter-Substation protection | Attribute | 1027 | Requirement | | 1028 +----------------------------------+--------------------------------+ 1029 | One way maximum delay | 5 ms | 1030 | Asymetric delay Required | No | 1031 | Maximum jitter | Not critical | 1032 | Topology | Point to point, point to | 1033 | | Multi-point | 1034 | Bandwidth | 64 Kbps | 1035 | Availability | 99.9999 | 1036 | precise timing required | Yes | 1037 | Recovery time on node failure | less than 50ms - hitless | 1038 | performance management | Yes, Mandatory | 1039 | Redundancy | Yes | 1040 | Packet loss | 1% | 1041 +----------------------------------+--------------------------------+ 1043 Table 5: Inter-Substation Protection requirements 1045 3.2.2.1.6. 
Intra-Substation Process Bus Communications 1047 This use case describes the data flow from the CT/VT to the IEDs in 1048 the substation via the merging unit (MU). The CT/VT in the 1049 substation send the sampled value (analog voltage or current) to the 1050 Merging Unit (MU) over hard wire. The merging unit sends the time- 1051 synchronized 61850-9-2 sampled values to the IEDs in the substation 1052 in GOOSE message format. The GPS Master Clock can send 1PPS or 1053 IRIG-B format to MU through serial port, or IEEE 1588 protocol via 1054 network. Process bus communication using 61850 simplifies 1055 connectivity within the substation and removes the requirement for 1056 multiple serial connections and removes the slow serial bus 1057 architectures that are typically used. This also ensures increased 1058 flexibility and increased speed with the use of multicast messaging 1059 between multiple devices. 1061 +----------------------------------+--------------------------------+ 1062 | Intra-Substation protection | Attribute | 1063 | Requirement | | 1064 +----------------------------------+--------------------------------+ 1065 | One way maximum delay | 5 ms | 1066 | Asymetric delay Required | No | 1067 | Maximum jitter | Not critical | 1068 | Topology | Point to point, point to | 1069 | | Multi-point | 1070 | Bandwidth | 64 Kbps | 1071 | Availability | 99.9999 | 1072 | precise timing required | Yes | 1073 | Recovery time on Node failure | less than 50ms - hitless | 1074 | performance management | Yes, Mandatory | 1075 | Redundancy | Yes - No | 1076 | Packet loss | 0.1% | 1077 +----------------------------------+--------------------------------+ 1079 Table 6: Intra-Substation Protection requirements 1081 3.2.2.1.7. Wide Area Monitoring and Control Systems 1083 The application of synchrophasor measurement data from Phasor 1084 Measurement Units (PMU) to Wide Area Monitoring and Control Systems 1085 promises to provide important new capabilities for improving system 1086 stability. Access to PMU data enables more timely situational 1087 awareness over larger portions of the grid than what has been 1088 possible historically with normal SCADA (Supervisory Control and Data 1089 Acquisition) data. Handling the volume and real-time nature of 1090 synchrophasor data presents unique challenges for existing 1091 application architectures. Wide Area management System (WAMS) makes 1092 it possible for the condition of the bulk power system to be observed 1093 and understood in real-time so that protective, preventative, or 1094 corrective action can be taken. 
Because of the very high sampling 1095 rate of measurements and the strict requirement for time 1096 synchronization of the samples, WAMS has stringent telecommunications 1097 requirements in an IP network that are captured in the following 1098 table: 1100 +----------------------+--------------------------------------------+ 1101 | WAMS Requirement | Attribute | 1102 +----------------------+--------------------------------------------+ 1103 | One way maximum | 50 ms | 1104 | delay | | 1105 | Asymetric delay | No | 1106 | Required | | 1107 | Maximum jitter | Not critical | 1108 | Topology | Point to point, point to Multi-point, | 1109 | | Multi-point to Multi-point | 1110 | Bandwidth | 100 Kbps | 1111 | Availability | 99.9999 | 1112 | precise timing | Yes | 1113 | required | | 1114 | Recovery time on | less than 50ms - hitless | 1115 | Node failure | | 1116 | performance | Yes, Mandatory | 1117 | management | | 1118 | Redundancy | Yes | 1119 | Packet loss | 1% | 1120 +----------------------+--------------------------------------------+ 1122 Table 7: WAMS Special Communication Requirements 1124 3.2.2.1.8. IEC 61850 WAN engineering guidelines requirement 1125 classification 1127 The IEC (International Electrotechnical Commission) has recently 1128 published a Technical Report which offers guidelines on how to define 1129 and deploy Wide Area Networks for the interconnections of electric 1130 substations, generation plants and SCADA operation centers. The IEC 1131 61850-90-12 is providing a classification of WAN communication 1132 requirements into 4 classes. You will find herafter the table 1133 summarizing these requirements: 1135 +----------------+------------+------------+------------+-----------+ 1136 | WAN | Class WA | Class WB | Class WC | Class WD | 1137 | Requirement | | | | | 1138 +----------------+------------+------------+------------+-----------+ 1139 | Application | EHV (Extra | HV (High | MV (Medium | General | 1140 | field | High | Voltage) | Voltage) | purpose | 1141 | | Voltage) | | | | 1142 | Latency | 5 ms | 10 ms | 100 ms | > 100 ms | 1143 | Jitter | 10 us | 100 us | 1 ms | 10 ms | 1144 | Latency | 100 us | 1 ms | 10 ms | 100 ms | 1145 | Asymetry | | | | | 1146 | Time Accuracy | 1 us | 10 us | 100 us | 10 to 100 | 1147 | | | | | ms | 1148 | Bit Error rate | 10-7 to | 10-5 to | 10-3 | | 1149 | | 10-6 | 10-4 | | | 1150 | Unavailability | 10-7 to | 10-5 to | 10-3 | | 1151 | | 10-6 | 10-4 | | | 1152 | Recovery delay | Zero | 50 ms | 5 s | 50 s | 1153 | Cyber security | extremely | High | Medium | Medium | 1154 | | high | | | | 1155 +----------------+------------+------------+------------+-----------+ 1157 Table 8: 61850-90-12 Communication Requirements; Courtesy of IEC 1159 3.2.2.2. Distribution use case 1161 3.2.2.2.1. Fault Location Isolation and Service Restoration (FLISR) 1163 As the name implies, Fault Location, Isolation, and Service 1164 Restoration (FLISR) refers to the ability to automatically locate the 1165 fault, isolate the fault, and restore service in the distribution 1166 network. It is a self-healing feature whose purpose is to minimize 1167 the impact of faults by serving portions of the loads on the affected 1168 circuit by switching to other circuits. It reduces the number of 1169 customers that experience a sustained power outage by reconfiguring 1170 distribution circuits. This will likely be the first wide spread 1171 application of distributed intelligence in the grid. Secondary 1172 substations can be connected to multiple primary substations. 
1173 Normally, static power switch statuses (open/closed) in the network 1174 dictate the power flow to secondary substations. Reconfiguring the 1175 network in the event of a fault is typically done manually on site to 1176 operate switchgear to energize/de-energize alternate paths. 1177 Automating the operation of substation switchgear allows the utility 1178 to have a more dynamic network where the flow of power can be altered 1179 under fault conditions but also during times of peak load. It allows 1180 the utility to shift peak loads around the network. Or, to be more 1181 precise, alters the configuration of the network to move loads 1182 between different primary substations. The FLISR capability can be 1183 enabled in two modes: 1185 o Managed centrally from DMS (Distribution Management System), or 1187 o Executed locally through distributed control via intelligent 1188 switches and fault sensors. 1190 There are 3 distinct sub-functions that are performed: 1192 1. Fault Location Identification 1194 This sub-function is initiated by SCADA inputs, such as lockouts, 1195 fault indications/location, and, also, by input from the Outage 1196 Management System (OMS), and in the future by inputs from fault- 1197 predicting devices. It determines the specific protective device, 1198 which has cleared the sustained fault, identifies the de-energized 1199 sections, and estimates the probable location of the actual or the 1200 expected fault. It distinguishes faults cleared by controllable 1201 protective devices from those cleared by fuses, and identifies 1202 momentary outages and inrush/cold load pick-up currents. This step 1203 is also referred to as Fault Detection Classification and Location 1204 (FDCL). This step helps to expedite the restoration of faulted 1205 sections through fast fault location identification and improved 1206 diagnostic information available for crew dispatch. Also provides 1207 visualization of fault information to design and implement a 1208 switching plan to isolate the fault. 1210 2. Fault Type Determination 1212 I. Indicates faults cleared by controllable protective devices by 1213 distinguishing between: 1215 a. Faults cleared by fuses 1217 b. Momentary outages 1219 c. Inrush/cold load current 1221 II. Determines the faulted sections based on SCADA fault indications 1222 and protection lockout signals 1224 III. Increases the accuracy of the fault location estimation based 1225 on SCADA fault current measurements and real-time fault analysis 1227 3. Fault Isolation and Service Restoration 1228 Once the location and type of the fault has been pinpointed, the 1229 systems will attempt to isolate the fault and restore the non-faulted 1230 section of the network. This can have three modes of operation: 1232 I. Closed-loop mode : This is initiated by the Fault location sub- 1233 function. It generates a switching order (i.e., sequence of 1234 switching) for the remotely controlled switching devices to isolate 1235 the faulted section, and restore service to the non-faulted sections. 1236 The switching order is automatically executed via SCADA. 1238 II. Advisory mode : This is initiated by the Fault location sub- 1239 function. It generates a switching order for remotely and manually 1240 controlled switching devices to isolate the faulted section, and 1241 restore service to the non-faulted sections. The switching order is 1242 presented to operator for approval and execution. 1244 III. Study mode : the operator initiates this function. 
   It analyzes a saved case modified by the operator and generates a
   switching order under the operating conditions specified by the
   operator.

   With the increasing volume of data collected through fault sensors,
   utilities will use Big Data query and analysis tools to study outage
   information.  The goal is to anticipate and prevent outages by
   detecting failure patterns and their correlation with asset age and
   type, load profiles, time of day, weather conditions, and other
   factors, in order to discover the conditions that lead to faults and
   to take the necessary preventive and corrective measures.

   +-------------------------------+------------------------------------+
   | FLISR Requirement             | Attribute                          |
   +-------------------------------+------------------------------------+
   | One way maximum delay         | 80 ms                              |
   | Asymmetric delay required     | No                                 |
   | Maximum jitter                | 40 ms                              |
   | Topology                      | Point to point, point to           |
   |                               | multi-point, multi-point to        |
   |                               | multi-point                        |
   | Bandwidth                     | 64 Kbps                            |
   | Availability                  | 99.9999                            |
   | Precise timing required       | Yes                                |
   | Recovery time on node failure | Depends on customer impact         |
   | Performance management        | Yes, mandatory                     |
   | Redundancy                    | Yes                                |
   | Packet loss                   | 0.1%                               |
   +-------------------------------+------------------------------------+

               Table 9: FLISR Communication Requirements

3.2.2.3.  Generation use case

3.2.2.3.1.  Frequency Control / Automatic Generation Control (AGC)

   The system frequency should be maintained within a very narrow band.
   Deviations from the acceptable frequency range are detected and
   forwarded to the Load Frequency Control (LFC) system so that the
   required generation increase or decrease pulses can be sent to the
   power plants for frequency regulation.  The trend in system
   frequency is a measure of the mismatch between demand and
   generation, and is a necessary parameter for load control in
   interconnected systems.

   Automatic Generation Control (AGC) is a system for adjusting the
   power output of generators at different power plants in response to
   changes in the load.  Since a power grid requires that generation
   and load closely balance moment by moment, frequent adjustments to
   the output of generators are necessary.  The balance can be judged
   by measuring the system frequency; if it is increasing, more power
   is being generated than used, and all machines in the system are
   accelerating.  If the system frequency is decreasing, more demand is
   on the system than the instantaneous generation can provide, and all
   generators are slowing down.

   Where the grid has tie lines to adjacent control areas, automatic
   generation control helps maintain the power interchanges over the
   tie lines at the scheduled levels.  The AGC takes into account
   various parameters, including the most economical units to adjust,
   the coordination of thermal, hydroelectric, and other generation
   types, and even constraints related to the stability of the system
   and the capacity of interconnections to other power grids.

   For the purposes of AGC, static frequency measurements are used, and
   averaging methods are applied to obtain a more precise measure of
   system frequency in steady-state conditions.  A simplified
   illustration of how such measurements can be turned into a control
   signal is sketched below.
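   The following Python fragment is an illustrative sketch only of how
   averaged frequency measurements and tie-line deviations might be
   combined into a raise/lower signal for the plants.  The nominal
   frequency, frequency bias, deadband, and sample values are
   assumptions chosen for the example and are not taken from this use
   case.

      # Illustrative only: combine an averaged frequency measurement and
      # a tie-line deviation into an Area Control Error (ACE) signal.
      from statistics import mean

      NOMINAL_HZ = 50.0            # assumed nominal system frequency
      FREQ_BIAS_MW_PER_HZ = 150.0  # assumed area frequency bias

      def averaged_frequency(samples_hz):
          """Average several static frequency measurements."""
          return mean(samples_hz)

      def area_control_error(avg_freq_hz, tie_actual_mw, tie_sched_mw):
          """ACE = tie-line deviation + frequency bias contribution."""
          delta_f = avg_freq_hz - NOMINAL_HZ
          delta_tie = tie_actual_mw - tie_sched_mw
          return delta_tie + FREQ_BIAS_MW_PER_HZ * delta_f

      def raise_lower_pulse(ace_mw, deadband_mw=5.0):
          """Translate ACE into a raise/lower request for the plants."""
          if ace_mw > deadband_mw:
              return "LOWER"   # over-generation: reduce output
          if ace_mw < -deadband_mw:
              return "RAISE"   # under-generation: increase output
          return "HOLD"

      samples = [49.98, 49.97, 49.99, 49.98]
      ace = area_control_error(averaged_frequency(samples),
                               tie_actual_mw=495.0, tie_sched_mw=500.0)
      print(raise_lower_pulse(ace))   # prints "RAISE" for this example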
   During disturbances, more real-time dynamic measurements of system
   frequency are taken using PMUs, especially when different areas of
   the system exhibit different frequencies; that is, however, outside
   the scope of this use case.

   +-------------------------------+---------------+
   | FCAG (Frequency Control       | Attribute     |
   | Automatic Generation)         |               |
   | Requirement                   |               |
   +-------------------------------+---------------+
   | One way maximum delay         | 500 ms        |
   | Asymmetric delay required     | No            |
   | Maximum jitter                | Not critical  |
   | Topology                      | Point to      |
   |                               | point         |
   | Bandwidth                     | 20 Kbps       |
   | Availability                  | 99.999        |
   | Precise timing required       | Yes           |
   | Recovery time on node failure | N/A           |
   | Performance management        | Yes,          |
   |                               | mandatory     |
   | Redundancy                    | Yes           |
   | Packet loss                   | 1%            |
   +-------------------------------+---------------+

              Table 10: FCAG Communication Requirements

3.2.3.  Specific Network topologies of Smart Grid Applications

   Utilities often have very large private telecommunications networks
   covering an entire territory or country.  The main purpose of such a
   network, until now, has been to support transmission network
   monitoring, control, and automation, remote control of generation
   sites, and the provision of FCAPS (Fault, Configuration, Accounting,
   Performance, Security) services from centralized network operation
   centers.

   Going forward, one network will support the operation and
   maintenance of electrical networks (generation, transmission, and
   distribution), voice and data services for tens of thousands of
   employees and for exchange with neighboring interconnections, and
   administrative services.  To meet those requirements, a utility may
   deploy several physical networks leveraging different technologies
   across the country, for instance an optical network and a microwave
   network.  Each protection and automation system between two points
   has two telecommunications circuits, one on each network.  Path
   diversity between two substations is key: regardless of the event
   type (hurricane, ice storm, etc.), one path shall stay available so
   the SPS can still operate.

   In the optical network, signals are transmitted over tens of
   thousands of circuits using fiber optic links, microwave, and
   telephone cables.  This network is the nervous system of the
   utility's power transmission operations.  The optical network
   represents tens of thousands of km of cable deployed along the power
   lines.

   Because of the vast distances between transmission substations (in
   some cases as much as 280 km apart), the fiber signal must be
   amplified so that it can span such distances.

3.2.4.  Precision Time Protocol

   Some utilities do not use GPS clocks in generation substations.  One
   of the main reasons is that some of the generation plants are 30 to
   50 meters underground, where the GPS signal can be weak and
   unreliable.  Instead, atomic clocks are used, and the clocks are
   synchronized with each other.  Rubidium clocks provide the clock
   signal and 1 ms timestamps for IRIG-B.  Some companies plan to
   transition to the Precision Time Protocol (IEEE 1588), distributing
   the synchronization signal over the IP/MPLS network.

   The Precision Time Protocol (PTP) is defined in IEEE standard 1588.
   PTP is applicable to distributed systems consisting of one or more
   nodes communicating over a network.  Nodes are modeled as containing
   a real-time clock that may be used by applications within the node
   for various purposes, such as generating time-stamps for data or
   ordering events managed by the node.  The protocol provides a
   mechanism for synchronizing the clocks of participating nodes to a
   high degree of accuracy and precision.

   PTP operates based on the following assumptions:

   It is assumed that the network eliminates cyclic forwarding of PTP
   messages within each communication path (e.g., by using a spanning
   tree protocol).  PTP eliminates cyclic forwarding of PTP messages
   between communication paths.

   PTP is tolerant of an occasional missed message, duplicated
   message, or message that arrives out of order.  However, PTP
   assumes that such impairments are relatively rare.

   PTP was designed assuming a multicast communication model.  PTP
   also supports a unicast communication model as long as the behavior
   of the protocol is preserved.

   Like all message-based time transfer protocols, PTP time accuracy
   is degraded by asymmetry in the paths taken by event messages.
   Asymmetry is not detectable by PTP; however, if it is known, PTP
   corrects for it.

   A time-stamp event is generated at the time of transmission and
   reception of any event message.  The time-stamp event occurs when
   the message's timestamp point crosses the boundary between the node
   and the network.

   IEC 61850 will recommend the use of the IEEE 1588 PTP Utility
   Profile (as defined in IEC 62439-3 Annex B), which offers support
   for the redundant attachment of clocks to Parallel Redundancy
   Protocol (PRP) and High-availability Seamless Redundancy (HSR)
   networks.

3.3.  IANA Considerations

   This memo includes no request to IANA.

3.4.  Security Considerations

3.4.1.  Current Practices and Their Limitations

   Grid monitoring and control devices are already targets for cyber
   attacks, and legacy telecommunications protocols have many intrinsic
   network-related vulnerabilities.  DNP3, Modbus, PROFIBUS/PROFINET,
   and other protocols are designed around a common paradigm of request
   and respond.  Each protocol is designed for a master device such as
   an HMI (Human Machine Interface) system to send commands to
   subordinate slave devices to retrieve data (reading inputs) or to
   control them (writing to outputs).  Because many of these protocols
   lack authentication, encryption, or other basic security measures,
   they are prone to network-based attacks, allowing a malicious actor
   or attacker to utilize the request-and-respond system as a mechanism
   for command-and-control-like functionality.  Specific security
   concerns common to most industrial control protocols, including
   utility telecommunication protocols, include the following:

   o  Network or transport errors (e.g., malformed packets or excessive
      latency) can cause protocol failure.

   o  Protocol commands may be available that are capable of forcing
      slave devices into inoperable states, including powering off
      devices, forcing them into a listen-only state, or disabling
      alarming.

   o  Protocol commands may be available that are capable of restarting
      communications and otherwise interrupting processes.
   o  Protocol commands may be available that are capable of clearing,
      erasing, or resetting diagnostic information such as counters and
      diagnostic registers.

   o  Protocol commands may be available that are capable of requesting
      sensitive information about the controllers, their
      configurations, or other need-to-know information.

   o  Most protocols are application layer protocols transported over
      TCP; therefore it is easy to transport commands over non-standard
      ports or inject commands into authorized traffic flows.

   o  Protocol commands may be available that are capable of
      broadcasting messages to many devices at once (i.e., a potential
      DoS).

   o  Protocol commands may be available to query the device network to
      obtain defined points and their values (i.e., a configuration
      scan).

   o  Protocol commands may be available that will list all available
      function codes (i.e., a function scan).

   o  Bump-in-the-wire (BITW) solutions are sometimes used as a
      countermeasure: a hardware device is added to provide IPsec
      services between two routers that are not capable of IPsec
      functions.  This special IPsec device intercepts outgoing
      datagrams, adds IPsec protection to them, and strips it off
      incoming datagrams.  BITW can add IPsec to legacy hosts and can
      retrofit non-IPsec routers to provide security benefits.  The
      disadvantages are complexity and cost.

   These inherent vulnerabilities, along with the increasing
   connectivity between IT and OT networks, make network-based attacks
   very feasible.  Simple injection of malicious protocol commands
   provides control over the target process.  Altering legitimate
   protocol traffic can also alter information about a process and
   disrupt the legitimate controls that are in place over that process.
   A man-in-the-middle attack could provide both control over a process
   and misrepresentation of data back to operator consoles.

3.4.2.  Security Trends in Utility Networks

   Although advanced telecommunications networks can assist in
   transforming the energy industry, playing a critical role in
   maintaining high levels of reliability, performance, and
   manageability, they also introduce the need for an integrated
   security infrastructure.  Many of the technologies being deployed to
   support smart grid projects, such as smart meters and sensors, can
   increase the vulnerability of the grid to attack.  Top security
   concerns for utilities migrating to an intelligent smart grid
   telecommunications platform center on the following trends:

   o  Integration of distributed energy resources

   o  Proliferation of digital devices to enable management,
      automation, protection, and control

   o  Regulatory mandates to comply with standards for critical
      infrastructure protection

   o  Migration to new systems for outage management, distribution
      automation, condition-based maintenance, load forecasting, and
      smart metering

   o  Demand for new levels of customer service and energy management

   The development of a diverse set of networks to support the
   integration of microgrids, open-access energy competition, and the
   use of network-controlled devices is driving the need for a
   converged security infrastructure for all participants in the smart
   grid, including utilities, energy service providers, and large
   commercial and industrial as well as residential customers.
   Securing the assets of electric power delivery systems, from the
   control center to the substation, to the feeders, and down to the
   customer meters, requires an end-to-end security infrastructure that
   protects the myriad of telecommunications assets used to operate,
   monitor, and control power flow and measurement.  Cyber security
   refers to all the security issues in automation and
   telecommunications that affect any function related to the operation
   of the electric power systems.  Specifically, it involves the
   concepts of:

   o  Integrity: data cannot be altered undetectably

   o  Authenticity: the telecommunications parties involved must be
      validated as genuine

   o  Authorization: only requests and commands from authorized users
      can be accepted by the system

   o  Confidentiality: data must not be accessible to any
      unauthenticated user

   When designing and deploying new smart grid devices and
   telecommunications systems, it is imperative to understand the
   impact of these new components on the power grid under a variety of
   attack situations.  The consequences of a cyber attack on the grid
   telecommunications network can be catastrophic.  This is why
   security for the smart grid is not just an ad hoc feature or
   product; it is a complete framework integrating both physical and
   cyber security requirements and covering the entire smart grid
   network from generation to distribution.  Security has therefore
   become one of the main foundations of the utility telecom network
   architecture and must be considered at every layer with a
   defense-in-depth approach.  Migrating to IP-based protocols is key
   to addressing these challenges for two reasons:

   1.  IP enables a rich set of features and capabilities to enhance
       the security posture

   2.  IP is based on open standards, which allows interoperability
       between different vendors and products, driving down the costs
       associated with implementing security solutions in OT networks.

   Securing OT (Operational Technology) telecommunications over
   packet-switched IP networks follows the same principles that are
   foundational for securing the IT infrastructure, i.e., consideration
   must be given to enforcing electronic access control for both
   person-to-machine and machine-to-machine communications, and to
   providing the appropriate levels of data privacy, device and
   platform integrity, and threat detection and mitigation.

3.5.  Acknowledgements

   Faramarz Maghsoodlou, Ph. D., IoT Connected Industries and Energy
   Practice, Cisco

   Pascal Thubert, CTAO, Cisco

4.  Building Automation Systems Use Cases

4.1.  Introduction

   A Building Automation System (BAS) manages equipment and sensors in
   a building (e.g., for heating, cooling, and ventilation) in order to
   improve residents' comfort, reduce energy consumption, and respond
   automatically to failures and emergencies.  For example, a BAS
   measures the temperature of a room using various sensors and then
   automatically controls the HVAC (Heating, Ventilating, and Air
   Conditioning) system to maintain the temperature level and minimize
   the energy consumption.

   There are typically two layers of network in a BAS.  The upper layer
   is called the management network and the lower layer is called the
   field network.
   In the management network an IP-based communication protocol is
   used, while in the field network non-IP-based communication
   protocols (a.k.a. field protocols) are mainly used.

   Many field protocols are used in today's deployments; some of their
   medium access control and physical layer protocols are
   standards-based, while others are proprietary.  The BAS therefore
   needs multiple MAC/PHY modules and interfaces to make use of devices
   based on these different field protocols.  This situation not only
   makes the BAS more expensive, with a long development cycle across
   multiple devices, but also creates vendor lock-in with multiple
   types of management applications.

   Another issue with some of the existing field networks and protocols
   is security.  When these protocols and networks were developed, it
   was assumed that the field networks would be physically isolated
   from external networks, and therefore network and protocol security
   was not a concern.  In today's world, however, many BASes are
   managed remotely and are connected to shared IP networks, and it is
   not uncommon for the same IT infrastructure to be used, be it in
   office, home, or enterprise networks.  Adding network and protocol
   security to an existing system is a non-trivial task.

   This document first describes the BAS functionality, its
   architecture, and current deployment models.  It then discusses the
   use cases and the field network requirements that need to be
   satisfied by deterministic networking.

4.2.  BAS Functionality

   A Building Automation System (BAS) manages various devices in
   buildings automatically.  A BAS primarily performs the following
   functions:

   o  Measures the states of devices at a regular interval, for
      example the temperature, humidity, or illuminance of rooms, the
      on/off state of room lights, the open/closed state of doors, fan
      speed, valve position, the running mode of the HVAC, and its
      power consumption.

   o  Stores the measured data in a database (the database keeps the
      data for several years).

   o  Provides the measured data to BAS operators for visualization.

   o  Generates alarms for abnormal device states (e.g., calling an
      operator's cellular phone, sending an e-mail to operators, and
      so on).

   o  Controls devices on demand.

   o  Controls devices according to a pre-defined operation schedule
      (e.g., turning off the room lights at 10:00 PM).

4.3.  BAS Architecture

   A typical BAS architecture is shown in Figure 1.  There are several
   elements in a BAS.

        +----------------------------+
        |                            |
        | BMS               HMI      |
        |  |                 |       |
        |  +----------------------+  |
        |  |  Management Network  |  |
        |  +----------------------+  |
        |  |                 |       |
        |  LC                LC      |
        |  |                 |       |
        |  +----------------------+  |
        |  |     Field Network    |  |
        |  +----------------------+  |
        |  |      |       |      |   |
        | Dev    Dev     Dev    Dev  |
        |                            |
        +----------------------------+

        BMS := Building Management Server
        HMI := Human Machine Interface
        LC  := Local Controller

                 Figure 1: BAS architecture

   Human Machine Interface (HMI): This is commonly a computing platform
   (e.g., a desktop PC) used by operators.  Operators perform the
   following operations through the HMI.

   o  Monitoring devices: The HMI displays measured device states, for
      example the latest device states, a history chart of states, or a
      popup window with an alert message.
Typically, the measured device 1690 states are stored in BMS (Building Management Server). 1692 o Controlling devices: HMI provides ability to control the devices. 1693 For example, turn on a room light, set a target temperature to 1694 HVAC. Several parameters (a target device, a control value, 1695 etc.), can be set by the operators which then HMI sends to a LC 1696 (Local Controller) via the management network. 1698 o Configuring an operational schedule: HMI provides scheduling 1699 capability through which operational schedule is defined. For 1700 example, schedule includes 1) a time to control, 2) a target 1701 device to control, and 3) a control value. A specific operational 1702 example could be turn off all room lights in the building at 10:00 1703 PM. This schedule is typically stored in BMS. 1705 Building Management Server (BMS) collects device states from LCs 1706 (Local Controllers) and stores it into a database. According to its 1707 configuration, BMS executes the following operation automatically. 1709 o BMS collects device states from LCs in a regular interval and then 1710 stores the information into a database. 1712 o BMS sends control values to LCs according to a pre-configured 1713 schedule. 1715 o BMS sends an alarm signal to operators if it detects abnormal 1716 devices states. For example, turning on a red lamp, calling 1717 operators' cellular phone, sending an e-mail to operators. 1719 BMS and HMI communicate with Local Controllers (LCs) via IP-based 1720 communication protocol standardized by BACnet/IP [bacnetip], KNX/IP 1721 [knx]. These protocols are commonly called as management protocols. 1722 LCs measure device states and provide the information to BMS or HMI. 1723 These devices may include HVAC, FAN, doors, valves, lights, sensors 1724 (e.g., temperature, humidity, and illuminance). LC can also set 1725 control values to the devices. LC sometimes has additional 1726 functions, for example, sending a device state to BMS or HMI if the 1727 device state exceeds a certain threshold value, feedback control to a 1728 device to keep the device state at a certain state. Typical example 1729 of LC is a PLC (Programmable Logic Controller). 1731 Each LC is connected with a different field network and communicates 1732 with several tens or hundreds of devices via the field network. 1733 Today there are many field protocols used in the field network. 1734 Based on the type of field protocol used, LC interfaces and its 1735 hardware/software could be different. Field protocols are currently 1736 non-IP based in which some of them are standards-based (e.g., LonTalk 1737 [lontalk], Modbus [modbus], Profibus [profibus], FL-net [flnet],) and 1738 others are proprietary. 1740 4.4. Deployment Model 1742 An example BAS system deployment model for medium and large buildings 1743 is depicted in Figure 2 below. In this case the physical layout of 1744 the entire system spans across multiple floors in which there is 1745 normally a monitoring room where the BAS management entities are 1746 located. Each floor will have one or more LCs depending upon the 1747 number of devices connected to the field network. 
1749 +--------------------------------------------------+ 1750 | Floor 3 | 1751 | +----LC~~~~+~~~~~+~~~~~+ | 1752 | | | | | | 1753 | | Dev Dev Dev | 1754 | | | 1755 |--- | ------------------------------------------| 1756 | | Floor 2 | 1757 | +----LC~~~~+~~~~~+~~~~~+ Field Network | 1758 | | | | | | 1759 | | Dev Dev Dev | 1760 | | | 1761 |--- | ------------------------------------------| 1762 | | Floor 1 | 1763 | +----LC~~~~+~~~~~+~~~~~+ +-----------------| 1764 | | | | | | Monitoring Room | 1765 | | Dev Dev Dev | | 1766 | | | BMS HMI | 1767 | | Management Network | | | | 1768 | +--------------------------------+-----+ | 1769 | | | 1770 +--------------------------------------------------+ 1772 Figure 2: Deployment model for Medium/Large Buildings 1774 Each LC is then connected to the monitoring room via the management 1775 network. In this scenario, the management functions are performed 1776 locally and reside within the building. In most cases, fast Ethernet 1777 (e.g. 100BASE-TX) is used for the management network. In the field 1778 network, variety of physical interfaces such as RS232C, and RS485 are 1779 used. Since management network is non-real time, Ethernet without 1780 quality of service is sufficient for today's deployment. However, 1781 the requirements are different for field networks when they are 1782 replaced by either Ethernet or any wireless technologies supporting 1783 real time requirements (Section 3.4). 1785 Figure 3 depicts a deployment model in which the management can be 1786 hosted remotely. This deployment is becoming popular for small 1787 office and residential buildings whereby having a standalone 1788 monitoring system is not a cost effective solution. In such 1789 scenario, multiple buildings are managed by a remote management 1790 monitoring system. 1792 +---------------+ 1793 | Remote Center | 1794 | | 1795 | BMS HMI | 1796 +------------------------------------+ | | | | 1797 | Floor 2 | | +---+---+ | 1798 | +----LC~~~~+~~~~~+ Field Network| | | | 1799 | | | | | | Router | 1800 | | Dev Dev | +-------|-------+ 1801 | | | | 1802 |--- | ------------------------------| | 1803 | | Floor 1 | | 1804 | +----LC~~~~+~~~~~+ | | 1805 | | | | | | 1806 | | Dev Dev | | 1807 | | | | 1808 | | Management Network | WAN | 1809 | +------------------------Router-------------+ 1810 | | 1811 +------------------------------------+ 1813 Figure 3: Deployment model for Small Buildings 1815 In either case, interoperability today is only limited to the 1816 management network and its protocols. In existing deployment, there 1817 are limited interoperability opportunity in the field network due to 1818 its nature of non-IP-based design and requirements. 1820 4.5. Use cases and Field Network Requirements 1822 In this section, we describe several use cases and corresponding 1823 network requirements. 1825 4.5.1. Environmental Monitoring 1827 In this use case, LCs measure environmental data (e.g. temperatures, 1828 humidity, illuminance, CO2, etc.) from several sensor devices at each 1829 measurement interval. LCs keep latest value of each sensor. BMS 1830 sends data requests to LCs to collect the latest values, then stores 1831 the collected values into a database. Operators check the latest 1832 environmental data that are displayed by the HMI. BMS also checks 1833 the collected data automatically to notify the operators if a room 1834 condition was going to bad (e.g., too hot or cold). 
   The following table lists the field network requirements for this
   use case; the number of devices in a typical building is on the
   order of hundreds per LC.

          +----------------------+-------------+
          | Metric               | Requirement |
          +----------------------+-------------+
          | Measurement interval | 100 msec    |
          |                      |             |
          | Availability         | 99.999 %    |
          +----------------------+-------------+

    Table 11: Field Network Requirements for Environmental Monitoring

   In some cases the BMS sends data requests every second in order to
   draw a historical chart with a granularity of one second.  A 100
   msec measurement interval is therefore sufficient for this use case,
   because a granularity of about ten times that of the data requests
   is typically considered accurate enough.  An LC needs to measure the
   values of all the sensors connected to it within each measurement
   interval.  The individual communication delays in this scenario are
   not critical; the important requirement is completing the
   measurement of all sensor values within the specified measurement
   interval.  The required availability in this use case is very high
   (99.999 %).

4.5.2.  Fire Detection

   In the case of fire detection, the HMI needs to show a popup window
   with an alert message within a few seconds after an abnormal state
   is detected.  The BMS needs to perform certain operations if it
   detects a fire, for example stopping the HVAC, closing the fire
   shutters, and turning on the fire sprinklers.  The following table
   describes the requirements; the number of devices in a typical
   building is on the order of tens per LC.

          +----------------------+---------------+
          | Metric               | Requirement   |
          +----------------------+---------------+
          | Measurement interval | 10s of msec   |
          |                      |               |
          | Communication delay  | < 10s of msec |
          |                      |               |
          | Availability         | 99.9999 %     |
          +----------------------+---------------+

       Table 12: Field Network Requirements for Fire Detection

   In order to perform the above operations within a few seconds (1 or
   2 seconds) after detecting a fire, LCs should measure sensor values
   at a regular interval of less than 10s of msec.  If an LC detects an
   abnormal sensor value, it immediately sends alarm information to the
   BMS and the HMI.  The BMS then controls the HVAC, the fire shutters,
   or the fire sprinklers, and the HMI displays a popup window with the
   alert message.  Since the management network does not operate in
   real time, and the software running on the BMS or HMI requires 100s
   of msec, the communication delay should be less than ~10s of msec.
   The required availability in this use case is very high (99.9999 %).

4.5.3.  Feedback Control

   Feedback control is used to keep a device state at a certain value,
   for example keeping a room temperature at 27 degrees Celsius or
   keeping a water flow rate at 100 L/min.  The target device state is
   normally pre-defined in the LCs or provided by the BMS or the HMI.

   In the feedback control procedure, an LC repeats the following
   actions at a regular interval (the feedback interval); a simplified
   sketch of such a loop is given after the list.

   1.  The LC measures the state of the target device.

   2.  The LC calculates a control value based on the measured device
       state.

   3.  The LC sends the control value to the target device.

   The feedback interval depends strongly on the characteristics of the
   device and on the target quality of control.
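   The following Python fragment is a minimal sketch of the
   measure/compute/actuate loop described above.  The
   read_temperature() and write_valve() functions are hypothetical
   placeholders for the field-network I/O of a real LC, and the
   setpoint, gains, and feedback interval are example values only.

      # Minimal sketch of an LC feedback loop: measure, compute, actuate
      # at a fixed feedback interval.
      import time

      SETPOINT_C = 27.0            # target room temperature (example)
      FEEDBACK_INTERVAL_S = 0.1    # 100 ms feedback interval (example)
      KP, KI = 2.0, 0.1            # illustrative PI gains

      def read_temperature():
          """Placeholder: would read the sensor over the field network."""
          return 26.5

      def write_valve(position):
          """Placeholder: would send the control value to the actuator."""
          pass

      def feedback_loop(cycles):
          integral = 0.0
          for _ in range(cycles):
              start = time.monotonic()
              error = SETPOINT_C - read_temperature()      # 1. measure
              integral += error * FEEDBACK_INTERVAL_S
              control = KP * error + KI * integral         # 2. compute
              write_valve(max(0.0, min(100.0, control)))   # 3. actuate
              # Wait for the remainder of the feedback interval.
              elapsed = time.monotonic() - start
              time.sleep(max(0.0, FEEDBACK_INTERVAL_S - elapsed))

      feedback_loop(cycles=10)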
   While a feedback interval of several tens of milliseconds is
   sufficient to control a valve that regulates a water flow,
   controlling a DC motor requires an interval of a few milliseconds.
   The following table describes the field network requirements; the
   number of devices in a typical building is on the order of tens per
   LC.

          +----------------------+---------------+
          | Metric               | Requirement   |
          +----------------------+---------------+
          | Feedback interval    | ~10ms - 100ms |
          |                      |               |
          | Communication delay  | < 10s of msec |
          |                      |               |
          | Communication jitter | < 1 msec      |
          |                      |               |
          | Availability         | 99.9999 %     |
          +----------------------+---------------+

       Table 13: Field Network Requirements for Feedback Control

   A small communication delay and low jitter are required in this use
   case in order to provide a high quality of feedback control.  This
   is currently offered in production environments with high
   availability (99.9999 %).

4.6.  Security Considerations

   Both the network security and the physical security of a BAS are
   important.  While physical security is present in today's
   deployments, adequate network security and access control are often
   either not implemented or not configured properly.  This was
   sufficient while these networks were isolated and not connected to
   the IT or other infrastructure networks, but when IT and OT
   (Operational Technology) are connected in the same infrastructure
   network, network security is essential.  The management network,
   being an IP-based network, does have the protocols and knobs to
   enable network security, but in many cases a BAS, for example, does
   not use device authentication or encryption for data in transit.
   Moreover, many of today's field networks do not provide any security
   at all.  The following are the high-level security requirements that
   the network should provide:

   o  Authentication between management and field devices (both local
      and remote)

   o  Integrity and data origin authentication of communication data
      between field and management devices

   o  Confidentiality of data when communicated to a remote device

   o  Availability of network data for normal and disaster scenarios

5.  Wireless for Industrial Use Cases

   (This section was derived from draft-thubert-6tisch-4detnet-01)

5.1.  Introduction

   The emergence of wireless technology has enabled a variety of new
   devices to get interconnected, at a very low marginal cost per
   device, at any distance ranging from Near Field to interplanetary,
   and in circumstances where wiring may not be practical, for instance
   on fast-moving or rotating devices.

   At the same time, a new breed of Time Sensitive Networks is being
   developed to enable traffic that is highly sensitive to jitter,
   quite sensitive to latency, and with a high degree of operational
   criticality, so that loss should be minimized at all times.  Such
   traffic is not limited to professional Audio/Video networks, but is
   also found in command and control operations such as industrial
   automation and vehicular sensors and actuators.

   At IEEE802.1, the Audio/Video Task Group [IEEE802.1TSNTG] was
   renamed Time-Sensitive Networking (TSN) Task Group to address
   Deterministic Ethernet.  The Medium Access Control (MAC) of
   IEEE802.15.4 [IEEE802154] has evolved with the new Timeslotted
   Channel Hopping (TSCH) [RFC7554] mode for deterministic
   industrial-type applications.
TSCH was introduced with 1985 the IEEE802.15.4e [IEEE802154e] amendment and will be wrapped up in 1986 the next revision of the IEEE802.15.4 standard. For all practical 1987 purpose, this document is expected to be insensitive to the future 1988 versions of the IEEE802.15.4 standard, which is thus referenced 1989 undated. 1991 Though at a different time scale, both TSN and TSCH standards provide 1992 Deterministic capabilities to the point that a packet that pertains 1993 to a certain flow crosses the network from node to node following a 1994 very precise schedule, as a train that leaves intermediate stations 1995 at precise times along its path. With TSCH, time is formatted into 1996 timeSlots, and an individual cell is allocated to unicast or 1997 broadcast communication at the MAC level. The time-slotted operation 1998 reduces collisions, saves energy, and enables to more closely 1999 engineer the network for deterministic properties. The channel 2000 hopping aspect is a simple and efficient technique to combat multi- 2001 path fading and co-channel interferences (for example by Wi-Fi 2002 emitters). 2004 The 6TiSCH Architecture [I-D.ietf-6tisch-architecture] defines a 2005 remote monitoring and scheduling management of a TSCH network by a 2006 Path Computation Element (PCE), which cooperates with an abstract 2007 Network Management Entity (NME) to manage timeSlots and device 2008 resources in a manner that minimizes the interaction with and the 2009 load placed on the constrained devices. 2011 This Architecture applies the concepts of Deterministic Networking on 2012 a TSCH network to enable the switching of timeSlots in a G-MPLS 2013 manner. This document details the dependencies that 6TiSCH has on 2014 PCE [PCE] and DetNet [I-D.finn-detnet-architecture] to provide the 2015 necessary capabilities that may be specific to such networks. In 2016 turn, DetNet is expected to integrate and maintain consistency with 2017 the work that has taken place and is continuing at IEEE802.1TSN and 2018 AVnu. 2020 5.2. Terminology 2022 Readers are expected to be familiar with all the terms and concepts 2023 that are discussed in "Multi-link Subnet Support in IPv6" 2024 [I-D.ietf-ipv6-multilink-subnets]. 2026 The draft uses terminology defined or referenced in 2027 [I-D.ietf-6tisch-terminology] and 2028 [I-D.ietf-roll-rpl-industrial-applicability]. 2030 The draft also conforms to the terms and models described in 2031 [RFC3444] and uses the vocabulary and the concepts defined in 2032 [RFC4291] for the IPv6 Architecture. 2034 5.3. 6TiSCH Overview 2036 The scope of the present work is a subnet that, in its basic 2037 configuration, is made of a TSCH [RFC7554] MAC Low Power Lossy 2038 Network (LLN). 2040 ---+-------- ............ ------------ 2041 | External Network | 2042 | +-----+ 2043 +-----+ | NME | 2044 | | LLN Border | | 2045 | | router +-----+ 2046 +-----+ 2047 o o o 2048 o o o o 2049 o o LLN o o o 2050 o o o o 2051 o 2053 Figure 4: Basic Configuration of a 6TiSCH Network 2055 In the extended configuration, a Backbone Router (6BBR) federates 2056 multiple 6TiSCH in a single subnet over a backbone. 6TiSCH 6BBRs 2057 synchronize with one another over the backbone, so as to ensure that 2058 the multiple LLNs that form the IPv6 subnet stay tightly 2059 synchronized. 2061 ---+-------- ............ 
------------ 2062 | External Network | 2063 | +-----+ 2064 | +-----+ | NME | 2065 +-----+ | +-----+ | | 2066 | | Router | | PCE | +-----+ 2067 | | +--| | 2068 +-----+ +-----+ 2069 | | 2070 | Subnet Backbone | 2071 +--------------------+------------------+ 2072 | | | 2073 +-----+ +-----+ +-----+ 2074 | | Backbone | | Backbone | | Backbone 2075 o | | router | | router | | router 2076 +-----+ +-----+ +-----+ 2077 o o o o o 2078 o o o o o o o o o o o 2079 o o o LLN o o o o 2080 o o o o o o o o o o o o 2082 Figure 5: Extended Configuration of a 6TiSCH Network 2084 If the Backbone is Deterministic, then the Backbone Router ensures 2085 that the end-to-end deterministic behavior is maintained between the 2086 LLN and the backbone. This SHOULD be done in conformance to the 2087 DetNet Architecture [I-D.finn-detnet-architecture] which studies 2088 Layer-3 aspects of Deterministic Networks, and covers networks that 2089 span multiple Layer-2 domains. One particular requirement is that 2090 the PCE MUST be able to compute a deterministic path and to end 2091 across the TSCH network and an IEEE802.1 TSN Ethernet backbone, and 2092 DetNet MUST enable end-to-end deterministic forwarding. 2094 6TiSCH defines the concept of a Track, which is a complex form of a 2095 uni-directional Circuit ([I-D.ietf-6tisch-terminology]). As opposed 2096 to a simple circuit that is a sequence of nodes and links, a Track is 2097 shaped as a directed acyclic graph towards a destination to support 2098 multi-path forwarding and route around failures. A Track may also 2099 branch off and rejoin, for the purpose of the so-called Packet 2100 Replication and Elimination (PRE), over non congruent branches. PRE 2101 may be used to complement layer-2 Automatic Repeat reQuest (ARQ) to 2102 meet industrial expectations in Packet Delivery Ratio (PDR), in 2103 particular when the Track extends beyond the 6TiSCH network. 2105 +-----+ 2106 | IoT | 2107 | G/W | 2108 +-----+ 2109 ^ <---- Elimination 2110 | | 2111 Track branch | | 2112 +-------+ +--------+ Subnet Backbone 2113 | | 2114 +--|--+ +--|--+ 2115 | | | Backbone | | | Backbone 2116 o | | | router | | | router 2117 +--/--+ +--|--+ 2118 o / o o---o----/ o 2119 o o---o--/ o o o o o 2120 o \ / o o LLN o 2121 o v <---- Replication 2122 o 2124 Figure 6: End-to-End deterministic Track 2126 In the example above, a Track is laid out from a field device in a 2127 6TiSCH network to an IoT gateway that is located on a IEEE802.1 TSN 2128 backbone. 2130 The Replication function in the field device sends a copy of each 2131 packet over two different branches, and the PCE schedules each hop of 2132 both branches so that the two copies arrive in due time at the 2133 gateway. In case of a loss on one branch, hopefully the other copy 2134 of the packet still makes it in due time. If two copies make it to 2135 the IoT gateway, the Elimination function in the gateway ignores the 2136 extra packet and presents only one copy to upper layers. 2138 At each 6TiSCH hop along the Track, the PCE may schedule more than 2139 one timeSlot for a packet, so as to support Layer-2 retries (ARQ). 2140 It is also possible that the field device only uses the second branch 2141 if sending over the first branch fails. 2143 In current deployments, a TSCH Track does not necessarily support PRE 2144 but is systematically multi-path. 
   This means that a Track is scheduled so as to ensure that each hop
   has at least two forwarding solutions, and the forwarding decision
   is to try the preferred one and to use the other in case of Layer-2
   transmission failure as detected by ARQ.

5.3.1.  TSCH and 6top

   6top is a logical link control sitting between the IP layer and the
   TSCH MAC layer, which provides the link abstraction that is required
   for IP operations.  The 6top operations are specified in
   [I-D.wang-6tisch-6top-sublayer].

   The 6top data model and management interfaces are further discussed
   in [I-D.ietf-6tisch-6top-interface] and [I-D.ietf-6tisch-coap].

   The architecture defines "soft" cells and "hard" cells.  "Hard"
   cells are owned and managed by a separate scheduling entity (e.g., a
   PCE) that specifies the slotOffset/channelOffset of the cells to be
   added/moved/deleted, in which case 6top can only act as instructed,
   and may not move hard cells in the TSCH schedule on its own.

5.3.2.  SlotFrames and Priorities

   A slotFrame is the base object that the PCE needs to manipulate to
   program a schedule into an LLN node.  Elaboration on that concept
   can be found in the section "SlotFrames and Priorities" of the
   6TiSCH architecture [I-D.ietf-6tisch-architecture].  The
   architecture also details how the schedule is constructed and how
   transmission resources called cells can be allocated to particular
   transmissions so as to avoid collisions.

5.3.3.  Schedule Management by a PCE

   6TiSCH supports a mixed model of centralized routes and distributed
   routes.  Centralized routes can, for example, be computed by an
   entity such as a PCE.  Distributed routes are computed by RPL.

   Both methods may inject routes in the Routing Tables of the 6TiSCH
   routers.  In either case, each route is associated with a 6TiSCH
   topology that can be a RPL Instance topology or a Track.  The 6TiSCH
   topology is indexed by an Instance ID, in a format that reuses the
   RPLInstanceID as defined in RPL [RFC6550].

   Both RPL and PCE rely on shared sources such as policies to define
   Global and Local RPLInstanceIDs that can be used by either method.
   It is possible for centralized and distributed routing to share the
   same topology.  Generally they will operate in different slotFrames,
   and centralized routes will be used for scheduled traffic and will
   have precedence over distributed routes in case of conflict between
   the slotFrames.

   The section "Schedule Management Mechanisms" of the 6TiSCH
   architecture describes four paradigms to manage the TSCH schedule of
   the LLN nodes: Static Scheduling, Neighbor-to-Neighbor Scheduling,
   Remote Monitoring and Scheduling Management, and Hop-by-Hop
   Scheduling.  The Track operation for DetNet corresponds to remote
   monitoring and scheduling management by a PCE.

   The 6top interface document [I-D.ietf-6tisch-6top-interface]
   specifies the generic data model that can be used to monitor and
   manage resources of the 6top sublayer.  Abstract methods are
   suggested for use by a management entity in the device.  The data
   model also enables remote control operations on the 6top sublayer.

   [I-D.ietf-6tisch-coap] defines a mapping of the 6top set of
   commands, which is described in [I-D.ietf-6tisch-6top-interface], to
   CoAP resources.  This allows an entity to interact with the 6top
   layer of a node that is multiple hops away in a RESTful fashion; a
   hypothetical example of such an interaction is sketched below.
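   As an illustration of such a RESTful interaction, the hypothetical
   Python sketch below (using the aiocoap and cbor2 packages) shows a
   management entity sending a CBOR-encoded cell description to a node
   over CoAP.  The resource path and the payload keys are invented for
   the example; the actual resources and data model are those defined
   in [I-D.ietf-6tisch-coap] and [I-D.ietf-6tisch-6top-interface].

      # Hypothetical sketch: POST a CBOR-encoded cell description to a
      # node over CoAP.  Resource path and payload keys are invented.
      import asyncio
      import cbor2
      from aiocoap import Context, Message, POST

      async def install_cell(node_ip, slot_offset, channel_offset, track_id):
          payload = cbor2.dumps({
              "slotOffset": slot_offset,       # illustrative field names
              "channelOffset": channel_offset,
              "trackId": track_id,
              "linkOption": "TX",
          })
          ctx = await Context.create_client_context()
          request = Message(code=POST,
                            uri="coap://[{}]/6top/cells".format(node_ip),
                            payload=payload)
          response = await ctx.request(request).response
          body = cbor2.loads(response.payload) if response.payload else None
          return response.code, body

      # Example: ask a node several hops away in the LLN to add a cell.
      # asyncio.run(install_cell("2001:db8::1", slot_offset=5,
      #                          channel_offset=3, track_id=1))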
   [I-D.ietf-6tisch-coap] also defines a basic set of CoAP resources
   and associated RESTful access methods (GET/PUT/POST/DELETE).  The
   payload (body) of the CoAP messages is encoded using the CBOR
   format.  The PCE commands are expected to be issued directly as CoAP
   requests or to be mapped back and forth into CoAP by a gateway
   function at the edge of the 6TiSCH network.  For instance, it is
   possible that a mapping entity on the backbone transforms a non-CoAP
   protocol such as PCEP into the RESTful interfaces that the 6TiSCH
   devices support.  This architecture will be refined to comply with
   DetNet [I-D.finn-detnet-architecture] when the work is formalized.

5.3.4.  Track Forwarding

   By forwarding, this specification means the per-packet operation
   that delivers a packet to a next hop or to an upper layer in this
   node.  Forwarding is based on pre-existing state that was installed
   as a result of the routing computation of a Track by a PCE.  The
   6TiSCH architecture supports three different forwarding models:
   G-MPLS Track Forwarding (TF), 6LoWPAN Fragment Forwarding (FF), and
   IPv6 Forwarding (6F), which is the classical IP operation.  The
   DetNet case relates to the Track Forwarding operation under the
   control of a PCE.

   A Track is a unidirectional path between a source and a destination.
   In a Track cell, the normal operation of IEEE802.15.4 Automatic
   Repeat-reQuest (ARQ) usually happens, though the acknowledgment may
   be omitted in some cases, for instance if there is no scheduled cell
   for a retry.

   Track Forwarding is the simplest and fastest model.  A bundle of
   cells set to receive (RX-cells) is uniquely paired to a bundle of
   cells that are set to transmit (TX-cells), representing a Layer-2
   forwarding state that can be used regardless of the network layer
   protocol.  This model can effectively be seen as a Generalized
   Multi-protocol Label Switching (G-MPLS) operation in that the
   information used to switch a frame is not an explicit label, but
   rather related to other properties of the way the packet was
   received, a particular cell in the case of 6TiSCH.  As a result, as
   long as the TSCH MAC (and Layer-2 security) accepts a frame, that
   frame can be switched regardless of the protocol, whether this is an
   IPv6 packet, a 6LoWPAN fragment, or a frame from an alternate
   protocol such as WirelessHART or ISA100.11a.

   A data frame that is forwarded along a Track normally has a
   destination MAC address that is set to broadcast - or a multicast
   address depending on MAC support.  This way, the MAC layer in the
   intermediate nodes accepts the incoming frame and 6top switches it
   without incurring a change in the MAC header.  In the case of
   IEEE802.15.4, this means effectively broadcast, so that along the
   Track the short address for the destination of the frame is set to
   0xFFFF.

   A Track is thus formed end-to-end as a succession of paired bundles,
   a receive bundle from the previous hop and a transmit bundle to the
   next hop along the Track, and a cell in such a bundle belongs to at
   most one Track.  For a given iteration of the device schedule, the
   effective channel of the cell is obtained by adding a pseudo-random
   number to the channelOffset of the cell, which results in a rotation
   of the frequency that is used for transmission; a minimal sketch of
   this computation is given below.
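   The sketch below assumes the channel-hopping computation commonly
   used with TSCH, in which the Absolute Slot Number (ASN) provides the
   pseudo-random rotation of the cell's channelOffset; the hopping
   sequence shown (IEEE 802.15.4 channels 11 to 26) is only an example,
   and a deployment may use a different one.

      # Per-slot channel rotation of a TSCH cell (illustrative).
      HOPPING_SEQUENCE = list(range(11, 27))   # 16 channels, 2.4 GHz band

      def effective_channel(asn, channel_offset):
          """Physical channel used by a cell at absolute slot number asn."""
          return HOPPING_SEQUENCE[(asn + channel_offset)
                                  % len(HOPPING_SEQUENCE)]

      # The same cell (channelOffset = 3) rotates over the band as the
      # ASN increases from one iteration of the schedule to the next.
      for asn in (1000, 1101, 1202):
          print(asn, effective_channel(asn, channel_offset=3))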
   The bundles may be computed so as to accommodate both variable rates
   and retransmissions, so they might not be fully used at a given
   iteration of the schedule.  The 6TiSCH architecture provides
   additional means to avoid waste of cells as well as overflows in the
   transmit bundle, as follows:

   On the one hand, a TX-cell that is not needed for the current
   iteration may be reused opportunistically on a per-hop basis for
   routed packets.  When all of the frames that were received for a
   given Track have effectively been transmitted, any available TX-cell
   for that Track can be reused for upper-layer traffic for which the
   next-hop router matches the next hop along the Track.  In that case,
   the cell that is being used is effectively a TX-cell from the Track,
   but the short address for the destination is that of the next-hop
   router.  As a result, a frame that is received in an RX-cell of a
   Track with a destination MAC address set to this node, as opposed to
   broadcast, must be extracted from the Track and delivered to the
   upper layer (a frame with an unrecognized MAC address is dropped at
   the lower MAC layer and thus is not received at the 6top sublayer).

   On the other hand, it might happen that there are not enough
   TX-cells in the transmit bundle to accommodate the Track traffic,
   for instance if more retransmissions are needed than provisioned.
   In that case, the frame can be placed for transmission in the bundle
   that is used for Layer-3 traffic towards the next hop along the
   Track, as long as it can be routed by the upper layer, that is,
   typically, if the frame transports an IPv6 packet.  The MAC address
   should be set to the next-hop MAC address to avoid confusion.  As a
   result, a frame that is received over a Layer-3 bundle may in fact
   be associated with a Track.  On a classical IP link such as
   Ethernet, traffic in excess of a reservation is simply routed along
   the non-reserved path based on its QoS setting.  But with 6TiSCH,
   since the use of the Layer-3 bundle may be due to transmission
   failures, it makes sense for the receiver to recognize a frame that
   should be re-tracked, and to place it back on the appropriate bundle
   if possible.  A frame should be re-tracked if the Per-Hop-Behavior
   group indicated in the Differentiated Services Field of the IPv6
   header is set to Deterministic Forwarding, as discussed in
   Section 5.4.1.  A frame is re-tracked by scheduling it for
   transmission over the transmit bundle associated with the Track,
   with the destination MAC address set to broadcast.

   There are two modes for a Track: transport mode and tunnel mode.

5.3.4.1.  Transport Mode

   In transport mode, the Protocol Data Unit (PDU) is associated with
   flow-dependent metadata that refers uniquely to the Track, so the
   6top sublayer can place the frame in the appropriate cell without
   ambiguity.  In the case of IPv6 traffic, this flow identification is
   transported in the Flow Label of the IPv6 header.
Associated with 2323 the source IPv6 address, the Flow Label forms a globally unique 2324 identifier for that particular Track that is validated at egress 2325 before restoring the destination MAC address (DMAC) and punting to 2326 the upper layer. 2328 | ^ 2329 +--------------+ | | 2330 | IPv6 | | | 2331 +--------------+ | | 2332 | 6LoWPAN HC | | | 2333 +--------------+ ingress egress 2334 | 6top | sets +----+ +----+ restores 2335 +--------------+ dmac to | | | | dmac to 2336 | TSCH MAC | brdcst | | | | self 2337 +--------------+ | | | | | | 2338 | LLN PHY | +-------+ +--...-----+ +-------+ 2339 +--------------+ 2341 Track Forwarding, Transport Mode 2343 5.3.4.2. Tunnel Mode 2345 In tunnel mode, the frames originate from an arbitrary protocol over 2346 a compatible MAC that may or may not be synchronized with the 6TiSCH 2347 network. An example of this would be a router with a dual radio that 2348 is capable of receiving and sending WirelessHART or ISA100.11a frames 2349 with the second radio, by presenting itself as an access Point or a 2350 Backbone Router, respectively. 2352 In that mode, some entity (e.g. PCE) can coordinate with a 2353 WirelessHART Network Manager or an ISA100.11a System Manager to 2354 specify the flows that are to be transported transparently over the 2355 Track. 2357 +--------------+ 2358 | IPv6 | 2359 +--------------+ 2360 | 6LoWPAN HC | 2361 +--------------+ set restore 2362 | 6top | +dmac+ +dmac+ 2363 +--------------+ to|brdcst to|nexthop 2364 | TSCH MAC | | | | | 2365 +--------------+ | | | | 2366 | LLN PHY | +-------+ +--...-----+ +-------+ 2367 +--------------+ | ingress egress | 2368 | | 2369 +--------------+ | | 2370 | LLN PHY | | | 2371 +--------------+ | | 2372 | TSCH MAC | | | 2373 +--------------+ | dmac = | dmac = 2374 |ISA100/WiHART | | nexthop v nexthop 2375 +--------------+ 2377 Figure 7: Track Forwarding, Tunnel Mode 2379 In that case, the flow information that identifies the Track at the 2380 ingress 6TiSCH router is derived from the RX-cell. The dmac is set 2381 to this node but the flow information indicates that the frame must 2382 be tunneled over a particular Track so the frame is not passed to the 2383 upper layer. Instead, the dmac is forced to broadcast and the frame 2384 is passed to the 6top sublayer for switching. 2386 At the egress 6TiSCH router, the reverse operation occurs. Based on 2387 metadata associated to the Track, the frame is passed to the 2388 appropriate link layer with the destination MAC restored. 2390 5.3.4.3. Tunnel Metadata 2392 Metadata coming with the Track configuration is expected to provide 2393 the destination MAC address of the egress endpoint as well as the 2394 tunnel mode and specific data depending on the mode, for instance a 2395 service access point for frame delivery at egress. If the tunnel 2396 egress point does not have a MAC address that matches the 2397 configuration, the Track installation fails. 2399 In transport mode, if the final layer-3 destination is the tunnel 2400 termination, then it is possible that the IPv6 address of the 2401 destination is compressed at the 6LoWPAN sublayer based on the MAC 2402 address. It is thus mandatory at the ingress point to validate that 2403 the MAC address that was used at the 6LoWPAN sublayer for compression 2404 matches that of the tunnel egress point. For that reason, the node 2405 that injects a packet on a Track checks that the destination is 2406 effectively that of the tunnel egress point before it overwrites it 2407 to broadcast. 
   The 6top sublayer at the tunnel egress point reverts that operation
   to the MAC address obtained from the tunnel metadata.

5.4.  Operations of Interest for DetNet and PCE

   In a classical system, the 6TiSCH device does not place the request
   for bandwidth between itself and another device in the network.
   Rather, an Operation Control System invoked through a Human/Machine
   Interface (HMI) indicates the Traffic Specification, in particular in
   terms of latency and reliability, and the end nodes. With this, the
   PCE must compute a Track between the end nodes and provision the
   network with per-flow state that describes the per-hop operation for
   a given packet, the corresponding timeSlots, and the flow
   identification that makes it possible to recognize when a certain
   packet belongs to a certain Track, to sort out duplicates, etc.

   For a static configuration that serves a certain purpose for a long
   period of time, it is expected that a node will be provisioned in one
   shot with a full schedule, which incorporates the aggregation of its
   behavior for multiple Tracks. 6TiSCH expects that the programming of
   the schedule will be done over CoAP as discussed in 6TiSCH Resource
   Management and Interaction using CoAP [I-D.ietf-6tisch-coap].

   But a Hybrid mode may be required as well, whereby a single Track is
   added, modified, or removed, for instance if it appears that a Track
   does not perform as expected for, say, Packet Delivery Ratio (PDR).
   For that case, the expectation is that a protocol that flows along a
   Track (to be), in a fashion similar to classical Traffic Engineering
   (TE) [CCAMP], may be used to update the state in the devices. 6TiSCH
   provides means for a device to negotiate a timeSlot with a neighbor,
   but in general that flow was not designed, no protocol was selected,
   and it is expected that DetNet will determine the appropriate
   end-to-end protocols to be used in that case.

                      Operational System and HMI

    -+-+-+-+-+-+-+ Northbound -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-

                 PCE          PCE          PCE          PCE

    -+-+-+-+-+-+-+ Southbound -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-

           --- 6TiSCH------6TiSCH------6TiSCH------6TiSCH--
    6TiSCH /   Device      Device      Device      Device  \
    Device-                                                 - 6TiSCH
           \   6TiSCH      6TiSCH      6TiSCH      6TiSCH  /  Device
            ----Device------Device------Device------Device--

                  Figure 8: Stream Management Entity

5.4.1.  Packet Marking and Handling

   Section "Packet Marking and Handling" of
   [I-D.ietf-6tisch-architecture] describes the packet tagging and
   marking that is expected in 6TiSCH networks.

5.4.1.1.  Tagging Packets for Flow Identification

   For packets that are routed by a PCE along a Track, the tuple formed
   by the IPv6 source address and a local RPLInstanceID is tagged in the
   packets to uniquely identify the Track and the associated transmit
   bundle of timeSlots.

   As a result, the tagging that is used for a DetNet flow outside the
   6TiSCH LLN MUST be swapped into 6TiSCH formats and back as the packet
   enters and then leaves the 6TiSCH network.

   Note: The method and format used for encoding the RPLInstanceID at
   6lo is generalized to all 6TiSCH topological Instances, which
   includes Tracks.

5.4.1.2.  Replication, Retries and Elimination

   6TiSCH expects elimination and replication of packets along a complex
   Track, but takes no position on how the sequence numbers would be
   tagged in the packet.
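   The grouping semantics detailed in the following paragraphs can be
   illustrated with a minimal sketch; the structures below are
   hypothetical and only model the 'OR'/'AND' relations between
   timeSlots, not an actual 6top implementation.

   # Sketch of the 'OR'/'AND' timeSlot grouping described below
   # (hypothetical structures, not an actual 6top implementation).
   # Within an 'OR' group, the first successful operation makes the
   # remaining timeSlots of that group unnecessary; the 'AND' relation
   # across groups requires a success in every group (replication).

   def transmit_along_track(or_groups, try_slot):
       """or_groups: one 'OR' group (a list of timeSlots) per branch of
       the Track departing from this node; try_slot(slot) returns True
       if the transmission in that timeSlot was acknowledged."""
       all_branches_ok = True
       for group in or_groups:          # 'AND' relation across branches
           for slot in group:           # 'OR' relation within a branch
               if try_slot(slot):
                   break                # skip remaining retries in group
           else:
               all_branches_ok = False  # no timeSlot of this branch worked
       return all_branches_ok

   On the receive side, as described below, the same structure
   degenerates to a single 'OR' group: the first successful reception
   suffices.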
   In practice, 6TiSCH expects that timeSlots corresponding to copies of
   a same packet along a Track are correlated by configuration, and does
   not need to process the sequence numbers.

   The semantics of the configuration MUST enable correlated timeSlots
   to be grouped for transmit (and respectively receive) with an 'OR'
   relation, and then an 'AND' relation MUST be configurable between
   groups. The semantics is that if the transmit (and respectively
   receive) operation succeeded in one timeSlot in an 'OR' group, then
   all the other timeSlots in the group are ignored. Now, if there are
   at least two groups, the 'AND' relation between the groups indicates
   that one operation must succeed in each of the groups.

   On the transmit side, timeSlots provisioned for retries along a same
   branch of a Track are placed in the same 'OR' group. The 'OR'
   relation indicates that if a transmission is acknowledged, then
   further transmissions SHOULD NOT be attempted for timeSlots in that
   group. There are as many 'OR' groups as there are branches of the
   Track departing from this node. Different 'OR' groups are programmed
   for the purpose of replication, each group corresponding to one
   branch of the Track. The 'AND' relation between the groups indicates
   that transmission over any of the branches MUST be attempted
   regardless of whether a transmission succeeded in another branch. It
   is also possible to place cells to different next-hop routers in the
   same 'OR' group. This makes it possible to route along multi-path
   Tracks, trying one next-hop and then another only if sending to the
   first fails.

   On the receive side, all timeSlots are programmed in the same 'OR'
   group. Retries of the same copy, as well as converging branches for
   elimination, are thus merged, meaning that the first successful
   reception is enough and that all the other timeSlots can be ignored.

5.4.1.3.  Differentiated Services Per-Hop-Behavior

   Additionally, an IP packet that is sent along a Track uses the
   Differentiated Services Per-Hop-Behavior Group called Deterministic
   Forwarding, as described in
   [I-D.svshah-tsvwg-deterministic-forwarding].

5.4.2.  Topology and capabilities

   6TiSCH nodes are usually IoT devices, characterized by a very limited
   amount of memory, just enough buffers to store one or a few IPv6
   packets, and limited bandwidth between peers. As a result, a node
   will maintain only a small amount of peering information, and will
   not be able to store many packets waiting to be forwarded. Peers can
   be identified through MAC or IPv6 addresses, but a Cryptographically
   Generated Address [RFC3972] (CGA) may also be used.

   Neighbors can be discovered over the radio using mechanisms such as
   beacons, but, though the neighbor information is available in the
   6TiSCH interface data model, 6TiSCH does not describe a protocol to
   proactively push the neighborhood information to a PCE. This protocol
   should be described and should operate over CoAP. The protocol should
   be able to carry multiple metrics, in particular the same metrics as
   used for RPL operations [RFC6551].

   The energy that the device consumes in sleep, transmit and receive
   modes can be evaluated and reported. So can the amount of energy that
   is stored in the device and the power that can be scavenged from the
   environment.
   The PCE SHOULD be able to compute Tracks that will implement policies
   on how the energy is consumed, for instance to balance consumption
   between nodes, to ensure that the spent energy does not exceed the
   scavenged energy over a period of time, etc.

5.5.  Security Considerations

   On top of the classical protection of control signaling that can be
   expected to support DetNet, it must be noted that 6TiSCH networks
   operate on limited resources that can be depleted rapidly if an
   attacker manages to mount a DoS attack on the system, for instance by
   placing a rogue device in the network, or by obtaining management
   control and setting up extra paths.

5.6.  Acknowledgments

   This specification derives from the 6TiSCH architecture, which is the
   result of multiple interactions, in particular during the 6TiSCH
   (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at
   the IETF.

   The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier
   Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael
   Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon,
   Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey,
   Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria
   Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation
   and various contributions.

6.  Cellular Radio Use Cases

   (This section was derived from draft-korhonen-detnet-telreq-00)

6.1.  Introduction and background

   The recent developments in telecommunication networks, especially in
   the cellular domain, are heading towards transport networks where
   precise time synchronization support has to be one of the basic
   building blocks. While the transport networks themselves have
   practically transitioned to all-IP packet-based networks to meet the
   bandwidth and cost requirements, a highly accurate clock distribution
   has become a challenge. Earlier, the transport networks in the
   cellular domain were typically time division multiplexing (TDM) based
   and provided frequency synchronization capabilities as a part of the
   transport media. Alternatively, other technologies such as the Global
   Positioning System (GPS) or Synchronous Ethernet (SyncE) [SyncE] were
   used. New radio access network deployment models and architectures
   may require time-sensitive networking services with strict
   requirements on other parts of the network that previously were not
   considered to be packetized at all. Time and synchronization support
   is already topical for backhaul and midhaul packet networks [MEF],
   and is becoming a real issue for fronthaul networks. Specifically in
   fronthaul networks the timing and synchronization requirements can be
   extreme for packet-based technologies, for example, on the order of
   sub +-20 ns packet delay variation (PDV) and a frequency accuracy of
   +0.002 PPM [Fronthaul].

   Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985]
   for legacy transport support) have become popular tools to build and
   manage new all-IP radio access networks (RAN)
   [I-D.kh-spring-ip-ran-use-case].
   Although various timing and synchronization optimizations have
   already been proposed and implemented, including 1588 PTP
   enhancements
   [I-D.ietf-tictoc-1588overmpls][I-D.mirsky-mpls-residence-time], these
   solutions are not necessarily sufficient for the forthcoming RAN
   architectures, nor do they guarantee the higher time-synchronization
   requirements [CPRI]. There are also existing solutions for TDM over
   IP [RFC5087] [RFC4553] or over Ethernet transports [RFC5086]. The
   really interesting and important existing work for time-sensitive
   networking has been done for Ethernet [TSNTG], which specifies the
   use of the IEEE 1588 Precision Time Protocol (PTP) [IEEE1588] in the
   context of IEEE 802.1D and IEEE 802.1Q. While IEEE 802.1AS
   [IEEE8021AS] specifies a Layer-2 time synchronizing service, other
   specifications, such as IEEE 1722 [IEEE1722], specify Ethernet-based
   Layer-2 transport for time-sensitive streams. New promising work
   seeks to enable the transport of time-sensitive fronthaul streams in
   Ethernet bridged networks [IEEE8021CM]. Similarly to IEEE 1722, there
   is an ongoing standardization effort to define a Layer-2 transport
   encapsulation format for transporting radio over Ethernet (RoE) in
   the IEEE 1904.3 Task Force [IEEE19043].

   As already mentioned, all-IP RANs and various "haul" networks would
   benefit from time synchronization and time-sensitive transport
   services. Although Ethernet appears to be the unifying technology for
   the transport, there is still a disconnect in providing Layer-3
   services. The protocol stack typically has a number of layers below
   the Ethernet Layer-2 that is visible to the Layer-3 IP transport. It
   is not uncommon that on top of the lowest layer (optical) transport
   there is a first layer of Ethernet, followed by one or more layers of
   MPLS, PseudoWires and/or other tunneling protocols, finally carrying
   the Ethernet layer visible to the user plane IP traffic. While there
   are existing technologies, especially in the MPLS/PWE space, to
   establish circuits through routed and switched networks, there is a
   lack of signaling for the time synchronization and time-sensitive
   stream requirements/reservations of Layer-3 flows in a way that
   addresses the entire transport stack, including the Ethernet layers
   that need to be configured. Furthermore, not all "user plane" traffic
   will be IP. Therefore, the same solution also needs to address the
   use cases where the user plane traffic is yet another layer of
   Ethernet frames. There is existing work describing the problem
   statement [I-D.finn-detnet-problem-statement] and the architecture
   [I-D.finn-detnet-architecture] for deterministic networking (DetNet)
   that eventually targets providing solutions for time-sensitive
   (IP/transport) streams with deterministic properties over
   Ethernet-based switched networks.

   This document describes requirements for deterministic networking in
   a cellular telecom transport networks context. The requirements
   include time synchronization, clock distribution and ways of
   establishing time-sensitive streams for both Layer-2 and Layer-3 user
   plane traffic using IETF protocol solutions.
6.2.  Network architecture

   Figure 9 illustrates a typical 3GPP-defined cellular network
   architecture, which also has fronthaul and midhaul network segments.
   The fronthaul refers to the network connecting base stations (base
   band processing units) to the remote radio heads (antennas). The
   midhaul network typically refers to the network inter-connecting base
   stations (or small/pico cells).

   Fronthaul networks build on the available excess time after the base
   band processing of the radio frame has completed. Therefore, the
   available time for networking is actually very limited, which in
   practice determines how far the remote radio heads can be from the
   base band processing units (i.e. base stations). For example, in the
   case of LTE radio, the Hybrid ARQ processing of a radio frame is
   allocated 3 ms. Typically the processing completes much earlier (say,
   up to 400 us, though it could be much less), thus allowing the
   remaining time to be used e.g. for the fronthaul network. A budget of
   200 us equals roughly 40 km of optical fiber based transport
   (assuming a total round trip time of 2*200 us). The base band
   processing time and the available "delay budget" for the fronthaul
   are subject to change, possibly dramatically, in the forthcoming "5G"
   in order to meet, for example, the envisioned reduced radio round
   trip times, and other architectural and service requirements [NGMN].

   The maximum "delay budget" is then consumed by all nodes and the
   required buffering between the remote radio head and the base band
   processing, in addition to the distance-incurred delay. Packet delay
   variation (PDV) is problematic for fronthaul networks and must be
   minimized. If the transport network cannot guarantee low enough PDV,
   additional buffering has to be introduced at the edges of the network
   to buffer out the jitter.
   Any buffering will eat up the total available delay budget, though.
   Section 6.3 will discuss the PDV requirements in more detail.

                   Y (remote radios)
                    \
     Y__  \.--.              .--.      +------+
        \_(    `.  +---+  _(Back`.     | 3GPP |
    Y------( Front  )----|eNB|----(  Haul )----| core |
          (   ` .Haul )  +---+  (   ` . )  )   | netw |
           /`--(___.-'       \  `--(___.-'     +------+
        Y_/     /             \.--.      \
       Y_/                 _(  Mid`.      \
                          (    Haul )      \
                         (   ` .  )  )      \
                          `--(___.-'\_____+---+    (small cells)
                                      \   |SCe|__Y
                                +---+     +---+
                             Y__|eNB|__Y
                                +---+
                              Y_/    \_Y ("local" radios)

     Figure 9: Generic 3GPP-based cellular network architecture with
                        Front/Mid/Backhaul networks

6.3.  Time synchronization requirements

   In cellular networks, starting from long term evolution (LTE)
   [TS36300] [TS23401] radio, phase synchronization is needed in
   addition to frequency synchronization. The commonly referenced
   fronthaul network synchronization requirements are typically drawn
   from the common public radio interface (CPRI) [CPRI] specification,
   which defines the transport protocol between the base band processing
   - radio equipment controller (REC) - and the remote antenna - radio
   equipment (RE). However, the fundamental requirements still originate
   from the respective cellular system and radio specifications such as
   the 3GPP ones [TS25104][TS36104][TS36211][TS36133].

   The fronthaul time synchronization requirements for the current 3GPP
   LTE-based networks are listed below:

   Transport link contribution to radio frequency error:

      +-2 PPB. The given value is considered to be "available" for the
      fronthaul link out of the total 50 PPB budget reserved for the
      radio interface.

   Delay accuracy:

      +-8.138 ns, i.e. +-1/32 Tc (UMTS Chip time, Tc, 1/3.84 MHz), in
      the downlink direction and excluding the (optical) cable length in
      one direction. Round trip accuracy is then +-16.276 ns. The value
      is this low in order to meet the 3GPP timing alignment error (TAE)
      measurement requirements.

   Packet delay variation (PDV):

   *  For multiple input multiple output (MIMO) or TX diversity
      transmissions, at each carrier frequency, TAE shall not exceed
      65 ns (i.e. 1/4 Tc).

   *  For intra-band contiguous carrier aggregation, with or without
      MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2 Tc).

   *  For intra-band non-contiguous carrier aggregation, with or without
      MIMO or TX diversity, TAE shall not exceed 260 ns (i.e. one Tc).

   *  For inter-band carrier aggregation, with or without MIMO or TX
      diversity, TAE shall not exceed 260 ns.

   The above listed time synchronization requirements are hard to meet
   even with point-to-point connected networks, not to mention cases
   where the underlying transport network actually consists of multiple
   hops. It is expected that network deployments will have to deal with
   the jitter requirements by buffering at the very ends of the
   connections, since trying to meet the jitter requirements in every
   intermediate node is likely to be too costly. However, every measure
   to reduce jitter and delay on the path is valuable in making it
   easier to meet the end-to-end requirements.

   In order to meet the timing requirements, both senders and receivers
   must be in perfect sync. This calls for a very accurate clock
   distribution solution.
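   For orientation, the figures quoted above follow from simple
   arithmetic; the short sketch below recomputes them, assuming (as an
   illustration only, not a value taken from any specification) a
   one-way propagation delay in optical fiber of roughly 5 us per km.

   # Illustrative arithmetic only; the 5 us/km fiber propagation delay
   # (roughly 2/3 of the speed of light) is an assumption of this
   # sketch.

   T_C_NS = 1e9 / 3.84e6                 # UMTS chip time Tc ~= 260.4 ns

   delay_accuracy_ns = T_C_NS / 32       # ~= 8.14 ns (quoted +-8.138 ns)
   round_trip_ns = 2 * delay_accuracy_ns # ~= 16.3 ns

   tae_limits_ns = {                     # TAE limits as fractions of Tc
       "MIMO / TX diversity":          T_C_NS / 4,   # ~= 65 ns
       "intra-band contiguous CA":     T_C_NS / 2,   # ~= 130 ns
       "intra-band non-contiguous CA": T_C_NS,       # ~= 260 ns
   }

   fronthaul_budget_us = 200             # one-way budget from Section 6.2
   fiber_km = fronthaul_budget_us / 5.0  # ~= 40 km of fiber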
   In practice, all available means and hardware support for
   guaranteeing accurate time synchronization in the network are needed.
   As an example, support for 1588 transparent clocks (TC) in every
   intermediate node would be helpful.

6.4.  Time-sensitive stream requirements

   In addition to the time synchronization requirements listed in
   Section 6.3, the fronthaul networks assume practically error-free
   transport. The maximum bit error rate (BER) has been defined to be
   10^-12. When packetized, that equals roughly a packet error rate
   (PER) of 2.4*10^-9 (assuming ~300 byte packets). Retransmitting lost
   packets and/or using forward error coding (FEC) to circumvent bit
   errors is practically impossible due to the additional incurred
   delay. Using redundant streams for better delivery guarantees is also
   practically impossible due to the high bandwidth requirements of
   fronthaul networks. For instance, the current uncompressed CPRI
   bandwidth expansion ratio is roughly 20:1 compared to the IP layer
   user payload it carries in a "radio sample form".

   The other fundamental assumption is that fronthaul links are
   symmetric. Last, all fronthaul streams (carrying radio data) have
   equal priority and cannot delay or pre-empt each other. This implies
   that the network must always be sufficiently undersubscribed to
   guarantee that each time-sensitive flow meets its schedule.

   Mapping the fronthaul requirements to Section 3 of
   [I-D.finn-detnet-architecture], "Providing the DetNet Quality of
   Service", the features that appear usable are:

   (a) Zero congestion loss.

   (b) Pinned-down paths.

   The current time-sensitive networking features may still not be
   sufficient for fronthaul traffic. Therefore, having specific profiles
   that take the requirements of fronthaul into account is deemed to be
   useful [IEEE8021CM].

   The actual transport protocols and/or solutions to establish the
   required transport "circuits" (pinned-down paths) for fronthaul
   traffic are still undefined. These are likely to include, but are not
   limited to, solutions directly over Ethernet, over IP, and over
   MPLS/PseudoWire transport.

6.5.  Security considerations

   Establishing time-sensitive streams in the network entails reserving
   networking resources, sometimes for a considerably long time. It is
   important that these reservation requests be authenticated to prevent
   malicious reservation attempts from hostile nodes or even accidental
   misconfiguration. This is specifically important in the case where
   the reservation requests span administrative domains. Furthermore,
   the reservation information itself should be digitally signed to
   reduce the risk that a legitimate node pushes a stale or hostile
   configuration into a networking node.

7.  Industrial M2M

   (This section was derived from draft-varga-industrial-m2m-00)

7.1.  Introduction

   Traditional "industrial automation" terminology usually refers to
   automation of manufacturing, quality control and material processing.
   In practice, it means that machine units on a plant floor need cyclic
   control data exchange with upstream or downstream machine modules or
   with a supervisory control in a local network, which today is often
   based on proprietary networking technologies.
   For such communication between industrial entities, it is critical to
   ensure proper and deterministic end-to-end delivery time of messages
   with (very) high reliability and robustness, especially in closed
   loop automation control.

   Moreover, the recent trend is to use standard networking technologies
   in the local network and for connecting remote industrial automation
   sites, e.g., over an enterprise or metro network which also carries
   other types of traffic. Therefore, deterministic flows should be
   guaranteed regardless of the amount of other flows in those networks
   for the deployment of future industrial automation.

   This document covers a selected industrial application, identifies
   representative solutions used today, and points out new use cases an
   IETF DetNet solution may enable.

7.2.  Terminology and Definitions

   DetNet:  Deterministic Networking. [IETFDetNet]

   M2M:  Machine to Machine.

   MES:  Manufacturing-Execution-System.

   PLC:  Programmable Logic Control.

   S-PLC:  Supervisory Programmable Logic Control.

7.3.  Machine to Machine communication over IP networks

   In the case of industrial automation, the actors of Machine to
   Machine (M2M) communication are Programmable Logic Controls (PLC).
   The communication between PLCs, and between PLCs and the supervisory
   PLC (S-PLC), is achieved via critical Control-Data-Streams
   (Figure 10). This draft focuses on PLC-related communications;
   communication to the Manufacturing-Execution-System (MES) is out of
   scope. The PLC-related Control-Data-Streams are transmitted
   periodically and are established either with (i) a pre-configured
   payload or (ii) a payload configured during runtime.

           S (Sensor)
            \                                  +-----+
      PLC__  \.--.                    .--.  ---| MES |
           \_(    `.              _(     `./   +-----+
        A------( Local  )-------------(  L2   )
             (    Net   )          (    Net   )   +-------+
             /`--(___.-'            `--(___.-' ----| S-PLC |
          S_/     /     PLC   .--.      /          +-------+
         A_/           \_(     `.
      (Actuator)       (  Local  )
                       (   Net   )
                        /`--(___.-'\
                       /            \ A
                      S              A

     Figure 10: Current generic industrial M2M network architecture

   The network topologies used today by applications of industrial
   automation are (i) daisy chain, (ii) ring and (iii) hub and spoke.
   Such topologies are often used in telecommunication networks too. In
   industrial networks, comb (a subset of daisy-chain) is also used.

   Some industrial applications require Time Synchronization (Sync) to
   end nodes, which is also similar to some telecommunication networks,
   e.g., mobile Radio Access Networks. For such time-coordinated PLCs,
   an accuracy of 1 microsecond is required. In the case of
   non-time-coordinated PLCs, a requirement for Time Sync may still
   exist, e.g., for time stamping of collected measurement (sensor)
   data.

7.4.  Machine to Machine communication requirements

   The requirements listed here refer to critical Control-Data-Streams.
   Non-critical traffic of industrial automation applications can be
   served with currently available prioritizing techniques.

   In an industrial environment, non-time-critical traffic is related to
   (i) communication of state, configuration, set-up, etc., (ii)
   connection to the Manufacturing-Execution-System (MES) and (iii)
   database communication. Such traffic can use up to 80% of the
   available bandwidth.
   There is a subset of non-time-critical traffic whose bandwidth should
   be guaranteed.

   The rest of this chapter deals only with time-critical traffic.

7.4.1.  Transport parameters

   The Cycle Time defines the frequency of message(s) between industrial
   entities. The Cycle Time is application dependent and is in the range
   of 1 ms - 100 ms for critical Control-Data-Streams.

   As industrial applications assume deterministic transport, instead of
   defining latency and delay variation parameters for critical
   Control-Data-Streams it is enough to fulfill an upper bound on
   latency (maximum latency). The communication must ensure a maximum
   end-to-end delivery time of messages in the range of 100 microseconds
   to 50 milliseconds, depending on the control loop application.

   Bandwidth requirements of Control-Data-Streams are usually calculated
   directly from the bytes-per-cycle parameter of the control loop. For
   PLC-to-PLC communication one can expect 2 - 32 streams with packet
   sizes in the range of 100 - 700 bytes. For S-PLC to PLCs the number
   of streams is higher; up to 256 streams need to be supported. Usually
   no more than 20% of the available bandwidth is used for critical
   Control-Data-Streams in today's networks, which comprise Gbps links.

   Usual PLC control loops are rather tolerant of packet loss. Critical
   Control-Data-Streams accept no more than 1 packet loss per
   consecutive communication cycles. The required network availability
   is rather high; it reaches the five nines (99.999%).

   Based on the above parameters, it can be concluded that some form of
   redundancy might be required for M2M communication. The actual
   solution depends on several parameters, like cycle time, delivery
   time, etc.

7.4.2.  Flow maintenance

   Most critical Control-Data-Streams are created at startup; however,
   flexibility is also needed during runtime (e.g. adding / removing a
   machine). In an industrial environment, critical Control-Data-Streams
   are created rather infrequently: ~10 times per day / week / month.
   With the future advent of flexible production systems, flow
   maintenance parameters are expected to increase significantly.

7.5.  Summary

   This document specifies an industrial machine-to-machine use-case in
   the DetNet context.

7.6.  Security Considerations

   Industrial network scenarios require advanced security solutions.
   Many of the current industrial production networks are physically
   separated. Protection of critical flows is handled today by gateways
   / firewalls.

7.7.  Acknowledgements

   The authors would like to thank Feng Chen and Marcel Kiessling for
   their comments and suggestions.

8.  Other Use Cases

   (This section was derived from draft-zha-detnet-use-case-00)

8.1.  Introduction

   The rapid growth of today's communication systems and their reach
   into almost all aspects of daily life have led to a great dependency
   on the services they provide. The communication network, as it is
   today, carries applications such as multimedia and peer-to-peer file
   sharing distribution that require Quality of Service (QoS) guarantees
   in terms of delay and jitter to maintain a certain level of
   performance. Meanwhile, mobile wireless communications have become an
   important part of modern society, with increasing importance over the
   last years.
   Hard real-time, high-reliability communication networks are essential
   for current and next generation mobile wireless networks, as well as
   for their bearer networks, in order to meet end-to-end (E2E)
   performance requirements.

   Conventional transport networks are IP-based because of the bandwidth
   and cost requirements. However, delay and jitter guarantees become a
   challenge in case of contention, since the service here is not
   deterministic but best effort. With more and more rigid demands on
   latency control in the future network [METIS], deterministic
   networking [I-D.finn-detnet-architecture] is a promising solution to
   meet the needs of ultra-low-delay applications and use cases. There
   are already typical delay-sensitive networking requirements in
   midhaul and backhaul networks to support LTE and future 5G networks
   [net5G]. And not only the telecom industry but also other vertical
   industries have an increasing demand for delay-sensitive
   communications as automation becomes critical.

   More specifically, CoMP techniques, D-2-D, industrial automation and
   gaming/media services all depend greatly on low delay communications
   as well as on high reliability to guarantee the service performance.
   Note that deterministic networking is not equal to low latency, as it
   is more focused on the worst-case delay bound over the duration of a
   certain application or service. It can be argued that without high
   certainty and an absolute delay guarantee, low delay provisioning is
   just relative [RFC3393], which is not sufficient for some
   delay-critical services, since a single delay violation cannot be
   tolerated. Overall, the requirements from vertical industries seem to
   be well aligned with the expected low latency and highly
   deterministic performance of future networks.

   This document describes several use cases and scenarios with
   requirements on deterministic delay guarantees within the scope of
   the deterministic network [I-D.finn-detnet-problem-statement].

8.2.  Critical Delay Requirements

   Delay and jitter requirements have been taken into account as a major
   component of QoS provisioning since the birth of the Internet.
   Delay-sensitive networking has become increasingly important for
   mobile wireless communications as well as for the other application
   areas that rely on low delay communications. Due to the best effort
   nature of IP networking, mitigating contention and buffering has been
   the main way to serve delay-sensitive services: more bandwidth is
   assigned to keep links lightly loaded, or in other words to reduce
   the probability of congestion. However, this approach not only lacks
   determinism but also has limitations in serving the applications of
   future communication systems; keeping links lightly loaded cannot
   provide a deterministic delay guarantee. Take [METIS], which
   documents the fundamental challenges as well as the overall technical
   goals of the 5G mobile and wireless system, as the starting point.
   The 5G system should support:

   o  1000 times higher mobile data volume per area,

   o  10 to 100 times higher typical user data rate,

   o  10 to 100 times higher number of connected devices,

   o  10 times longer battery life for low power devices, and

   o  5 times reduced End-to-End (E2E) latency,

   at similar cost and energy consumption levels as today's system.
   Considering the part of these requirements related to latency, the
   current LTE networking system has an E2E latency of less than 20 ms
   [LTE-Latency], which leads to a target of around 5 ms E2E latency for
   5G networks. It has been argued that fulfilling such a rigid latency
   demand at similar cost will be most challenging, as the system also
   requires 100 times the bandwidth as well as 100 times the number of
   connected devices. As a result, simply adding redundant bandwidth
   provisioning is no longer an efficient solution, given bandwidth
   requirements higher than ever before. In addition to bandwidth
   provisioning, a critical flow within its reserved resources should
   not be affected by other flows regardless of the load on the network.
   Robust protection of critical flows must also not depend on redundant
   bandwidth allocation. Deterministic networking techniques in both
   layer-2 and layer-3 using IETF protocol solutions can be promising
   for serving these scenarios.

8.3.  Coordinated multipoint processing (CoMP)

   In wireless communication systems, Coordinated multipoint processing
   (CoMP) is considered an effective technique to solve the inter-cell
   interference problem and to improve cell-edge user throughput [CoMP].

8.3.1.  CoMP Architecture

                      +--------------------------+
                      |           CoMP           |
                      +--+--------------------+--+
                         |                    |
                  +----------+          +------------+
                  |  Uplink  |          |  Downlink  |
                  +-----+----+          +--------+---+
                        |                        |
          -------------------           -----------------------
          |         |       |           |           |         |
     +---------+ +----+ +-----+  +------------+ +-----+   +-----+
     | Joint   | | CS | | DPS |  |   Joint    | | CS/ |   | DPS |
     |Reception| |    | |     |  |Transmission| | CB  |   |     |
     +---------+ +----+ +-----+  +------------+ +-----+   +-----+
          |                            |
          |-----------                 |-------------
          |          |                 |            |
    +------------+ +---------+   +----------+ +------------+
    |   Joint    | |  Soft   |   | Coherent | |    Non-    |
    |Equalization| |Combining|   |    JT    | | Coherent JT|
    +------------+ +---------+   +----------+ +------------+

                  Figure 11: Framework of CoMP Technology

   As shown in Figure 11, CoMP reception and transmission is a framework
   in which multiple geographically distributed antenna nodes cooperate
   to improve the performance of the users served in the common
   cooperation area. The design principle of CoMP is to extend the
   current single-cell-to-multi-UE transmission to a
   multi-cell-to-multi-UE transmission through base station cooperation.
   In contrast to the single-cell scenario, CoMP has critical issues
   such as backhaul latency, CSI (Channel State Information) reporting
   and accuracy, and network complexity. Clearly the first two are very
   much delay sensitive and are discussed in the next section.

8.3.2.  Delay Sensitivity in CoMP

   Since the essential feature of CoMP is signaling exchanged between
   eNBs, the backhaul latency is the dominating limitation of CoMP
   performance. Generally, JT and JP may benefit from coordinating the
   scheduling (distributed or centralized) of different cells, provided
   that the signaling exchange between eNBs is limited to 4-10 ms. For
   C-RAN the backhaul latency requirement is 250 us, while for D-RAN it
   is 4-15 ms. This delay requirement is not only rigid but also
   absolute, since any uncertainty in delay will degrade the performance
   significantly.
   Note that some operators' transport networks are not built to support
   Layer-3 transfer in the aggregation layer. In such cases, the
   signaling is exchanged through the EPC, which means the delay is
   expected to be larger. CoMP has high requirements on delay and
   reliability, which are lacking in current mobile network systems and
   may impact the architecture of the mobile network.

8.4.  Industrial Automation

   Traditional "industrial automation" terminology usually refers to
   automation of manufacturing, quality control and material processing.
   The "industrial internet" and "Industry 4.0" [EA12] are becoming hot
   topics based on the Internet of Things. This highly flexible and
   dynamic engineering and manufacturing will result in many so-called
   smart approaches such as Smart Factory, Smart Products, Smart
   Mobility, and Smart Home/Buildings. No doubt that ultra high
   reliability and robustness are a must in data transmission,
   especially in closed loop automation control applications where the
   delay requirement is below 1 ms and the packet loss rate less than
   10E-9. These critical requirements on both latency and loss cannot be
   fulfilled by current 4G communication networks. Moreover, the
   collaboration of industrial automation from remote campuses with
   cellular and fixed networks has to be built on an integrated,
   cloud-based platform. In this way, deterministic flows should be
   guaranteed regardless of the amount of other flows in the network.
   The lack of such a mechanism is the main obstacle to the deployment
   of industrial automation.

8.5.  Vehicle to Vehicle

   V2V communication has gained more and more attention in the last few
   years and will grow further in the future. Besides short-range direct
   communication systems, V2V communication also requires wireless
   cellular networks to cover a wide range and to support more
   sophisticated services. V2V applications in the area of autonomous
   driving have very stringent requirements on latency and reliability;
   the timely arrival of safety-related information is critical. In
   addition, due to the limited processing capability of an individual
   vehicle, passing information to the cloud can provide more functions
   such as video processing, audio recognition or navigation systems.
   All of these requirements lead to the need for highly reliable
   connectivity to the cloud. On the other hand, provisioning low
   latency communication is naturally one of the main challenges to be
   overcome, as a result of the high mobility and the high penetration
   losses caused by the vehicle itself. Consequently, data transmission
   with latency below 5 ms and a high reliability with a PER below 10E-6
   is demanded. This can benefit from the deployment of deterministic
   networking with high reliability.

8.6.  Gaming, Media and Virtual Reality

   Online gaming and cloud gaming dominate the gaming market, since they
   allow multiple players to play together, which is more challenging
   and competitive. Connected via the current Internet, latency can be a
   big issue that degrades the end users' experience. There are
   different types of games, and FPS (First Person Shooting) gaming has
   been considered the most latency-sensitive online gaming due to its
   high requirements on timing precision and the computation of moving
   targets.
   Virtual reality is also receiving more interest than ever before as a
   novel gaming experience. Delay here can be very critical to
   interaction in the virtual world: disagreement between what is seen
   and what is felt can cause motion sickness and affect what happens in
   the game. Supporting fast, real-time and reliable communications in
   the PHY/MAC layer, the network layer and the application layer is the
   main bottleneck for such use cases. Media content delivery has been,
   and will become, an even more important use of the Internet. Not only
   high bandwidth demands but also critical delay and jitter
   requirements have to be taken into account to meet user demand. To
   keep video and audio smooth, delay and jitter have to be guaranteed
   to avoid interruptions, which are the killer of any online
   media-on-demand service. Now with 4K, and 8K video in the near
   future, the delay guarantee becomes one of the most challenging
   issues ever. 4K/8K UHD video service requires 6 Gbps - 100 Gbps for
   uncompressed video, while compressed video starts from 60 Mbps. The
   delay requirement is 100 ms, while some specific interactive
   applications may require a 10 ms delay [UHD-video].

9.  Use Case Common Elements

   Looking at the use cases collectively, the following common desires
   for the DetNet-based networks of the future emerge:

   o  Open standards-based network (replace various proprietary
      networks, reduce cost, create multi-vendor market)

   o  Centrally administered (though such administration may be
      distributed for scale and resiliency)

   o  Integrates L2 (bridged) and L3 (routed) environments (independent
      of the Link layer, e.g. can be used with Ethernet, 6TiSCH, etc.)

   o  Carries both deterministic and best-effort traffic (guaranteed
      end-to-end delivery of deterministic flows, deterministic flows
      isolated from each other and from best-effort traffic congestion,
      unused deterministic BW available to best-effort traffic)

   o  Ability to add or remove systems from the network with minimal,
      bounded service interruption (applications include replacement of
      failed devices as well as plug and play)

   o  Uses standardized data flow information models capable of
      expressing deterministic properties (models express device
      capabilities, flow properties. Protocols for pushing models from
      controller to devices, devices to controller)

   o  Scalable size (long distances (many km) and short distances
      (within a single machine), many hops (radio repeaters, microwave
      links, fiber links...) and short hops (single machine))

   o  Scalable timing parameters and accuracy (bounded latency,
      guaranteed worst case maximum, minimum. Low latency, e.g. control
      loops may be less than 1ms, but larger for wide area networks)

   o  High availability (99.9999 percent up time requested, but may be
      up to twelve 9s)

   o  Reliability, redundancy (lives at stake)

   o  Security (from failures, attackers, misbehaving devices -
      sensitive to both packet content and arrival time)

10.  Acknowledgments

   This document has benefited from reviews, suggestions, comments and
   proposed text provided by the following members, listed in
   alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oliver
   Huang.

11.
Informative References 3329 [ACE] IETF, "Authentication and Authorization for Constrained 3330 Environments", . 3333 [bacnetip] 3334 ASHRAE, "Annex J to ANSI/ASHRAE 135-1995 - BACnet/IP", 3335 January 1999. 3337 [CCAMP] IETF, "Common Control and Measurement Plane", 3338 . 3340 [CoMP] NGMN Alliance, "RAN EVOLUTION PROJECT COMP EVALUATION AND 3341 ENHANCEMENT", NGMN Alliance NGMN_RANEV_D3_CoMP_Evaluation_ 3342 and_Enhancement_v2.0, March 2015, 3343 . 3346 [CONTENT_PROTECTION] 3347 Olsen, D., "1722a Content Protection", 2012, 3348 . 3351 [CPRI] CPRI Cooperation, "Common Public Radio Interface (CPRI); 3352 Interface Specification", CPRI Specification V6.1, July 3353 2014, . 3356 [DCI] Digital Cinema Initiatives, LLC, "DCI Specification, 3357 Version 1.2", 2012, . 3359 [DICE] IETF, "DTLS In Constrained Environments", 3360 . 3362 [EA12] Evans, P. and M. Annunziata, "Industrial Internet: Pushing 3363 the Boundaries of Minds and Machines", November 2012. 3365 [ESPN_DC2] 3366 Daley, D., "ESPN's DC2 Scales AVB Large", 2014, 3367 . 3370 [flnet] Japan Electrical Manufacturers' Association, "JEMA 1479 - 3371 English Edition", September 2012. 3373 [Fronthaul] 3374 Chen, D. and T. Mustala, "Ethernet Fronthaul 3375 Considerations", IEEE 1904.3, February 2015, 3376 . 3379 [HART] www.hartcomm.org, "Highway Addressable remote Transducer, 3380 a group of specifications for industrial process and 3381 control devices administered by the HART Foundation". 3383 [I-D.finn-detnet-architecture] 3384 Finn, N., Thubert, P., and M. Teener, "Deterministic 3385 Networking Architecture", draft-finn-detnet- 3386 architecture-02 (work in progress), November 2015. 3388 [I-D.finn-detnet-problem-statement] 3389 Finn, N. and P. Thubert, "Deterministic Networking Problem 3390 Statement", draft-finn-detnet-problem-statement-04 (work 3391 in progress), October 2015. 3393 [I-D.ietf-6tisch-6top-interface] 3394 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer 3395 (6top) Interface", draft-ietf-6tisch-6top-interface-04 3396 (work in progress), July 2015. 3398 [I-D.ietf-6tisch-architecture] 3399 Thubert, P., "An Architecture for IPv6 over the TSCH mode 3400 of IEEE 802.15.4", draft-ietf-6tisch-architecture-09 (work 3401 in progress), November 2015. 3403 [I-D.ietf-6tisch-coap] 3404 Sudhaakar, R. and P. Zand, "6TiSCH Resource Management and 3405 Interaction using CoAP", draft-ietf-6tisch-coap-03 (work 3406 in progress), March 2015. 3408 [I-D.ietf-6tisch-terminology] 3409 Palattella, M., Thubert, P., Watteyne, T., and Q. Wang, 3410 "Terminology in IPv6 over the TSCH mode of IEEE 3411 802.15.4e", draft-ietf-6tisch-terminology-06 (work in 3412 progress), November 2015. 3414 [I-D.ietf-ipv6-multilink-subnets] 3415 Thaler, D. and C. Huitema, "Multi-link Subnet Support in 3416 IPv6", draft-ietf-ipv6-multilink-subnets-00 (work in 3417 progress), July 2002. 3419 [I-D.ietf-roll-rpl-industrial-applicability] 3420 Phinney, T., Thubert, P., and R. Assimiti, "RPL 3421 applicability in industrial networks", draft-ietf-roll- 3422 rpl-industrial-applicability-02 (work in progress), 3423 October 2013. 3425 [I-D.ietf-tictoc-1588overmpls] 3426 Davari, S., Oren, A., Bhatia, M., Roberts, P., and L. 3427 Montini, "Transporting Timing messages over MPLS 3428 Networks", draft-ietf-tictoc-1588overmpls-07 (work in 3429 progress), October 2015. 3431 [I-D.kh-spring-ip-ran-use-case] 3432 Khasnabish, B., hu, f., and L. Contreras, "Segment Routing 3433 in IP RAN use case", draft-kh-spring-ip-ran-use-case-02 3434 (work in progress), November 2014. 
3436 [I-D.mirsky-mpls-residence-time] 3437 Mirsky, G., Ruffini, S., Gray, E., Drake, J., Bryant, S., 3438 and S. Vainshtein, "Residence Time Measurement in MPLS 3439 network", draft-mirsky-mpls-residence-time-07 (work in 3440 progress), July 2015. 3442 [I-D.svshah-tsvwg-deterministic-forwarding] 3443 Shah, S. and P. Thubert, "Deterministic Forwarding PHB", 3444 draft-svshah-tsvwg-deterministic-forwarding-04 (work in 3445 progress), August 2015. 3447 [I-D.thubert-6lowpan-backbone-router] 3448 Thubert, P., "6LoWPAN Backbone Router", draft-thubert- 3449 6lowpan-backbone-router-03 (work in progress), February 3450 2013. 3452 [I-D.wang-6tisch-6top-sublayer] 3453 Wang, Q. and X. Vilajosana, "6TiSCH Operation Sublayer 3454 (6top)", draft-wang-6tisch-6top-sublayer-04 (work in 3455 progress), November 2015. 3457 [IEC61850-90-12] 3458 TC57 WG10, IEC., "IEC 61850-90-12 TR: Communication 3459 networks and systems for power utility automation - Part 3460 90-12: Wide area network engineering guidelines", 2015. 3462 [IEC62439-3:2012] 3463 TC65, IEC., "IEC 62439-3: Industrial communication 3464 networks - High availability automation networks - Part 3: 3465 Parallel Redundancy Protocol (PRP) and High-availability 3466 Seamless Redundancy (HSR)", 2012. 3468 [IEEE1588] 3469 IEEE, "IEEE Standard for a Precision Clock Synchronization 3470 Protocol for Networked Measurement and Control Systems", 3471 IEEE Std 1588-2008, 2008, 3472 . 3475 [IEEE1722] 3476 IEEE, "1722-2011 - IEEE Standard for Layer 2 Transport 3477 Protocol for Time Sensitive Applications in a Bridged 3478 Local Area Network", IEEE Std 1722-2011, 2011, 3479 . 3482 [IEEE19043] 3483 IEEE Standards Association, "IEEE 1904.3 TF", IEEE 1904.3, 3484 2015, . 3486 [IEEE802.1TSNTG] 3487 IEEE Standards Association, "IEEE 802.1 Time-Sensitive 3488 Networks Task Group", March 2013, 3489 . 3491 [IEEE802154] 3492 IEEE standard for Information Technology, "IEEE std. 3493 802.15.4, Part. 15.4: Wireless Medium Access Control (MAC) 3494 and Physical Layer (PHY) Specifications for Low-Rate 3495 Wireless Personal Area Networks". 3497 [IEEE802154e] 3498 IEEE standard for Information Technology, "IEEE standard 3499 for Information Technology, IEEE std. 802.15.4, Part. 3500 15.4: Wireless Medium Access Control (MAC) and Physical 3501 Layer (PHY) Specifications for Low-Rate Wireless Personal 3502 Area Networks, June 2011 as amended by IEEE std. 3503 802.15.4e, Part. 15.4: Low-Rate Wireless Personal Area 3504 Networks (LR-WPANs) Amendment 1: MAC sublayer", April 3505 2012. 3507 [IEEE8021AS] 3508 IEEE, "Timing and Synchronizations (IEEE 802.1AS-2011)", 3509 IEEE 802.1AS-2001, 2011, 3510 . 3513 [IEEE8021CM] 3514 Farkas, J., "Time-Sensitive Networking for Fronthaul", 3515 Unapproved PAR, PAR for a New IEEE Standard; 3516 IEEE P802.1CM, April 2015, 3517 . 3520 [IEEE8021TSN] 3521 IEEE 802.1, "The charter of the TG is to provide the 3522 specifications that will allow time-synchronized low 3523 latency streaming services through 802 networks.", 2016, 3524 . 3526 [IETFDetNet] 3527 IETF, "Charter for IETF DetNet Working Group", 2015, 3528 . 3530 [ISA100] ISA/ANSI, "ISA100, Wireless Systems for Automation", 3531 . 3533 [ISA100.11a] 3534 ISA/ANSI, "Wireless Systems for Industrial Automation: 3535 Process Control and Related Applications - ISA100.11a-2011 3536 - IEC 62734", 2011, . 3539 [ISO7240-16] 3540 ISO, "ISO 7240-16:2007 Fire detection and alarm systems -- 3541 Part 16: Sound system control and indicating equipment", 3542 2007, . 
3545 [knx] KNX Association, "ISO/IEC 14543-3 - KNX", November 2006.

3547 [lontalk] ECHELON, "LonTalk(R) Protocol Specification Version 3.0",
3548 1994.

3550 [LTE-Latency]
3551 Johnston, S., "LTE Latency: How does it compare to other
3552 technologies", March 2014,
3553 .

3556 [MEF] MEF, "Mobile Backhaul Phase 2 Amendment 1 -- Small Cells",
3557 MEF 22.1.1, July 2014,
3558 .

3561 [METIS] METIS, "Scenarios, requirements and KPIs for 5G mobile and
3562 wireless system", ICT-317669-METIS/D1.1 ICT-
3563 317669-METIS/D1.1, April 2013, .

3566 [modbus] Modbus Organization, "MODBUS APPLICATION PROTOCOL
3567 SPECIFICATION V1.1b", December 2006.

3569 [net5G] Ericsson, "5G Radio Access, Challenges for 2020 and
3570 Beyond", Ericsson white paper wp-5g, June 2013,
3571 .

3573 [NGMN] NGMN Alliance, "5G White Paper", NGMN 5G White Paper v1.0,
3574 February 2015, .

3577 [PCE] IETF, "Path Computation Element",
3578 .

3580 [profibus]
3581 IEC, "IEC 61158 Type 3 - Profibus DP", January 2001.

3583 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
3584 Requirement Levels", BCP 14, RFC 2119,
3585 DOI 10.17487/RFC2119, March 1997,
3586 .

3588 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6
3589 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460,
3590 December 1998, .

3592 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
3593 "Definition of the Differentiated Services Field (DS
3594 Field) in the IPv4 and IPv6 Headers", RFC 2474,
3595 DOI 10.17487/RFC2474, December 1998,
3596 .

3598 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
3599 Label Switching Architecture", RFC 3031,
3600 DOI 10.17487/RFC3031, January 2001,
3601 .

3603 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
3604 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
3605 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
3606 .

3608 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
3609 Metric for IP Performance Metrics (IPPM)", RFC 3393,
3610 DOI 10.17487/RFC3393, November 2002,
3611 .

3613 [RFC3444] Pras, A. and J. Schoenwaelder, "On the Difference between
3614 Information Models and Data Models", RFC 3444,
3615 DOI 10.17487/RFC3444, January 2003,
3616 .

3618 [RFC3972] Aura, T., "Cryptographically Generated Addresses (CGA)",
3619 RFC 3972, DOI 10.17487/RFC3972, March 2005,
3620 .

3622 [RFC3985] Bryant, S., Ed. and P. Pate, Ed., "Pseudo Wire Emulation
3623 Edge-to-Edge (PWE3) Architecture", RFC 3985,
3624 DOI 10.17487/RFC3985, March 2005,
3625 .

3627 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing
3628 Architecture", RFC 4291, DOI 10.17487/RFC4291, February
3629 2006, .

3631 [RFC4553] Vainshtein, A., Ed. and YJ. Stein, Ed., "Structure-
3632 Agnostic Time Division Multiplexing (TDM) over Packet
3633 (SAToP)", RFC 4553, DOI 10.17487/RFC4553, June 2006,
3634 .

3636 [RFC4903] Thaler, D., "Multi-Link Subnet Issues", RFC 4903,
3637 DOI 10.17487/RFC4903, June 2007,
3638 .

3640 [RFC4919] Kushalnagar, N., Montenegro, G., and C. Schumacher, "IPv6
3641 over Low-Power Wireless Personal Area Networks (6LoWPANs):
3642 Overview, Assumptions, Problem Statement, and Goals",
3643 RFC 4919, DOI 10.17487/RFC4919, August 2007,
3644 .

3646 [RFC5086] Vainshtein, A., Ed., Sasson, I., Metz, E., Frost, T., and
3647 P. Pate, "Structure-Aware Time Division Multiplexed (TDM)
3648 Circuit Emulation Service over Packet Switched Network
3649 (CESoPSN)", RFC 5086, DOI 10.17487/RFC5086, December 2007,
3650 .
3652 [RFC5087] Stein, Y(J)., Shashoua, R., Insler, R., and M. Anavi,
3653 "Time Division Multiplexing over IP (TDMoIP)", RFC 5087,
3654 DOI 10.17487/RFC5087, December 2007,
3655 .

3657 [RFC6282] Hui, J., Ed. and P. Thubert, "Compression Format for IPv6
3658 Datagrams over IEEE 802.15.4-Based Networks", RFC 6282,
3659 DOI 10.17487/RFC6282, September 2011,
3660 .

3662 [RFC6550] Winter, T., Ed., Thubert, P., Ed., Brandt, A., Hui, J.,
3663 Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur,
3664 JP., and R. Alexander, "RPL: IPv6 Routing Protocol for
3665 Low-Power and Lossy Networks", RFC 6550,
3666 DOI 10.17487/RFC6550, March 2012,
3667 .

3669 [RFC6551] Vasseur, JP., Ed., Kim, M., Ed., Pister, K., Dejean, N.,
3670 and D. Barthel, "Routing Metrics Used for Path Calculation
3671 in Low-Power and Lossy Networks", RFC 6551,
3672 DOI 10.17487/RFC6551, March 2012,
3673 .

3675 [RFC6775] Shelby, Z., Ed., Chakrabarti, S., Nordmark, E., and C.
3676 Bormann, "Neighbor Discovery Optimization for IPv6 over
3677 Low-Power Wireless Personal Area Networks (6LoWPANs)",
3678 RFC 6775, DOI 10.17487/RFC6775, November 2012,
3679 .

3681 [RFC7554] Watteyne, T., Ed., Palattella, M., and L. Grieco, "Using
3682 IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) in the
3683 Internet of Things (IoT): Problem Statement", RFC 7554,
3684 DOI 10.17487/RFC7554, May 2015,
3685 .

3687 [SRP_LATENCY]
3688 Gunther, C., "Specifying SRP Latency", 2014,
3689 .

3692 [STUDIO_IP]
3693 Mace, G., "IP Networked Studio Infrastructure for
3694 Synchronized & Real-Time Multimedia Transmissions", 2007,
3695 .

3698 [SyncE] ITU-T, "G.8261 : Timing and synchronization aspects in
3699 packet networks", Recommendation G.8261, August 2013,
3700 .

3702 [TEAS] IETF, "Traffic Engineering Architecture and Signaling",
3703 .

3705 [TS23401] 3GPP, "General Packet Radio Service (GPRS) enhancements
3706 for Evolved Universal Terrestrial Radio Access Network
3707 (E-UTRAN) access", 3GPP TS 23.401 10.10.0, March 2013.

3709 [TS25104] 3GPP, "Base Station (BS) radio transmission and reception
3710 (FDD)", 3GPP TS 25.104 3.14.0, March 2007.

3712 [TS36104] 3GPP, "Evolved Universal Terrestrial Radio Access
3713 (E-UTRA); Base Station (BS) radio transmission and
3714 reception", 3GPP TS 36.104 10.11.0, July 2013.

3716 [TS36133] 3GPP, "Evolved Universal Terrestrial Radio Access
3717 (E-UTRA); Requirements for support of radio resource
3718 management", 3GPP TS 36.133 12.7.0, April 2015.

3720 [TS36211] 3GPP, "Evolved Universal Terrestrial Radio Access
3721 (E-UTRA); Physical channels and modulation", 3GPP
3722 TS 36.211 10.7.0, March 2013.

3724 [TS36300] 3GPP, "Evolved Universal Terrestrial Radio Access (E-UTRA)
3725 and Evolved Universal Terrestrial Radio Access Network
3726 (E-UTRAN); Overall description; Stage 2", 3GPP TS 36.300
3727 10.11.0, September 2013.

3729 [TSNTG] IEEE Standards Association, "IEEE 802.1 Time-Sensitive
3730 Networks Task Group", 2013,
3731 .

3733 [UHD-video]
3734 Holub, P., "Ultra-High Definition Videos and Their
3735 Applications over the Network", The 7th International
3736 Symposium on VICTORIES Project PetrHolub_presentation,
3737 October 2014, .

3740 [WirelessHART]
3741 www.hartcomm.org, "Industrial Communication Networks -
3742 Wireless Communication Network and Communication Profiles
3743 - WirelessHART - IEC 62591", 2010.

3745 Authors' Addresses

3746 Ethan Grossman (editor)
3747 Dolby Laboratories, Inc.
3748 1275 Market Street
3749 San Francisco, CA 94103
3750 USA

3752 Phone: +1 415 645 4726
3753 Email: ethan.grossman@dolby.com
3754 URI: http://www.dolby.com

3756 Craig Gunther
3757 Harman International
3758 10653 South River Front Parkway
3759 South Jordan, UT 84095
3760 USA

3762 Phone: +1 801 568-7675
3763 Email: craig.gunther@harman.com
3764 URI: http://www.harman.com

3766 Pascal Thubert
3767 Cisco Systems, Inc
3768 Building D
3769 45 Allee des Ormes - BP1200
3770 MOUGINS - Sophia Antipolis 06254
3771 FRANCE

3773 Phone: +33 497 23 26 34
3774 Email: pthubert@cisco.com

3776 Patrick Wetterwald
3777 Cisco Systems
3778 45 Allees des Ormes
3779 Mougins 06250
3780 FRANCE

3782 Phone: +33 4 97 23 26 36
3783 Email: pwetterw@cisco.com

3784 Jean Raymond
3785 Hydro-Quebec
3786 1500 University
3787 Montreal H3A3S7
3788 Canada

3790 Phone: +1 514 840 3000
3791 Email: raymond.jean@hydro.qc.ca

3793 Jouni Korhonen
3794 Broadcom Corporation
3795 3151 Zanker Road
3796 San Jose, CA 95134
3797 USA

3799 Email: jouni.nospam@gmail.com

3801 Yu Kaneko
3802 Toshiba
3803 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi
3804 Kanagawa, Japan

3806 Email: yu1.kaneko@toshiba.co.jp

3808 Subir Das
3809 Applied Communication Sciences
3810 150 Mount Airy Road, Basking Ridge
3811 New Jersey, 07920, USA

3813 Email: sdas@appcomsci.com

3815 Yiyong Zha
3816 Huawei Technologies

3818 Email: zhayiyong@huawei.com

3820 Balazs Varga
3821 Ericsson
3822 Konyves Kalman krt. 11/B
3823 Budapest 1097
3824 Hungary

3826 Email: balazs.a.varga@ericsson.com

3827 Janos Farkas
3828 Ericsson
3829 Konyves Kalman krt. 11/B
3830 Budapest 1097
3831 Hungary

3833 Email: janos.farkas@ericsson.com

3835 Franz-Josef Goetz
3836 Siemens
3837 Gleiwitzerstr. 555
3838 Nurnberg 90475
3839 Germany

3841 Email: franz-josef.goetz@siemens.com

3843 Juergen Schmitt
3844 Siemens
3845 Gleiwitzerstr. 555
3846 Nurnberg 90475
3847 Germany

3849 Email: juergen.jues.schmitt@siemens.com