DetNet                                                           N. Finn
Internet-Draft                               Huawei Technologies Co. Ltd
Intended status: Informational                            J-Y. Le Boudec
Expires: December 27, 2019                               E. Mohammadpour
                                                                    EPFL
                                                                J. Zhang
                                             Huawei Technologies Co. Ltd
                                                                B. Varga
                                                               J. Farkas
                                                                Ericsson
                                                           June 25, 2019

                         DetNet Bounded Latency
                  draft-finn-detnet-bounded-latency-04

Abstract

   This document presents a timing model for Deterministic Networking
   (DetNet), so that existing and future standards can achieve the
   DetNet quality of service features of bounded latency and zero
   congestion loss.  It defines requirements for resource reservation
   protocols or servers.  It calls out queuing mechanisms, defined in
   other documents, that can provide the DetNet quality of service.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.
   It is inappropriate to use Internet-Drafts as reference material or
   to cite them other than as "work in progress."

   This Internet-Draft will expire on December 27, 2019.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology and Definitions
   3.  DetNet bounded latency model
     3.1.  Flow creation
       3.1.1.  Static flow latency calculation
       3.1.2.  Dynamic flow latency calculation
     3.2.  Relay node model
   4.  Computing End-to-end Latency Bounds
     4.1.  Non-queuing delay bound
     4.2.  Queuing delay bound
       4.2.1.  Per-flow queuing mechanisms
       4.2.2.  Per-class queuing mechanisms
     4.3.  Ingress considerations
     4.4.  Interspersed non-DetNet transit nodes
   5.  Achieving zero congestion loss
     5.1.  A General Formula
   6.  Queuing techniques
     6.1.  Queuing data model
     6.2.  Preemption
     6.3.  Time-scheduled queuing
     6.4.  Credit-Based Shaper with Asynchronous Traffic Shaping
       6.4.1.  Flow Admission
     6.5.  IntServ
     6.6.  Cyclic Queuing and Forwarding
       6.6.1.  CQF timing sequence
       6.6.2.  CQF latency calculation
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Authors' Addresses

1.  Introduction

   The ability for IETF Deterministic Networking (DetNet) or IEEE 802.1
   Time-Sensitive Networking (TSN, [IEEE8021TSN]) to provide the DetNet
   services of bounded latency and zero congestion loss depends upon A)
   configuring and allocating network resources for the exclusive use
   of DetNet/TSN flows; B) identifying, in the data plane, the
   resources to be utilized by any given packet; and C) the detailed
   behavior of those resources, especially transmission queue
   selection, so that latency bounds can be reliably assured.
   Thus, DetNet is an example of the IntServ Guaranteed Quality of
   Service [RFC2212].

   As explained in [I-D.ietf-detnet-architecture], DetNet flows are
   characterized by 1) a maximum bandwidth, guaranteed either by the
   transmitter or by strict input metering; and 2) a requirement for a
   guaranteed worst-case end-to-end latency.  That latency guarantee,
   in turn, provides the opportunity for the network to supply enough
   buffer space to guarantee zero congestion loss.

   To be of use to the applications identified in [RFC8578], it must be
   possible to calculate, before the transmission of a DetNet flow
   commences, both the worst-case end-to-end network latency and the
   amount of buffer space required at each hop to ensure against
   congestion loss.

   This document references specific queuing mechanisms, defined in
   other documents, that can be used to control packet transmission at
   each output port and achieve the DetNet qualities of service.  This
   document presents a timing model for sources, destinations, and the
   DetNet transit nodes that relay packets that is applicable to all of
   those referenced queuing mechanisms.

   Using the model presented in this document, it should be possible
   for an implementor, user, or standards development organization to
   select a particular set of queuing mechanisms for each device in a
   DetNet network, and to select a resource reservation algorithm for
   that network, so that those elements can work together to provide
   the DetNet service.

   This document does not specify any resource reservation protocol or
   server.  It does not describe all of the requirements for that
   protocol or server.  It does describe requirements for such resource
   reservation methods, and for queuing mechanisms that, if met, will
   enable them to work together.

2.  Terminology and Definitions

   This document uses the terms defined in
   [I-D.ietf-detnet-architecture].

3.  DetNet bounded latency model

3.1.  Flow creation

   This document assumes that the following paradigm is used for
   provisioning DetNet flows:

   1.  Perform any configuration required by the DetNet transit nodes
       in the network for the classes of service to be offered,
       including one or more classes of DetNet service.  This
       configuration is done beforehand, and is not tied to any
       particular flow.

   2.  Characterize the new DetNet flow, particularly in terms of
       required bandwidth.

   3.  Establish the path that the DetNet flow will take through the
       network from the source to the destination(s).  This can be a
       point-to-point or a point-to-multipoint path.

   4.  Select one of the DetNet classes of service for the DetNet
       flow.

   5.  Compute the worst-case end-to-end latency for the DetNet flow,
       using one of the methods below (Section 3.1.1, Section 3.1.2).
       In the process, determine whether sufficient resources are
       available for that flow to guarantee the required latency and
       to provide zero congestion loss.

   6.  Assuming that the resources are available, commit those
       resources to the flow.  This may or may not require adjusting
       the parameters that control the filtering and/or queuing
       mechanisms at each hop along the flow's path.

   This paradigm can be implemented using peer-to-peer protocols or
   using a central server.  In some situations, a lack of resources can
   require backtracking and recursing through this list.
   Issues such as un-provisioning a DetNet flow in favor of another
   when resources are scarce are not considered here.  Also not
   addressed is the question of how to choose the path to be taken by a
   DetNet flow.

3.1.1.  Static flow latency calculation

   The static problem:
      Given a network and a set of DetNet flows, compute an end-to-end
      latency bound (if computable) for each flow, and compute the
      resources, particularly buffer space, required in each DetNet
      transit node to achieve zero congestion loss.

   In this calculation, all of the DetNet flows are known before the
   calculation commences.  This problem is of interest to relatively
   static networks, or static parts of larger networks.  It gives the
   best possible worst-case behavior.  The calculations can be extended
   to provide global optimizations, such as altering the path of one
   DetNet flow in order to make resources available to another DetNet
   flow with tighter constraints.

   The static flow calculation is not limited to static networks; the
   entire calculation for all flows can be repeated each time a new
   DetNet flow is created or deleted.  If some already-established flow
   would be pushed beyond its latency requirements by the new flow,
   then the new flow can be refused, or some other suitable action
   taken.  (A non-normative sketch of this check follows
   Section 3.1.2.)

   This calculation may be more difficult to perform than the dynamic
   calculation (Section 3.1.2), because the flows passing through one
   port on a DetNet transit node affect each other's latency.  The
   effects can even be circular, from flow A to B to C and back to A.
   On the other hand, the static calculation can often accommodate
   queuing methods, such as transmission selection by strict priority,
   that are unsuitable for the dynamic calculation.

3.1.2.  Dynamic flow latency calculation

   The dynamic problem:
      Given a network whose maximum capacity for DetNet flows is
      bounded by a set of static configuration parameters applied to
      the DetNet transit nodes, and given just one DetNet flow, compute
      the worst-case end-to-end latency that can be experienced by that
      flow, no matter what other DetNet flows (within the network's
      configured parameters) might be created or deleted in the future.
      Also, compute the resources, particularly buffer space, required
      in each DetNet transit node to achieve zero congestion loss.

   This calculation is dynamic, in the sense that flows can be added or
   deleted at any time, with a minimum of computation effort, and
   without affecting the guarantees already given to other flows.

   The choice of queuing methods is critical to the applicability of
   the dynamic calculation.  Some queuing methods (e.g., CQF,
   Section 6.6) make it easy to configure bounds on the network's
   capacity, and to make independent calculations for each flow.  Other
   queuing methods (e.g., transmission selection by strict priority)
   make this calculation impossible, because the worst case for one
   flow cannot be computed without complete knowledge of all other
   flows.  Still other queuing methods (e.g., the credit-based shaper
   defined in [IEEE8021Q] section 8.6.8.2) can be used for dynamic flow
   creation, but yield poorer latency and buffer space guarantees than
   when that same queuing method is used for static flow creation
   (Section 3.1.1).
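   As a non-normative illustration of the static approach, the
   following Python sketch recomputes the bounds of all flows when a
   candidate flow is added, and refuses the candidate if any flow would
   exceed its latency requirement.  The function compute_bounds stands
   in for the full network calculation and is an assumption of this
   sketch, not something defined by this document.

      def static_admission(flows, new_flow, compute_bounds):
          # compute_bounds maps a list of flows to a dict
          # {flow name: end-to-end latency bound}.
          candidate = flows + [new_flow]
          bounds = compute_bounds(candidate)
          if any(bounds[f["name"]] > f["max_latency"] for f in candidate):
              return False  # refuse, or take some other suitable action
          flows.append(new_flow)
          return True

      # Toy stand-in: the bound grows with the number of competing flows.
      flows = [{"name": "A", "max_latency": 1e-3}]
      toy = lambda fs: {f["name"]: 2e-4 * len(fs) for f in fs}
      print(static_admission(flows, {"name": "B", "max_latency": 3e-4}, toy))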
3.2.  Relay node model

   A model for the operation of a DetNet transit node is required, in
   order to define the latency and buffer calculations.  In Figure 1 we
   see a breakdown of the per-hop latency experienced by a packet
   passing through a DetNet transit node, in terms that are suitable
   for computing both hop-by-hop latency and per-hop buffer
   requirements.

       DetNet transit node A            DetNet transit node B
   +-------------------------+      +------------------------+
   |          Queuing        |      |          Queuing       |
   |   Regulator subsystem   |      |  Regulator subsystem   |
   |   +-+-+-+-+  +-+-+-+-+  |      |  +-+-+-+-+  +-+-+-+-+  |
-->+   | | | | |  | | | | |  +------>+ | | | | |  | | | | |  +--->
   |   +-+-+-+-+  +-+-+-+-+  |      |  +-+-+-+-+  +-+-+-+-+  |
   |                         |      |                        |
   +-------------------------+      +------------------------+
   |<->|<------>|<------->|<->|<---->|<->|<------>|<------>|<->|<--
   2,3      4        5      6     1   2,3     4        5     6   1  2,3

   1: Output delay       4: Processing delay
   2: Link delay         5: Regulation delay
   3: Preemption delay   6: Queuing delay.

               Figure 1: Timing model for DetNet or TSN

   In Figure 1, we see two DetNet transit nodes (typically, bridges or
   routers), with a wired link between them.  In this model, the only
   queues we deal with explicitly are attached to the output port;
   other queues are modeled as variations in the other delay times.
   (E.g., an input queue could be modeled as either a variation in the
   link delay (2) or the processing delay (4).)  There are six delays
   that a packet can experience from hop to hop.

   1.  Output delay
       The time taken from the selection of a packet for output from a
       queue to the transmission of the first bit of the packet on the
       physical link.  If the queue is directly attached to the
       physical port, output delay can be a constant.  But, in many
       implementations, the queuing mechanism in a forwarding ASIC is
       separated from a multi-port MAC/PHY, in a second ASIC, by a
       multiplexed connection.  This causes variations in the output
       delay that are hard for the forwarding node to predict or
       control.

   2.  Link delay
       The time taken from the transmission of the first bit of the
       packet to the reception of the last bit, assuming that the
       transmission is not suspended by a preemption event.  This delay
       has two components: the first-bit-out to first-bit-in delay, and
       the first-bit-in to last-bit-in delay, which varies with packet
       size.  The former is typically measured by the Precision Time
       Protocol and is constant (see [I-D.ietf-detnet-architecture]).
       However, a virtual "link" could exhibit a variable link delay.

   3.  Preemption delay
       If the packet is interrupted in order to transmit another packet
       or packets (e.g., [IEEE8023] clause 99 frame preemption), an
       arbitrary delay can result.

   4.  Processing delay
       This delay covers the time from the reception of the last bit of
       the packet to the time the packet is enqueued in the regulator
       (or in the queuing subsystem, if there is no regulation).  This
       delay can be variable, and depends on the details of the
       operation of the forwarding node.

   5.  Regulator delay
       This is the time spent from the insertion of the last bit of a
       packet into a regulation queue until the time the packet is
       declared eligible according to its regulation constraints.  We
       assume that this time can be calculated based on the details of
       the regulation policy.  If there is no regulation, this time is
       zero.
   6.  Queuing subsystem delay
       This is the time spent by a packet from being declared eligible
       until being selected for output on the next link.  We assume
       that this time is calculable based on the details of the queuing
       mechanism.  If there is no regulation, this time is measured
       from the insertion of the packet into a queue until it is
       selected for output on the next link.

   Not shown in Figure 1 are the other output queues that we presume
   are also attached to the same output port as the queue shown, and
   against which the shown queue competes for transmission
   opportunities.

   The initial and final measurement point in this analysis (that is,
   the definition of a "hop") is the point at which a packet is
   selected for output.  In general, any queue selection method that is
   suitable for use in a DetNet network includes a detailed
   specification as to exactly when packets are selected for
   transmission.  Any variations in any of the delay times 1-4 result
   in a need for additional buffers in the queue.  If all delays 1-4
   are constant, then any variation in the time at which packets are
   inserted into a queue depends entirely on the timing of packet
   selection in the previous node.  If the delays 1-4 are not constant,
   then additional buffers are required in the queue to absorb these
   variations.  Thus:

   o  Variations in output delay (1) require buffers to absorb that
      variation in the next hop, so the output delay variations of the
      previous hop (on each input port) must be known in order to
      calculate the buffer space required on this hop.

   o  Variations in processing delay (4) require additional output
      buffers in the queues of that same DetNet transit node.
      Depending on the details of the queueing subsystem delay (6)
      calculations, these variations need not be visible outside the
      DetNet transit node.

4.  Computing End-to-end Latency Bounds

4.1.  Non-queuing delay bound

   End-to-end latency bounds can be computed using the delay model in
   Section 3.2.  Here it is important to be aware that for several
   queuing mechanisms, the worst-case end-to-end delay is less than the
   sum of the per-hop worst-case delays.  An end-to-end latency bound
   for one DetNet flow can be computed as

      end_to_end_latency_bound = non_queuing_latency + queuing_latency

   The two terms in the above formula are computed as follows.  First,
   at the h-th hop along the path of this DetNet flow, obtain an upper
   bound per-hop_non_queuing_latency[h] on the sum of the delays
   1,2,3,4 of Figure 1.  These upper bounds are expected to depend on
   the specific technology of the DetNet transit node at the h-th hop,
   but not on the T-SPEC of this DetNet flow.  Then set
   non_queuing_latency = the sum of per-hop_non_queuing_latency[h] over
   all hops h.

4.2.  Queuing delay bound

   Second, compute queuing_latency as an upper bound to the sum of the
   queuing delays along the path.  The value of queuing_latency depends
   on the T-SPEC of this flow and possibly of other flows in the
   network, as well as the specifics of the queuing mechanisms deployed
   along the path of this flow.

   For several queuing mechanisms, queuing_latency is less than the sum
   of upper bounds on the queuing delays (5,6) at every hop.  This
   occurs with (1) per-flow queuing, and (2) per-class queuing with
   regulators, as explained in Section 4.2.1, Section 4.2.2, and
   Section 6.
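   However queuing_latency is obtained, the two terms combine as in the
   formula of Section 4.1.  The following Python sketch is a
   non-normative transcription of that formula; the numeric values are
   illustrative, not taken from any real equipment.

      def end_to_end_latency_bound(per_hop_non_queuing, queuing_latency):
          # per_hop_non_queuing: per-hop upper bounds on the sum of
          # delays 1-4 of Figure 1 (technology-dependent, Section 4.1).
          # queuing_latency: an upper bound on the sum of delays 5 and
          # 6 along the path, obtained by a method of Section 4.2.
          return sum(per_hop_non_queuing) + queuing_latency

      # Three hops; all values in seconds.
      print(end_to_end_latency_bound([20e-6, 35e-6, 20e-6], 250e-6))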
   For other queuing mechanisms, the only available value of
   queuing_latency is the sum of the per-hop queuing delay bounds.  In
   such cases, the computation of per-hop queuing delay bounds must
   account for the fact that the T-SPEC of a DetNet flow is no longer
   satisfied at the ingress of a hop, since burstiness increases as a
   flow traverses one DetNet transit node.

4.2.1.  Per-flow queuing mechanisms

   With such mechanisms, each flow uses a separate queue inside every
   node.  The service for each queue is abstracted with a guaranteed
   rate and a delay.  For every flow, the per-node delay bound as well
   as the end-to-end delay bound can be computed from the traffic
   specification of this flow at its source and from the values of
   rates and latencies at all nodes along its path.  Details of the
   calculation for IntServ are described in Section 6.5.

4.2.2.  Per-class queuing mechanisms

   With such mechanisms, flows that have the same class share the same
   queue.  A practical example is the credit-based shaper defined in
   section 8.6.8.2 of [IEEE8021Q].  One key issue in this context is
   how to deal with the burstiness cascade: individual flows that share
   a resource dedicated to a class may see their burstiness increase,
   which may in turn cause increased burstiness for other flows
   downstream of this resource.  Computing latency upper bounds for
   such cases is difficult, and in some conditions impossible
   [charny2000delay][bennett2002delay].  Also, when bounds are
   obtained, they depend on the complete configuration, and must be
   recomputed when one flow is added.  (This is the dynamic
   calculation, Section 3.1.2.)

   A solution to deal with this issue is to reshape the flows at every
   hop.  This can be done with per-flow regulators (e.g., leaky bucket
   shapers), but this requires per-flow queuing and defeats the purpose
   of per-class queuing.  An alternative is the interleaved regulator,
   which reshapes individual flows without per-flow queuing
   ([Specht2016UBS], [IEEE8021Qcr]).  With an interleaved regulator,
   the packet at the head of the queue is regulated based on its (flow)
   regulation constraints; it is released at the earliest time at which
   this is possible without violating the constraint.  One key feature
   of per-flow and interleaved regulators is that they do not increase
   worst-case latency bounds [le_boudec_theory_2018].  Specifically,
   when an interleaved regulator is appended to a FIFO subsystem, it
   does not increase the worst-case delay of the latter.

   Figure 2 shows an example of a network with five nodes, using a
   per-class queuing mechanism and interleaved regulators as in
   Figure 1.  An end-to-end delay bound for flow f, traversing nodes 1
   to 5, is calculated as follows:

      end_to_end_latency_bound_of_flow_f = C12 + C23 + C34 + S4

   In the above formula, Cij is a bound on the aggregate response time
   of the queuing subsystem in node i and the interleaved regulator of
   node j, and S4 is a bound on the response time of the queuing
   subsystem in node 4 for flow f.  In fact, using the delay
   definitions in Section 3.2, Cij is a bound on the sum of the delays
   1,2,3,6 of node i and 4,5 of node j.  Similarly, S4 is a bound on
   the sum of the delays 1,2,3,6 of node 4.  A practical example of the
   queuing model and delay calculation is presented in Section 6.4.

                            f
         ----------------------------->
       +---+   +---+   +---+   +---+   +---+
       | 1 |---| 2 |---| 3 |---| 4 |---| 5 |
       +---+   +---+   +---+   +---+   +---+
            \__C12_/\__C23_/\__C34_/\_S4_/

          Figure 2: End-to-end latency computation example

   REMARK: The end-to-end delay bound calculation provided here gives a
   much better upper bound than computing an end-to-end delay bound by
   adding the delay bounds of each node in the path of a flow
   [TSNwithATS].
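   As a non-normative illustration, the following Python sketch
   transcribes the formula above for the topology of Figure 2; the
   numeric values are invented for the example.

      def regulated_e2e_bound(C, S_last):
          # C = [C12, C23, C34]: each Cij bounds the aggregate response
          # time of the queuing subsystem in node i plus the
          # interleaved regulator of node j.
          # S_last = S4: bounds the queuing subsystem of the last node.
          return sum(C) + S_last

      # Illustrative values, in seconds.
      print(regulated_e2e_bound([100e-6, 120e-6, 100e-6], 80e-6))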
4.3.  Ingress considerations

   A sender can be a DetNet node that uses exactly the same queuing
   methods as its adjacent DetNet transit node, so that the latency and
   buffer calculations at the first hop are indistinguishable from
   those at a later hop within the DetNet domain.  On the other hand,
   the sender may be DetNet unaware, in which case some conditioning of
   the flow may be necessary at the ingress DetNet transit node.

   This ingress conditioning typically consists of a FIFO with an
   output regulator that is compatible with the queuing employed by the
   DetNet transit node on its output port(s).  For some queuing
   methods, ingress conditioning simply requires extra buffer space in
   the queuing subsystem.  Ingress conditioning requirements for
   different queuing methods are mentioned in the sections, below,
   describing those queuing methods.

4.4.  Interspersed non-DetNet transit nodes

   It is sometimes desirable to build a network that has both DetNet-
   aware transit nodes and DetNet non-aware transit nodes, and for a
   DetNet flow to traverse an island of non-DetNet transit nodes, while
   still allowing the network to offer latency and congestion loss
   guarantees.  This is possible under certain conditions.

   In general, when passing through a non-DetNet island, the island
   causes delay variation in excess of what would be caused by DetNet
   nodes.  That is, the DetNet flow is "lumpier" after traversing the
   non-DetNet island.  DetNet guarantees for latency and buffer
   requirements can still be calculated and met if and only if the
   following are true:

   1.  The latency variation across the non-DetNet island must be
       bounded and calculable.

   2.  An ingress conditioning function (Section 4.3) may be required
       at the re-entry to the DetNet-aware domain.  This will, at
       least, require some extra buffering to accommodate the
       additional delay variation, and thus further increases the
       worst-case latency.

   The ingress conditioning is exactly the same problem as that of a
   sender at the edge of the DetNet domain.  The requirement for bounds
   on the latency variation across the non-DetNet island is typically
   the most difficult to achieve.  Without such a bound, it is obvious
   that DetNet cannot deliver its guarantees, so a non-DetNet island
   that cannot offer bounded latency variation cannot be used to carry
   a DetNet flow.

5.  Achieving zero congestion loss

   When the input rate to an output queue exceeds the output rate for a
   sufficient length of time, the queue must overflow.  This is
   congestion loss, and this is what deterministic networking seeks to
   avoid.

5.1.  A General Formula

   To avoid congestion losses, an upper bound on the backlog present in
   the regulator and queuing subsystem of Figure 1 must be computed
   during resource reservation.  This bound depends on the set of flows
   that use these queues, the details of the specific queuing
   mechanism, and an upper bound on the processing delay (4).
   The queue must contain the packet in transmission plus all other
   packets that are waiting to be selected for output.

   A conservative backlog bound that applies to all systems can be
   derived as follows.

   The backlog bound is counted in data units (bytes, or words of
   multiple bytes) that are relevant for buffer allocation.  For every
   class, we need one buffer space for the packet in transmission, plus
   space for the packets that are waiting to be selected for output.
   Excluding transmission and preemption times, the packets are waiting
   in the queue since reception of the last bit, for a duration equal
   to the processing delay (4) plus the queuing delays (5,6).

   Let

   o  nb_classes be the number of classes of traffic that may use this
      output port

   o  total_in_rate be the sum of the line rates of all input ports
      that send traffic of any class to this output port.  The value of
      total_in_rate is in data units (e.g., bytes) per second.

   o  nb_input_ports be the number of input ports that send traffic of
      any class to this output port

   o  max_packet_length be the maximum packet size for packets of any
      class that may be sent to this output port.  This is counted in
      data units.

   o  max_delay45 be an upper bound, in seconds, on the sum of the
      processing delay (4) and the queuing delays (5,6) for a packet of
      any class at this output port.

   Then a bound on the backlog of traffic of all classes in the queue
   at this output port is

      backlog_bound = (nb_classes + nb_input_ports) * max_packet_length
                      + total_in_rate * max_delay45
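   As a non-normative illustration, the following Python sketch
   transcribes this formula; the example values are invented.

      def backlog_bound(nb_classes, nb_input_ports, max_packet_length,
                        total_in_rate, max_delay45):
          # The result is in the same data units (e.g., bytes) as
          # max_packet_length and total_in_rate.
          return ((nb_classes + nb_input_ports) * max_packet_length
                  + total_in_rate * max_delay45)

      # Illustrative: 8 classes, 4 input ports each feeding this output
      # port at 125e6 bytes/s, 1522-byte maximum packets, and a 100
      # microsecond bound on the sum of delays 4, 5 and 6.
      print(backlog_bound(8, 4, 1522, 4 * 125e6, 100e-6))  # bytes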
6.  Queuing techniques

6.1.  Queuing data model

   Sophisticated queuing mechanisms are available in Layer 3 (L3; see,
   e.g., [RFC7806] for an overview).  In general, we assume that "Layer
   3" queues, shapers, meters, etc., are precisely the "regulators"
   shown in Figure 1.  The "queuing subsystems" in this figure are not
   the province solely of bridges; they are an essential part of any
   DetNet transit node.  As illustrated by numerous implementation
   examples, some of the "Layer 3" mechanisms described in documents
   such as [RFC7806] are often integrated, in an implementation, with
   the "Layer 2" mechanisms also implemented in the same node.  An
   integrated model is needed in order to successfully predict the
   interactions among the different queuing mechanisms needed in a
   network carrying both DetNet flows and non-DetNet flows.

   Figure 3 shows the general model for the flow of packets through the
   queues of a DetNet transit node.  Packets are assigned to a class of
   service.  The classes of service are mapped to some number of
   regulator queues.  Only DetNet/TSN packets pass through regulators.
   Queues compete for the selection of packets to be passed to queues
   in the queuing subsystem.  Packets again are selected for output
   from the queuing subsystem.

                                     |
   +---------------------------------V--------------------------------+
   |                  Class of Service Assignment                     |
   +--+------+----------+---------+-----------+-----+-------+-------+-+
      |      |          |         |           |     |       |       |
   +--V-+ +--V-+     +--V--+   +--V--+     +--V--+  |       |       |
   |Flow| |Flow|     |Flow |   |Flow |     |Flow |  |       |       |
   | 0  | | 1  | ... |  i  |   | i+1 | ... |  n  |  |       |       |
   | reg| | reg|     | reg |   | reg |     | reg |  |       |       |
   +--+-+ +--+-+     +--+--+   +--+--+     +--+--+  |       |       |
      |      |          |         |           |     |       |       |
   +--V------V----------V--+   +--V-----------V--+  |       |       |
   |    Trans. selection   |   |  Trans. select. |  |       |       |
   +----------+------------+   +-----+-----------+  |       |       |
              |                      |              |       |       |
           +--V--+                +--V--+        +--V--+ +--V--+ +--V--+
           | out |                | out |        | out | | out | | out |
           |queue|                |queue|        |queue| |queue| |queue|
           |  1  |                |  2  |        |  3  | |  4  | |  5  |
           +--+--+                +--+--+        +--+--+ +--+--+ +--+--+
              |                      |              |       |       |
   +----------V----------------------V--------------V-------V-------V-+
   |                     Transmission selection                       |
   +----------+----------------------+--------------+-------+-------+-+
              |                      |              |       |       |
              V                      V              V       V       V
       DetNet/TSN queue       DetNet/TSN queue    non-DetNet/TSN queues

            Figure 3: IEEE 802.1Q Queuing Model: Data flow

   Some relevant mechanisms are hidden in this figure, and are
   performed in the queue boxes:

   o  Discarding packets because a queue is full.

   o  Discarding packets marked "yellow" by a metering function, in
      preference to discarding "green" packets.

   Ideally, neither of these actions is performed on DetNet packets.
   Full queues for DetNet packets should occur only when a flow is
   misbehaving, and the DetNet QoS does not include "yellow" service
   for packets in excess of the committed rate.

   The Class of Service Assignment function can be quite complex, even
   in a bridge [IEEE8021Q], since the introduction of per-stream
   filtering and policing ([IEEE8021Q] clause 8.6.5.1).  In addition to
   the Layer 2 priority expressed in the 802.1Q VLAN tag, a DetNet
   transit node can utilize any of the following information to assign
   a packet to a particular class of service (queue):

   o  Input port.

   o  Selector based on a rotating schedule that starts at regular,
      time-synchronized intervals and has nanosecond precision.

   o  MAC addresses, VLAN ID, IP addresses, Layer 4 port numbers, DSCP
      ([I-D.ietf-detnet-ip], [I-D.ietf-detnet-mpls]).  (Work items are
      expected to add MPC and other indicators.)

   o  The Class of Service Assignment function can contain metering and
      policing functions.

   o  MPLS and/or pseudowire ([RFC6658]) labels.

   The "Transmission selection" function decides which queue is to
   transfer its oldest packet to the output port when a transmission
   opportunity arises.

6.2.  Preemption

   In [IEEE8021Q] and [IEEE8023], the transmission of a frame can be
   interrupted by one or more "express" frames, and then the
   interrupted frame can continue transmission.  This frame preemption
   is modeled as consisting of two MAC/PHY stacks, one for packets that
   can be interrupted, and one for packets that can interrupt the
   interruptible packets.  The Class of Service (queue) determines
   which packets are which.  Only one layer of preemption is supported
   -- a transmitter cannot have more than one interrupted frame in
   progress.  DetNet flows typically pass through the interrupting MAC.
   Best-effort queues pass through the interruptible MAC, and can thus
   be preempted.

6.3.  Time-scheduled queuing

   In [IEEE8021Q], the notion of time-scheduling queue gates is
   described in section 8.6.8.4.  Below every output queue (the lower
   row of queues in Figure 3) is a gate that permits or denies the
   queue to present data for transmission selection.  The gates are
   controlled by a rotating schedule that can be locked to a clock that
   is synchronized with other DetNet transit nodes.  The DetNet class
   of service can be supplied by queuing mechanisms based on time,
   rather than the regulator model in Figure 3.
   Generally speaking, this time-aware scheduling can be used as a
   layer 2 time division multiplexing (TDM) technique.

   Consider the static configuration of a deterministic network.  To
   provide an end-to-end latency guaranteed service, network nodes can
   support time-based behavior, which is determined by a gate control
   list (GCL).  The GCL defines the gate operation, in the open or
   closed state, with associated timing for each traffic class queue.
   A time slice with gate state "open" is called a transmission window.
   The time-based traffic scheduling must be coordinated among the
   DetNet transit nodes along the path from sender to receiver, to
   control the transmission of time-sensitive traffic.

   Ideally, all network devices are time synchronized, and static GCL
   configurations on all devices along the routed path are coordinated
   to ensure that the length of each transmission window fits the
   assigned frames, and that no two time windows for DetNet traffic on
   the same port overlap.  (DetNet flows' windows can overlap with
   best-effort windows, so that unused DetNet bandwidth is available to
   best-effort traffic.)  The processing delay, link delay, and output
   delay in transmitting are considered in the GCL computation.  The
   transmission window for a given flow may require that a time offset
   on consecutive hops be selected to reduce queuing delay as much as
   possible.  In this case, TSN/DetNet frames are transmitted in the
   assigned transmission window at every node along the routed path,
   with zero congestion loss and bounded end-to-end latency.  Then, the
   worst-case end-to-end latency of the flow can be derived from the
   GCL configuration.  For a TSN or DetNet frame, denote the time at
   which the transmission window on the last hop closes as
   gate_close_time_last_hop.  Assuming that the talker supports
   scheduled traffic behavior, it starts transmission at
   gate_open_time_on_talker.  Then the worst-case end-to-end delay of
   this flow is bounded by

      gate_close_time_last_hop - gate_open_time_on_talker
      + link_delay_last_hop.

   It should be noted that the scheduled traffic service relies on a
   synchronized network and coordinated GCL configuration.  Synthesis
   of GCLs on multiple nodes in a network is a scheduling problem
   considering all TSN/DetNet flows traversing the network, which is a
   non-deterministic polynomial-time hard (NP-hard) problem.  Also, at
   this writing, the scheduled traffic service supports no more than
   eight traffic classes, typically using up to seven priority classes
   and at least one best-effort class.
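   As a non-normative illustration, the following Python sketch
   transcribes the worst-case delay formula above; the values are
   invented for the example.

      def scheduled_e2e_bound(gate_open_time_on_talker,
                              gate_close_time_last_hop,
                              link_delay_last_hop):
          # Assumes a synchronized network, coordinated GCLs, and a
          # talker that transmits within its assigned window.
          return (gate_close_time_last_hop - gate_open_time_on_talker
                  + link_delay_last_hop)

      # Talker window opens at t = 0 s, the last hop's window closes
      # 500 microseconds later, last-hop link delay of 2 microseconds.
      print(scheduled_e2e_bound(0.0, 500e-6, 2e-6))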
6.4.  Credit-Based Shaper with Asynchronous Traffic Shaping

   Consider a network with a set of nodes (DetNet transit nodes and
   hosts) along with a set of flows between hosts.  Hosts are sources
   or destinations of flows.  There are four types of flows, namely,
   control-data traffic (CDT), class A, class B, and best effort (BE),
   in decreasing order of priority.  Flows of classes A and B are
   together referred to as AVB flows.  A subset of TSN functions, as
   described next, is assumed.

   It is also assumed that contention occurs only at the output port of
   a TSN node.  Each node output port performs per-class scheduling
   with eight classes: one for CDT, one for class A traffic, one for
   class B traffic, and five for BE traffic, denoted as BE0-BE4
   (according to the TSN standard).  In addition, each node output port
   also performs per-flow regulation for AVB flows using an interleaved
   regulator (IR), called an Asynchronous Traffic Shaper (ATS) in TSN.
   Thus, at each output port of a node, there is one interleaved
   regulator per input port and per class.  The detailed picture of the
   scheduling and regulation architecture at a node output port is
   given by Figure 4.  The packets received at a node input port for a
   given class are enqueued in the respective interleaved regulator at
   the output port.  Then, the packets from all the flows, including
   CDT and BE flows, are enqueued in a class-based FIFO system (CBFS)
   [TSNwithATS].

         +--+   +--+ +--+   +--+
         |  |   |  | |  |   |  |
         |IR|   |IR| |IR|   |IR|
         |  |   |  | |  |   |  |
         +-++XXX++-+ +-++XXX++-+
           |     |     |     |
           |     |     |     |
   +---+ +-v-XXX-v-+ +-v-XXX-v-+ +-----+ +-----+ +-----+ +-----+ +-----+
   |   | |         | |         | |Class| |Class| |Class| |Class| |Class|
   |CDT| | Class A | | Class B | | BE4 | | BE3 | | BE2 | | BE1 | | BE0 |
   |   | |         | |         | |     | |     | |     | |     | |     |
   +-+-+ +----+----+ +----+----+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+
     |        |           |         |       |       |       |       |
     |      +-v-+       +-v-+       |       |       |       |       |
     |      |CBS|       |CBS|       |       |       |       |       |
     |      +-+-+       +-+-+       |       |       |       |       |
     |        |           |         |       |       |       |       |
   +-v--------v-----------v---------v-------V-------v-------v-------v--+
   |                     Strict Priority selection                     |
   +--------------------------------+----------------------------------+
                                    |
                                    V

    Figure 4: Architecture of a TSN node output port with interleaved
                             regulators (IRs)

   The CBFS includes two Credit-Based Shaper (CBS) subsystems, one for
   each of classes A and B.  The CBS serves a packet from a class
   according to the available credit for that class.  The credit for
   each class A or B increases based on the idle slope, and decreases
   based on the send slope, both of which are parameters of the CBS.
   The CDT and BE0-BE4 flows in the CBFS are served by separate FIFO
   subsystems.  Then, packets from all flows are served by a
   transmission selection subsystem that serves packets from each class
   based on its priority.  All subsystems are non-preemptive.
   Guarantees for AVB traffic can be provided only if CDT traffic is
   bounded; it is assumed that the CDT traffic has a leaky-bucket
   arrival curve with two parameters, r_h as rate and b_h as bucket
   size; i.e., the amount of bits entering a node within a time
   interval t is bounded by r_h t + b_h.

   Additionally, it is assumed that the AVB flows are also regulated at
   their source according to a leaky-bucket arrival curve.  At the
   source hosts, the traffic satisfies its regulation constraint, i.e.,
   the delay due to the interleaved regulator at the hosts is ignored.
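   As a non-normative illustration, the following Python sketch checks
   whether a packet trace conforms to the leaky-bucket arrival curve
   r*t + b used above; it is a brute-force check, written for clarity
   rather than efficiency.

      def conforms_to_leaky_bucket(packets, r, b):
          # packets: list of (arrival_time, length) pairs sorted by
          # time; r and b in consistent units (e.g., bytes/s and
          # bytes).  Over every interval, the arriving traffic must
          # not exceed r * (interval length) + b.
          for i in range(len(packets)):
              total = 0
              for j in range(i, len(packets)):
                  total += packets[j][1]
                  if total > r * (packets[j][0] - packets[i][0]) + b:
                      return False
          return True

      # Two 64-byte packets 1 ms apart conform to (r, b) below.
      print(conforms_to_leaky_bucket([(0.0, 64), (0.001, 64)],
                                     r=100000, b=200))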
   At each DetNet transit node implementing an interleaved regulator,
   packets of multiple flows are processed in one FIFO queue; the
   packet at the head of the queue is regulated based on its leaky
   bucket parameters; it is released at the earliest time at which this
   is possible without violating the constraint.  The regulation
   parameters for a flow (leaky bucket rate and bucket size) are the
   same at its source and at all DetNet transit nodes along its path.
   A delay bound of the CBFS for an AVB flow f of class A or B can be
   computed if the following condition holds:

      sum of leaky bucket rates of all flows of this class at this node
      <= R, where R is given below for every class.

   If the condition holds, the delay bound is:

      d_f = T + (b_t - L_min_f)/R - L_min_f/c

   where L_min_f is the minimum packet length of flow f; c is the
   output link transmission rate; b_t is the sum of the b term (bucket
   size) for all the flows having the same class as flow f at this
   node.  Parameters R and T are calculated as follows for class A and
   class B, separately:

   If f is of class A:

      R = I_A (c - r_h)/c

      T = (L_nA + b_h + r_h L_n/c)/(c - r_h)

   where L_nA is the maximum packet length of class B and BE packets;
   L_n is the maximum packet length of classes A, B, and BE.

   If f is of class B:

      R = I_B (c - r_h)/c

      T = (L_BE + L_A + L_nA I_A/(c - I_A) + b_h + r_h L_n/c)/(c - r_h)

   where L_A is the maximum packet length of class A; L_BE is the
   maximum packet length of class BE.

   Then, an end-to-end delay bound is calculated by the formula in
   Section 4.2.2, where, for Cij:

      Cij = max(d_f')

   where f' is any flow that shares the same CBFS class with flow f at
   node i and the same interleaved regulator as flow f at node j.

   More information on the delay analysis in such a DetNet transit node
   is described in [TSNwithATS].

6.4.1.  Flow Admission

   The delay calculation requires some information about each node.
   For each node, it is required to know the idle slope of the CBS for
   each of classes A and B (I_A and I_B), as well as the transmission
   rate of the output link (c).  Furthermore, the maximum packet
   lengths of classes A, B, and BE must be known.  Moreover, the leaky
   bucket parameters of CDT (r_h, b_h) should be known.  To admit a
   flow or flows, their delay requirements should be guaranteed not to
   be violated.  As described in Section 3.1, the static and dynamic
   problems are addressed separately.  In either of the problems, the
   rate and delay should be guaranteed.  Thus,

   The static admission control:
      The leaky bucket parameters of all flows are known; therefore,
      for each flow a delay bound can be calculated.  The computed
      delay bound for every flow should not be more than its delay
      requirement.  Moreover, the sum of the rates of the flows (r_f)
      should not be more than the rate allocated to each class (R).  If
      these two conditions hold, the configuration is declared
      admissible.

   The dynamic admission control:
      For dynamic admission control, we allocate to every node and
      class A or B a static value for the rate (R) and the maximum
      burstiness (b_t).  In addition, for every node and every class A
      and B, two counters are maintained:

         R_acc is equal to the sum of the leaky-bucket rates of all
         flows of this class already admitted at this node; at all
         times, we must have:

            R_acc <= R,  (Eq. 1)

         b_acc is equal to the sum of the bucket sizes of all flows of
         this class already admitted at this node; at all times, we
         must have:

            b_acc <= b_t.  (Eq. 2)

      A new flow is admitted at this node if Eqs. (1) and (2) continue
      to be satisfied after adding its leaky bucket rate and bucket
      size to R_acc and b_acc.  A flow is admitted in the network if it
      is admitted at all nodes along its path.  When this happens, all
      variables R_acc and b_acc along its path must be incremented to
      reflect the addition of the flow.  Similarly, when a flow leaves
      the network, all variables R_acc and b_acc along its path must be
      decremented to reflect the removal of the flow.

   The choice of the static values of R and b_t at all nodes and
   classes must be done in a prior configuration phase; R controls the
   bandwidth allocated to this class at this node, and b_t affects the
   delay bound and the buffer requirement.  R must satisfy the
   constraints given in Annex L.1 of [IEEE8021Q].
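   As a non-normative illustration, the following Python sketch
   transcribes the per-node CBFS delay bound of Section 6.4 and the
   dynamic admission counters above; it assumes the admission condition
   of Section 6.4 holds, and the example numbers are invented.

      def cbfs_delay_bound(cls, c, I_A, I_B, r_h, b_h, b_t, L_min_f,
                           L_A, L_nA, L_BE, L_n):
          # d_f = T + (b_t - L_min_f)/R - L_min_f/c, with the
          # class-dependent R and T given in Section 6.4.
          if cls == "A":
              R = I_A * (c - r_h) / c
              T = (L_nA + b_h + r_h * L_n / c) / (c - r_h)
          else:  # class B
              R = I_B * (c - r_h) / c
              T = ((L_BE + L_A + L_nA * I_A / (c - I_A) + b_h
                    + r_h * L_n / c) / (c - r_h))
          return T + (b_t - L_min_f) / R - L_min_f / c

      class ClassAdmission:
          # Dynamic admission counters for one (node, class) pair,
          # maintaining Eqs. (1) and (2).
          def __init__(self, R, b_t):
              self.R, self.b_t = R, b_t
              self.R_acc, self.b_acc = 0.0, 0.0

          def try_admit(self, r_f, b_f):
              if (self.R_acc + r_f <= self.R
                      and self.b_acc + b_f <= self.b_t):
                  self.R_acc += r_f
                  self.b_acc += b_f
                  return True
              return False  # refused at this node

          def release(self, r_f, b_f):
              self.R_acc -= r_f
              self.b_acc -= b_f

      # Example: node with R = 60e6 bytes/s and b_t = 8000 bytes.
      adm = ClassAdmission(R=60e6, b_t=8000)
      print(adm.try_admit(r_f=10e6, b_f=1500))  # True

   A flow is admitted in the network only if try_admit succeeds at
   every node along its path, as described above.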
6.5.  IntServ

   Integrated services (IntServ) is an architecture that specifies the
   elements to guarantee quality of service (QoS) on networks.  To
   receive the guaranteed service, a flow must conform to a traffic
   specification (T-spec), and a reservation is made along the path
   only if the routers are able to guarantee the required bandwidth and
   buffer space.

   Consider a traffic model that conforms to a token bucket regulator
   (r, b), with

   o  Token bucket depth (b).

   o  Token bucket rate (r).

   The traffic specification can be described as an arrival curve:

      alpha(t) = b + rt

   This token bucket regulator requires that, during any time window t,
   the number of bits for the flow is limited by alpha(t) = b + rt.

   If resource reservation on a path is applied, the IntServ model of a
   router can be described as a rate-latency service curve beta(t):

      beta(t) = max(0, R(t-T))

   It describes that bits might have to wait up to T before being
   served with a rate greater than or equal to R.

   It should be noted that the guaranteed service rate R is a share of
   the link's bandwidth.  The choice of R is related to the
   specification of the flows that will transmit on this node.  For
   example, in a strict priority policy, considering a flow with
   priority j, its share of bandwidth may be R = c - sum(r_i), i < j.

6.6.  Cyclic Queuing and Forwarding

   In Cyclic Queuing and Forwarding (CQF), the outputs of each port are
   synchronized to a cycle timer with period Tc, and packets received
   during one cycle are transmitted during a subsequent cycle, as
   illustrated in Figure 6.

             0.7          1  (units of Tc)     2                   3
   DetNet transit node A out port 1
       |      a     <-DT->|         b         |         c         |  d
   ----+------------+-----+-------------------+-------------------+----
        \_____             \_____
              \_____             \_____   queue-to-queue delay = 1.3 Tc
                    \_____             \_____
                          \_____             \_____  DetNet transit node B
                                \_                 \_ queue assignment, in
       |                  |            |<-DT->|       port 2 to out 3
   ----+------------------+------------+------+-------------------+----
      0.3     time-->    1.3          2.0    2.3                 3.3

                           window to transfer
                           to buffer c --->  VVVVVVVVVVVV
               if dead time not                 window to transfer
               excessive       VVVVVVVVVVVVVVVVVVV <--- to buffer d
   DetNet transit node B out port 3
       |         a        |         b         |         c         |  d
   ----+------------------+-------------------+-------------------+----
       0      time-->     1                   2                   3

                      Figure 6: CQF timing diagram

   Figure 6 shows two DetNet transit nodes, A and B, including three
   timelines, for:

   1.  The output queues on port 1 in node A.

   2.  The input gate function ([IEEE8021Q], 8.6.5.1) that assigns
       packets received on port 2 of transit node B to output queues on
       port 3 of transit node B.

   3.  The output queues on port 3 of node B.

   In this figure, the output ports on the two nodes are synchronized,
   and a new buffer starts transmitting at each tick, shown as 0, 1, 2,
   ...  The output times shown for timelines 1 and 3 are the times at
   which packets are selected for output, which is the start point of
   the output delay (1) of Figure 1.  The queue assignment times on
   timeline 2 take place at the beginning of the queuing delay (6) of
   Figure 1.  Time-based CQF, as described here, does not require any
   regulator queues.  In the case shown in the figure, the total time
   for delays 1 through 6 of Figure 1 is 1.3 Tc.  Of course, any value
   is possible.
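   One way to read the timing relationships in Figure 6 is sketched in
   the following non-normative Python fragment: if output buffer swaps
   are aligned to multiples of Tc on every node, the downstream
   queue-assignment switching points fall at the queue-to-queue delay
   modulo Tc (0.3 Tc in the figure, where that delay is 1.3 Tc).  This
   is an interpretation of the figure, not a formula from the text.

      def cqf_dead_time_offset(queue_to_queue_delay, Tc):
          # Offset, within a cycle, of the downstream switching points
          # relative to the output ticks; see Section 6.6.1 for how
          # this dead time determines whether two or three buffers are
          # needed.
          return queue_to_queue_delay % Tc

      print(cqf_dead_time_offset(1.3, 1.0))  # units of Tc -> ~0.3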
6.6.1.  CQF timing sequence

   In general, as shown in Figure 6, the windows for buffer assignment
   do not align perfectly with the windows for buffer transmission.
   The input gates (the center timeline in Figure 6) must switch from
   using one buffer to using another buffer in sync with the (delayed)
   received data, at times offset by the dead time from the output
   buffer switching (the bottom timeline in Figure 6).

   If the dead time DT in Figure 6 is not excessive, then it is
   feasible to subtract the dead time from the cycle time Tc, and use
   the remainder as the input window.  In the example in Figure 6,
   packets from node A buffer a can be transferred from the input port
   to node B's buffer c during the window shown by the upper row
   "VVVV...".  Input must cease by time = 2.0, because that is when
   transit node B starts transmitting the contents of buffer c.  In
   this case, only two output buffers are in use, one filling and one
   outputting.

   If the dead time is too large (e.g., if the delays placed the middle
   timeline's switching points at n+0.9, instead of n+0.3), three
   buffers are used by node B.  This case is shown by the lower row
   "VVVV..." in Figure 6.  In this case, node B places the data
   received from node A buffer a into node B buffer d between the times
   1.3 and 2.3 in Figure 6.  Buffer b starts outputting at time = 2.0,
   while buffer d is filling.  Thus, three buffers are in use, one
   filling, one waiting, and one emptying.

6.6.2.  CQF latency calculation

   The per-hop latency is trivially determined by the wire delay and
   the queuing delay.  Since the wire delay is either absorbed into the
   queuing delay (the dead time is small and two buffers are used) or
   padded out to a whole cycle time Tc (three buffers are used), the
   per-hop latency is always an integral number of cycle times Tc, with
   a latency variation at the output of the final hop of Tc.

   Ingress conditioning (Section 4.3) may be required if the source of
   a DetNet flow does not, itself, employ CQF.

   Note that there are no per-flow parameters in the CQF technique.
   Therefore, there is no requirement for per-hop configuration when a
   new DetNet flow is added to a network, except perhaps for ingress
   checks to see that the transmitter does not exceed the contracted
   bandwidth.
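   As a non-normative illustration, the following Python sketch
   computes a CQF end-to-end latency bound from the per-hop cycle
   counts; the assumption that a two-buffer hop contributes one cycle
   and a three-buffer hop two cycles is this sketch's reading of
   Section 6.6.1, and the example values are invented.

      def cqf_latency_bound(cycles_per_hop, Tc):
          # cycles_per_hop: integral number of cycle times contributed
          # by each hop (assumed: 1 for a two-buffer hop, 2 for a
          # three-buffer hop).  Actual latencies vary by up to Tc at
          # the output of the final hop, below this bound.
          return sum(cycles_per_hop) * Tc

      # Illustrative: five hops, one of which needs a third buffer.
      print(cqf_latency_bound([1, 1, 2, 1, 1], Tc=250e-6))  # seconds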
Pan, "On Queuing, Marking, and Dropping", 1119 RFC 7806, DOI 10.17487/RFC7806, April 2016, 1120 . 1122 [RFC8578] Grossman, E., Ed., "Deterministic Networking Use Cases", 1123 RFC 8578, DOI 10.17487/RFC8578, May 2019, 1124 . 1126 7.2. Informative References 1128 [bennett2002delay] 1129 J.C.R. Bennett, K. Benson, A. Charny, W.F. Courtney, and 1130 J.-Y. Le Boudec, "Delay Jitter Bounds and Packet Scale 1131 Rate Guarantee for Expedited Forwarding", 1132 . 1134 [charny2000delay] 1135 A. Charny and J.-Y. Le Boudec, "Delay Bounds in a Network 1136 with Aggregate Scheduling", . 1139 [IEEE8021Q] 1140 IEEE 802.1, "IEEE Std 802.1Q-2018: IEEE Standard for Local 1141 and metropolitan area networks - Bridges and Bridged 1142 Networks", 2018, 1143 . 1145 [IEEE8021Qcr] 1146 IEEE 802.1, "IEEE P802.1Qcr: IEEE Draft Standard for Local 1147 and metropolitan area networks - Bridges and Bridged 1148 Networks - Amendment: Asynchronous Traffic Shaping", 2017, 1149 . 1151 [IEEE8021TSN] 1152 IEEE 802.1, "IEEE 802.1 Time-Sensitive Networking (TSN) 1153 Task Group", . 1155 [IEEE8023] 1156 IEEE 802.3, "IEEE Std 802.3-2018: IEEE Standard for 1157 Ethernet", 2018, 1158 . 1160 [le_boudec_theory_2018] 1161 J.-Y. Le Boudec, "A Theory of Traffic Regulators for 1162 Deterministic Networks with Application to Interleaved 1163 Regulators", . 1165 [NetCalBook] 1166 Le Boudec, Jean-Yves, and Patrick Thiran, "Network 1167 calculus: a theory of deterministic queuing systems for 1168 the internet", 2001, . 1170 [Specht2016UBS] 1171 J. Specht and S. Samii, "Urgency-Based Scheduler for Time- 1172 Sensitive Switched Ethernet Networks", 1173 . 1175 [TSNwithATS] 1176 E. Mohammadpour, E. Stai, M. Mohiuddin, and J.-Y. Le 1177 Boudec, "End-to-end Latency and Backlog Bounds in Time- 1178 Sensitive Networking with Credit Based Shapers and 1179 Asynchronous Traffic Shaping", 1180 . 1182 Authors' Addresses 1184 Norman Finn 1185 Huawei Technologies Co. Ltd 1186 3101 Rio Way 1187 Spring Valley, California 91977 1188 US 1190 Phone: +1 925 980 6430 1191 Email: nfinn@nfinnconsulting.com 1193 Jean-Yves Le Boudec 1194 EPFL 1195 IC Station 14 1196 Lausanne EPFL 1015 1197 Switzerland 1199 Email: jean-yves.leboudec@epfl.ch 1201 Ehsan Mohammadpour 1202 EPFL 1203 IC Station 14 1204 Lausanne EPFL 1015 1205 Switzerland 1207 Email: ehsan.mohammadpour@epfl.ch 1208 Jiayi Zhang 1209 Huawei Technologies Co. Ltd 1210 Q22, No.156 Beiqing Road 1211 Beijing 100095 1212 China 1214 Email: zhangjiayi11@huawei.com 1216 Balazs Varga 1217 Ericsson 1218 Konyves Kalman krt. 11/B 1219 Budapest 1097 1220 Hungary 1222 Email: balazs.a.varga@ericsson.com 1224 Janos Farkas 1225 Ericsson 1226 Konyves Kalman krt. 11/B 1227 Budapest 1097 1228 Hungary 1230 Email: janos.farkas@ericsson.com