idnits 2.17.1 draft-ietf-detnet-bounded-latency-04.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (March 22, 2021) is 1124 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Missing Reference: 'Network' is mentioned on line 1010, but not defined == Outdated reference: A later version (-05) exists of draft-ietf-detnet-controller-plane-framework-00 Summary: 0 errors (**), 0 flaws (~~), 3 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 DetNet N. Finn 3 Internet-Draft Huawei Technologies Co. Ltd 4 Intended status: Informational J-Y. Le Boudec 5 Expires: September 23, 2021 E. Mohammadpour 6 EPFL 7 J. Zhang 8 Huawei Technologies Co. Ltd 9 B. Varga 10 J. Farkas 11 Ericsson 12 March 22, 2021 14 DetNet Bounded Latency 15 draft-ietf-detnet-bounded-latency-04 17 Abstract 19 This document references specific queuing mechanisms, defined in 20 other documents, that can be used to control packet transmission at 21 each output port and achieve the DetNet qualities of service. 
This 22 document presents a timing model for sources, destinations, and the 23 DetNet transit nodes that relay packets that is applicable to all of 24 those referenced queuing mechanisms. Using the model presented in 25 this document, it should be possible for an implementor, user, or 26 standards development organization to select a particular set of 27 queuing mechanisms for each device in a DetNet network, and to select 28 a resource reservation algorithm for that network, so that those 29 elements can work together to provide the DetNet service. 31 Status of This Memo 33 This Internet-Draft is submitted in full conformance with the 34 provisions of BCP 78 and BCP 79. 36 Internet-Drafts are working documents of the Internet Engineering 37 Task Force (IETF). Note that other groups may also distribute 38 working documents as Internet-Drafts. The list of current Internet- 39 Drafts is at https://datatracker.ietf.org/drafts/current/. 41 Internet-Drafts are draft documents valid for a maximum of six months 42 and may be updated, replaced, or obsoleted by other documents at any 43 time. It is inappropriate to use Internet-Drafts as reference 44 material or to cite them other than as "work in progress." 46 This Internet-Draft will expire on September 23, 2021. 48 Copyright Notice 50 Copyright (c) 2021 IETF Trust and the persons identified as the 51 document authors. All rights reserved. 53 This document is subject to BCP 78 and the IETF Trust's Legal 54 Provisions Relating to IETF Documents 55 (https://trustee.ietf.org/license-info) in effect on the date of 56 publication of this document. Please review these documents 57 carefully, as they describe your rights and restrictions with respect 58 to this document. Code Components extracted from this document must 59 include Simplified BSD License text as described in Section 4.e of 60 the Trust Legal Provisions and are provided without warranty as 61 described in the Simplified BSD License. 63 Table of Contents 65 1. 
Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 66 2. Terminology and Definitions . . . . . . . . . . . . . . . . . 4 67 3. DetNet bounded latency model . . . . . . . . . . . . . . . . 4 68 3.1. Flow admission . . . . . . . . . . . . . . . . . . . . . 4 69 3.1.1. Static latency calculation . . . . . . . . . . . . . 4 70 3.1.2. Dynamic latency calculation . . . . . . . . . . . . . 5 71 3.2. Relay node model . . . . . . . . . . . . . . . . . . . . 6 72 4. Computing End-to-end Delay Bounds . . . . . . . . . . . . . . 8 73 4.1. Non-queuing delay bound . . . . . . . . . . . . . . . . . 8 74 4.2. Queuing delay bound . . . . . . . . . . . . . . . . . . . 9 75 4.2.1. Per-flow queuing mechanisms . . . . . . . . . . . . . 9 76 4.2.2. Aggregate queuing mechanisms . . . . . . . . . . . . 9 77 4.3. Ingress considerations . . . . . . . . . . . . . . . . . 10 78 4.4. Interspersed DetNet-unaware transit nodes . . . . . . . . 11 79 5. Achieving zero congestion loss . . . . . . . . . . . . . . . 11 80 6. Queuing techniques . . . . . . . . . . . . . . . . . . . . . 12 81 6.1. Queuing data model . . . . . . . . . . . . . . . . . . . 13 82 6.2. Frame Preemption . . . . . . . . . . . . . . . . . . . . 15 83 6.3. Time Aware Shaper . . . . . . . . . . . . . . . . . . . . 15 84 6.4. Credit-Based Shaper with Asynchronous Traffic Shaping . . 16 85 6.4.1. Delay Bound Calculation . . . . . . . . . . . . . . . 18 86 6.4.2. Flow Admission . . . . . . . . . . . . . . . . . . . 19 87 6.5. IntServ . . . . . . . . . . . . . . . . . . . . . . . . . 20 88 6.6. Cyclic Queuing and Forwarding . . . . . . . . . . . . . . 21 89 7. Example application on DetNet IP network . . . . . . . . . . 22 90 8. Security considerations . . . . . . . . . . . . . . . . . . . 24 91 9. IANA considerations . . . . . . . . . . . . . . . . . . . . . 24 92 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 24 93 10.1. Normative References . . . . . . . . . . . . . . . . . . 24 94 10.2. 
Informative References . . . . . . . . . . . . . . . . . 25 95 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 27 97 1. Introduction 99 The ability for IETF Deterministic Networking (DetNet) or IEEE 802.1 100 Time-Sensitive Networking (TSN, [IEEE8021TSN]) to provide the DetNet 101 services of bounded latency and zero congestion loss depends upon A) 102 configuring and allocating network resources for the exclusive use of 103 DetNet flows; B) identifying, in the data plane, the resources to be 104 utilized by any given packet, and C) the detailed behavior of those 105 resources, especially transmission queue selection, so that latency 106 bounds can be reliably assured. 108 As explained in [RFC8655], DetNet flows are characterized by 1) a 109 maximum bandwidth, guaranteed either by the transmitter or by strict 110 input metering; and 2) a requirement for a guaranteed worst-case end- 111 to-end latency. That latency guarantee, in turn, provides the 112 opportunity for the network to supply enough buffer space to 113 guarantee zero congestion loss. 115 To be used by the applications identified in [RFC8578], it must be 116 possible to calculate, before the transmission of a DetNet flow 117 commences, both the worst-case end-to-end network latency, and the 118 amount of buffer space required at each hop to ensure against 119 congestion loss. 121 This document references specific queuing mechanisms, defined in 122 [RFC8655], that can be used to control packet transmission at each 123 output port and achieve the DetNet qualities of service. This 124 document presents a timing model for sources, destinations, and the 125 DetNet transit nodes that relay packets that is applicable to all of 126 those referenced queuing mechanisms. It furthermore provides end-to- 127 end delay bound and backlog bound computations for such mechanisms 128 that can be used by the control plane to provide DetNet QoS. 
130 Using the model presented in this document, it should be possible for 131 an implementor, user, or standards development organization to select 132 a particular set of queuing mechanisms for each device in a DetNet 133 network, and to select a resource reservation algorithm for that 134 network, so that those elements can work together to provide the 135 DetNet service. Section 7 provides an example application of this 136 document to a DetNet IP network with a combination of different queuing 137 mechanisms. 139 This document does not specify any resource reservation protocol or 140 control plane function. It does not describe all of the requirements 141 for that protocol or control plane function. It does describe 142 requirements for such resource reservation methods, and for queuing 143 mechanisms that, if met, will enable them to work together. 145 2. Terminology and Definitions 147 This document uses the terms defined in [RFC8655]. 149 3. DetNet bounded latency model 151 3.1. Flow admission 153 This document assumes that the following paradigm is used to admit DetNet 154 flows: 156 1. Perform any configuration required by the DetNet transit nodes in 157 the network for aggregates of DetNet flows. This configuration 158 is done beforehand, and not tied to any particular DetNet flow. 160 2. Characterize the new DetNet flow, particularly in terms of 161 required bandwidth. 163 3. Establish the path that the DetNet flow will take through the 164 network from the source to the destination(s). This can be a 165 point-to-point or a point-to-multipoint path. 167 4. Compute the worst-case end-to-end latency for the DetNet flow, 168 using one of the methods below (Section 3.1.1, Section 3.1.2). 169 In the process, determine whether sufficient resources are 170 available for the DetNet flow to guarantee the required latency 171 and to provide zero congestion loss. 173 5. Assuming that the resources are available, commit those resources 174 to the DetNet flow.
This may or may not require adjusting the 175 parameters that control the filtering and/or queuing mechanisms 176 at each hop along the DetNet flow's path. 178 This paradigm can be implemented using peer-to-peer protocols or 179 using a central controller. In some situations, a lack of resources 180 can require backtracking and recursing through this list. 182 Issues such as service preemption of a DetNet flow in favor of 183 another, when resources are scarce, are not considered here. Also 184 not addressed is the question of how to choose the path to be taken 185 by a DetNet flow. 187 3.1.1. Static latency calculation 189 The static problem: 190 Given a network and a set of DetNet flows, compute an end-to- 191 end latency bound (if computable) for each DetNet flow, and 192 compute the resources, particularly buffer space, required in 193 each DetNet transit node to achieve zero congestion loss. 195 In this calculation, all of the DetNet flows are known before the 196 calculation commences. This problem is of interest to relatively 197 static networks, or static parts of larger networks. It provides 198 bounds on delay and buffer size. The calculations can be extended to 199 provide global optimizations, such as altering the path of one DetNet 200 flow in order to make resources available to another DetNet flow with 201 tighter constraints. 203 The static latency calculation is not limited only to static 204 networks; the entire calculation for all DetNet flows can be repeated 205 each time a new DetNet flow is created or deleted. If some already- 206 established DetNet flow would be pushed beyond its latency 207 requirements by the new DetNet flow, then the new DetNet flow can be 208 refused, or some other suitable action taken. 210 This calculation may be more difficult to perform than that of the 211 dynamic calculation (Section 3.1.2), because the DetNet flows passing 212 through one port on a DetNet transit node affect each other's 213 latency.
The effects can even be circular, from a node A to B to C 214 and back to A. On the other hand, the static calculation can often 215 accommodate queuing methods, such as transmission selection by strict 216 priority, that are unsuitable for the dynamic calculation. 218 3.1.2. Dynamic latency calculation 220 The dynamic problem: 221 Given a network whose maximum capacity for DetNet flows is 222 bounded by a set of static configuration parameters applied 223 to the DetNet transit nodes, and given just one DetNet flow, 224 compute the worst-case end-to-end latency that can be 225 experienced by that flow, no matter what other DetNet flows 226 (within the network's configured parameters) might be created 227 or deleted in the future. Also, compute the resources, 228 particularly buffer space, required in each DetNet transit 229 node to achieve zero congestion loss. 231 This calculation is dynamic, in the sense that DetNet flows can be 232 added or deleted at any time, with a minimum of computation effort, 233 and without affecting the guarantees already given to other DetNet 234 flows. 236 The choice of queuing methods is critical to the applicability of the 237 dynamic calculation. Some queuing methods (e.g. CQF, Section 6.6) 238 make it easy to configure bounds on the network's capacity, and to 239 make independent calculations for each DetNet flow. Some other 240 queuing methods (e.g. strict priority with the credit-based shaper 241 defined in [IEEE8021Q] section 8.6.8.2) can be used for dynamic 242 DetNet flow creation, but yield poorer latency and buffer space 243 guarantees than when that same queuing method is used for static 244 DetNet flow creation (Section 3.1.1). 246 3.2. Relay node model 248 A model for the operation of a DetNet transit node is required, in 249 order to define the latency and buffer calculations. 
In Figure 1 we 250 see a breakdown of the per-hop latency experienced by a packet 251 passing through a DetNet transit node, in terms that are suitable for 252 computing both hop-by-hop latency and per-hop buffer requirements. 254 DetNet transit node A DetNet transit node B 255 +-------------------------+ +------------------------+ 256 | Queuing | | Queuing | 257 | Regulator subsystem | | Regulator subsystem | 258 | +-+-+-+-+ +-+-+-+-+ | | +-+-+-+-+ +-+-+-+-+ | 259 -->+ | | | | | | | | | + +------>+ | | | | | | | | | + +---> 260 | +-+-+-+-+ +-+-+-+-+ | | +-+-+-+-+ +-+-+-+-+ | 261 | | | | 262 +-------------------------+ +------------------------+ 263 |<->|<------>|<------->|<->|<---->|<->|<------>|<------>|<->|<-- 264 2,3 4 5 6 1 2,3 4 5 6 1 2,3 265 1: Output delay 4: Processing delay 266 2: Link delay 5: Regulation delay 267 3: Frame preemption delay 6: Queuing delay 269 Figure 1: Timing model for DetNet or TSN 271 In Figure 1, we see two DetNet transit nodes that are connected via a 272 link. In this model, the only queues, that we deal with explicitly, 273 are attached to the output port; other queues are modeled as 274 variations in the other delay times. (E.g., an input queue could be 275 modeled as either a variation in the link delay (2) or the processing 276 delay (4).) There are six delays that a packet can experience from 277 hop to hop. 279 1. Output delay 280 The time taken from the selection of a packet for output from a 281 queue to the transmission of the first bit of the packet on the 282 physical link. If the queue is directly attached to the physical 283 port, output delay can be a constant. But, in many 284 implementations, the queuing mechanism in a forwarding ASIC is 285 separated from a multi-port MAC/PHY, in a second ASIC, by a 286 multiplexed connection. This causes variations in the output 287 delay that are hard for the forwarding node to predict or control. 289 2. 
Link delay 290 The time taken from the transmission of the first bit of the 291 packet to the reception of the last bit, assuming that the 292 transmission is not suspended by a frame preemption event. This 293 delay has two components, the first-bit-out to first-bit-in delay 294 and the first-bit-in to last-bit-in delay that varies with packet 295 size. The former is typically measured by the Precision Time 296 Protocol and is constant (see [RFC8655]). However, a virtual 297 "link" could exhibit a variable link delay. 299 3. Frame preemption delay 300 If the packet is interrupted in order to transmit another packet 301 or packets, (e.g. [IEEE8023] clause 99 frame preemption) an 302 arbitrary delay can result. 304 4. Processing delay 305 This delay covers the time from the reception of the last bit of 306 the packet to the time the packet is enqueued in the regulator 307 (Queuing subsystem, if there is no regulation). This delay can be 308 variable, and depends on the details of the operation of the 309 forwarding node. 311 5. Regulator delay 312 This is the time spent from the insertion of the last bit of a 313 packet into a regulation queue until the time the packet is 314 declared eligible according to its regulation constraints. We 315 assume that this time can be calculated based on the details of 316 regulation policy. If there is no regulation, this time is zero. 318 6. Queuing subsystem delay 319 This is the time spent for a packet from being declared eligible 320 until being selected for output on the next link. We assume that 321 this time is calculable based on the details of the queuing 322 mechanism. If there is no regulation, this time is from the 323 insertion of the packet into a queue until it is selected for 324 output on the next link. 
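The six-component decomposition above can be summarized in a short sketch. This is an illustrative model only, assuming each component has a known per-hop upper bound; the names and numeric values below are hypothetical, not taken from any referenced specification.

```python
# Sketch: per-hop delay decomposition from Figure 1 (hypothetical values).
# Components 1-4 are the "non-queuing" delays; 5-6 are the "queuing" delays.
from dataclasses import dataclass

@dataclass
class HopDelayBounds:
    output: float       # 1: queue selection to first bit on the link
    link: float         # 2: first bit out to last bit in
    preemption: float   # 3: worst-case frame preemption interruption
    processing: float   # 4: last bit in to enqueued in the regulator
    regulation: float   # 5: regulator eligibility delay (0 if no regulator)
    queuing: float      # 6: declared eligible until selected for output

    def non_queuing(self) -> float:
        # Delays 1,2,3,4: independent of the flow's T-SPEC (Section 4.1)
        return self.output + self.link + self.preemption + self.processing

    def total(self) -> float:
        return self.non_queuing() + self.regulation + self.queuing

hop = HopDelayBounds(output=1e-6, link=5e-6, preemption=12e-6,
                     processing=4e-6, regulation=20e-6, queuing=50e-6)
print(hop.total())  # worst-case per-hop latency bound, in seconds
```

Summing total() over all hops gives a conservative end-to-end bound; Section 4 refines this for queuing mechanisms where the end-to-end queuing bound is smaller than the per-hop sum.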
326 Not shown in Figure 1 are the other output queues that we presume are 327 also attached to that same output port as the queue shown, and 328 against which this shown queue competes for transmission 329 opportunities. 331 The initial and final measurement point in this analysis (that is, 332 the definition of a "hop") is the point at which a packet is selected 333 for output. In general, any queue selection method that is suitable 334 for use in a DetNet network includes a detailed specification as to 335 exactly when packets are selected for transmission. Any variations 336 in any of the delay times 1-4 result in a need for additional buffers 337 in the queue. If all delays 1-4 are constant, then any variation in 338 the time at which packets are inserted into a queue depends entirely 339 on the timing of packet selection in the previous node. If the 340 delays 1-4 are not constant, then additional buffers are required in 341 the queue to absorb these variations. Thus: 343 o Variations in output delay (1) require buffers to absorb that 344 variation in the next hop, so the output delay variations of the 345 previous hop (on each input port) must be known in order to 346 calculate the buffer space required on this hop. 348 o Variations in processing delay (4) require additional output 349 buffers in the queues of that same DetNet transit node. Depending 350 on the details of the queueing subsystem delay (6) calculations, 351 these variations need not be visible outside the DetNet transit 352 node. 354 4. Computing End-to-end Delay Bounds 356 4.1. Non-queuing delay bound 358 End-to-end delay bounds can be computed using the delay model in 359 Section 3.2. Here, it is important to be aware that for several 360 queuing mechanisms, the end-to-end delay bound is less than the sum 361 of the per-hop delay bounds. 
An end-to-end delay bound for one 362 DetNet flow can be computed as 364 end_to_end_delay_bound = non_queuing_delay_bound + 365 queuing_delay_bound 367 The two terms in the above formula are computed as follows. 369 First, at the h-th hop along the path of this DetNet flow, obtain an 370 upper bound per-hop_non_queuing_delay_bound[h] on the sum of the 371 bounds over the delays 1,2,3,4 of Figure 1. These upper bounds are 372 expected to depend on the specific technology of the DetNet transit 373 node at the h-th hop but not on the T-SPEC of this DetNet flow. Then 374 set non_queuing_delay_bound = the sum of per- 375 hop_non_queuing_delay_bound[h] over all hops h. 377 Second, compute queuing_delay_bound as an upper bound to the sum of 378 the queuing delays along the path. The value of queuing_delay_bound 379 depends on the T-SPEC of this DetNet flow and possibly of other flows 380 in the network, as well as the specifics of the queuing mechanisms 381 deployed along the path of this DetNet flow. The computation of 382 queuing_delay_bound is described separately in 383 Section 4.2. 385 4.2. Queuing delay bound 387 For several queuing mechanisms, queuing_delay_bound is less than the 388 sum of upper bounds on the queuing delays (5,6) at every hop. This 389 occurs with (1) per-flow queuing, and (2) aggregate queuing with 390 regulators, as explained in Section 4.2.1, Section 4.2.2, and 391 Section 6. 393 For other queuing mechanisms, the only available value of 394 queuing_delay_bound is the sum of the per-hop queuing delay bounds. 395 In such cases, the computation of per-hop queuing delay bounds must 396 account for the fact that the T-SPEC of a DetNet flow is no longer 397 satisfied at the ingress of a hop, since burstiness increases as one 398 flow traverses one DetNet transit node. 400 4.2.1. Per-flow queuing mechanisms 402 With such mechanisms, each flow uses a separate queue inside every 403 node.
The service for each queue is abstracted with a guaranteed 404 rate and a latency. For every DetNet flow, a per-node delay bound as 405 well as an end-to-end delay bound can be computed from the traffic 406 specification of this DetNet flow at its source and from the values 407 of rates and latencies at all nodes along its path. The per-flow 408 queuing is used in IntServ. Details of calculation for IntServ are 409 described in Section 6.5. 411 4.2.2. Aggregate queuing mechanisms 413 With such mechanisms, multiple flows are aggregated into macro-flows 414 and there is one FIFO queue per macro-flow. A practical example is 415 the credit-based shaper defined in section 8.6.8.2 of [IEEE8021Q] 416 where a macro-flow is called a "class". One key issue in this 417 context is how to deal with the burstiness cascade: individual flows 418 that share a resource dedicated to a macro-flow may see their 419 burstiness increase, which may in turn cause increased burstiness to 420 other flows downstream of this resource. Computing delay upper 421 bounds for such cases is difficult, and in some conditions impossible 422 [charny2000delay][bennett2002delay]. Also, when bounds are obtained, 423 they depend on the complete configuration, and must be recomputed 424 when one flow is added. (The dynamic calculation, Section 3.1.2.) 426 A solution to deal with this issue for the DetNet flows is to reshape 427 them at every hop. This can be done with per-flow regulators (e.g. 428 leaky bucket shapers), but this requires per-flow queuing and defeats 429 the purpose of aggregate queuing. An alternative is the interleaved 430 regulator, which reshapes individual DetNet flows without per-flow 431 queuing ([Specht2016UBS], [IEEE8021Qcr]). With an interleaved 432 regulator, the packet at the head of the queue is regulated based on 433 its (flow) regulation constraints; it is released at the earliest 434 time at which this is possible without violating the constraint. 
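The head-of-line behavior just described can be sketched with the Length-Rate Quotient (LRQ) spacing rule of [Specht2016UBS]: successive packets of the same flow are separated by at least previous_length / flow_rate, and a packet behind an ineligible head must wait. The function and parameter names below are hypothetical, and the sketch ignores clock granularity and packetization details.

```python
# Sketch of an interleaved regulator: one FIFO, head-of-line regulation
# per flow, using the LRQ rule. Times in seconds, lengths in bits.
from collections import namedtuple

Packet = namedtuple("Packet", "flow length arrival")

def interleaved_regulator(packets, flow_rate):
    """Return release times for packets given in FIFO arrival order.

    flow_rate: dict flow_id -> committed rate in bits/s.
    The head packet is held until its flow's spacing constraint is met;
    packets behind it wait, preserving FIFO order.
    """
    next_eligible = {}        # flow -> earliest time its next packet may leave
    head_released_at = 0.0    # release time of the previous head of the queue
    releases = []
    for p in packets:
        eligible = max(p.arrival, next_eligible.get(p.flow, 0.0))
        release = max(eligible, head_released_at)  # FIFO: cannot pass the head
        releases.append(release)
        head_released_at = release
        # LRQ: next packet of this flow waits length/rate after this release
        next_eligible[p.flow] = release + p.length / flow_rate[p.flow]
    return releases

pkts = [Packet("f1", 8000, 0.0), Packet("f2", 8000, 0.0),
        Packet("f1", 8000, 0.001)]
print(interleaved_regulator(pkts, {"f1": 1e6, "f2": 1e6}))
```

In this example the second f1 packet is held until 8 ms after the first (8000 bits at 1 Mb/s), while the interleaved f2 packet is released immediately because the head of the queue was already eligible.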
One 435 key feature of per-flow and interleaved regulators is that they do not 436 increase worst-case latency bounds [le_boudec2018theory]. 437 Specifically, when an interleaved regulator is appended to a FIFO 438 subsystem, it does not increase the worst-case delay of the latter. 440 Figure 2 shows an example of a network with five nodes, an aggregate 441 queuing mechanism, and interleaved regulators as in Figure 1. An end- 442 to-end delay bound for DetNet flow f, traversing nodes 1 to 5, is 443 calculated as follows: 445 end_to_end_latency_bound_of_flow_f = C12 + C23 + C34 + S4 447 In the above formula, Cij is a bound on the delay of the queuing 448 subsystem in node i and interleaved regulator of node j, and S4 is a 449 bound on the delay of the queuing subsystem in node 4 for DetNet flow 450 f. In fact, using the delay definitions in Section 3.2, Cij is a 451 bound on the sum of the delays 1,2,3,6 of node i and 4,5 of node j. 452 Similarly, S4 is a bound on the sum of the delays 1,2,3,6 of node 4. A 453 practical example of queuing model and delay calculation is presented in 454 Section 6.4. 456 f 457 -----------------------------> 458 +---+ +---+ +---+ +---+ +---+ 459 | 1 |---| 2 |---| 3 |---| 4 |---| 5 | 460 +---+ +---+ +---+ +---+ +---+ 461 \__C12_/\__C23_/\__C34_/\_S4_/ 463 Figure 2: End-to-end delay computation example 465 REMARK: The end-to-end delay bound calculation provided here gives a 466 much better upper bound in comparison with end-to-end delay bound 467 computation by adding the delay bounds of each node in the path of a 468 DetNet flow [TSNwithATS]. 470 4.3. Ingress considerations 472 A sender can be a DetNet node which uses exactly the same queuing 473 methods as its adjacent DetNet transit node, so that the delay and 474 buffer bounds calculations at the first hop are indistinguishable 475 from those at a later hop within the DetNet domain.
On the other 476 hand, the sender may be DetNet-unaware, in which case some 477 conditioning of the DetNet flow may be necessary at the ingress 478 DetNet transit node. 480 This ingress conditioning typically consists of a FIFO with an output 481 regulator that is compatible with the queuing employed by the DetNet 482 transit node on its output port(s). For some queuing methods, ingress conditioning simply 483 requires adding extra buffer space in the queuing subsystem. Ingress 484 conditioning requirements for different queuing methods are mentioned 485 in the sections below describing those queuing methods. 487 4.4. Interspersed DetNet-unaware transit nodes 489 It is sometimes desirable to build a network that has both DetNet- 490 aware transit nodes and DetNet-unaware transit nodes, and for a DetNet 491 flow to traverse an island of DetNet-unaware transit nodes, while 492 still allowing the network to offer delay and congestion loss 493 guarantees. This is possible under certain conditions. 495 In general, when passing through a DetNet-unaware island, the island 496 may cause delay variation in excess of what would be caused by DetNet 497 nodes. That is, the DetNet flow might be "lumpier" after traversing 498 the DetNet-unaware island. DetNet guarantees for delay and buffer 499 requirements can still be calculated and met if and only if the 500 following are true: 502 1. The latency variation across the DetNet-unaware island must be 503 bounded and calculable. 505 2. An ingress conditioning function (Section 4.3) is required at the 506 re-entry to the DetNet-aware domain. This will, at least, 507 require some extra buffering to accommodate the additional delay 508 variation, and thus further increases the delay bound. 510 The ingress conditioning is exactly the same problem as that of a 511 sender at the edge of the DetNet domain. The requirement for bounds 512 on the latency variation across the DetNet-unaware island is 513 typically the most difficult to achieve.
Without such a bound, it is 514 obvious that DetNet cannot deliver its guarantees, so a DetNet- 515 unaware island that cannot offer bounded latency variation cannot be 516 used to carry a DetNet flow. 518 5. Achieving zero congestion loss 520 When the input rate to an output queue exceeds the output rate for a 521 sufficient length of time, the queue must overflow. This is 522 congestion loss, and this is what deterministic networking seeks to 523 avoid. 525 To avoid congestion losses, an upper bound on the backlog present in 526 the regulator and queuing subsystem of Figure 1 must be computed 527 during resource reservation. This bound depends on the set of flows 528 that use these queues, the details of the specific queuing mechanism 529 and an upper bound on the processing delay (4). The queue must 530 contain the packet in transmission plus all other packets that are 531 waiting to be selected for output. 533 A conservative backlog bound that applies to all systems can be 534 derived as follows. 536 The backlog bound is counted in data units (bytes, or words of 537 multiple bytes) that are relevant for buffer allocation. 538 For every flow or an aggregate of flows, we need one buffer 539 space for the packet in transmission, plus space for the packets that 540 are waiting to be selected for output. Excluding transmission and 541 frame preemption times, the packets are waiting in the queue since 542 reception of the last bit, for a duration equal to the processing 543 delay (4) plus the queuing delays (5,6). 545 Let 547 o total_in_rate be the sum of the line rates of all input ports that 548 send traffic to this output port. The value of total_in_rate is 549 in data units (e.g. bytes) per second. 551 o nb_input_ports be the number of input ports that send traffic to this 552 output port 554 o max_packet_length be the maximum packet size for packets that may 555 be sent to this output port. This is counted in data units.
557 o max_delay456 be an upper bound, in seconds, on the sum of the 558 processing delay (4) and the queuing delays (5,6) for any packet 559 at this output port. 561 Then a bound on the backlog of traffic in the queue at this output 562 port is 564 backlog_bound = nb_input_ports * max_packet_length + 565 total_in_rate * max_delay456 567 6. Queuing techniques 569 In this section, for simplicity of delay computation, we assume that 570 the T-SPEC or arrival curve [NetCalBook] for each DetNet flow at 571 source is a leaky bucket. Also, at each DetNet transit node, the 572 service for each queue is abstracted with a guaranteed rate and a 573 latency. 575 6.1. Queuing data model 577 Sophisticated queuing mechanisms are available in Layer 3 (L3, see, 578 e.g., [RFC7806] for an overview). In general, we assume that "Layer 579 3" queues, shapers, meters, etc., are precisely the "regulators" 580 shown in Figure 1. The "queuing subsystems" in this figure are not 581 the province solely of bridges; they are an essential part of any 582 DetNet transit node. As illustrated by numerous implementation 583 examples, some of the "Layer 3" mechanisms described in documents 584 such as [RFC7806] are often integrated, in an implementation, with 585 the "Layer 2" mechanisms also implemented in the same node. An 586 integrated model is needed in order to successfully predict the 587 interactions among the different queuing mechanisms needed in a 588 network carrying both DetNet flows and non-DetNet flows. 590 Figure 3 shows the general model for the flow of packets through the 591 queues of a DetNet transit node. The DetNet packets are mapped to a 592 number of regulators. Here, we assume that the PREOF (Packet 593 Replication, Elimination, and Ordering Functions) are 594 performed before the DetNet packets enter the regulators. All 595 packets are assigned to a set of queues. Queues compete for the 596 selection of packets to be passed to queues in the queuing subsystem.
597 Packets again are selected for output from the queuing subsystem. 599 | 600 +--------------------------------V----------------------------------+ 601 | Queue assignment | 602 +--+------+----------+---------+-----------+-----+-------+-------+--+ 603 | | | | | | | | 604 +--V-+ +--V-+ +--V--+ +--V--+ +--V--+ | | | 605 |Flow| |Flow| |Flow | |Flow | |Flow | | | | 606 | 0 | | 1 | ... | i | | i+1 | ... | n | | | | 607 | reg| | reg| | reg | | reg | | reg | | | | 608 +--+-+ +--+-+ +--+--+ +--+--+ +--+--+ | | | 609 | | | | | | | | 610 +--V------V----------V--+ +--V-----------V--+ | | | 611 | Trans. selection | | Trans. select. | | | | 612 +----------+------------+ +-----+-----------+ | | | 613 | | | | | 614 +--V--+ +--V--+ +--V--+ +--V--+ +--V--+ 615 | out | | out | | out | | out | | out | 616 |queue| |queue| |queue| |queue| |queue| 617 | 1 | | 2 | | 3 | | 4 | | 5 | 618 +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ 619 | | | | | 620 +----------V----------------------V--------------V-------V-------V--+ 621 | Transmission selection | 622 +---------------------------------+---------------------------------+ 623 | 624 V 626 Figure 3: IEEE 802.1Q Queuing Model: Data flow 628 Some relevant mechanisms are hidden in this figure, and are performed 629 in the queue boxes: 631 o Discarding packets because a queue is full. 633 o Discarding packets marked "yellow" by a metering function, in 634 preference to discarding "green" packets. 636 Ideally, neither of these actions are performed on DetNet packets. 637 Full queues for DetNet packets should occur only when a DetNet flow 638 is misbehaving, and the DetNet QoS does not include "yellow" service 639 for packets in excess of committed rate. 641 The queue assignment function can be quite complex, even in a bridge 642 [IEEE8021Q], since the introduction of per-stream filtering and 643 policing ([IEEE8021Q] clause 8.6.5.1). 
In addition to the Layer 2
644 priority expressed in the 802.1Q VLAN tag, a DetNet transit node can
645 utilize any of the following information to assign a packet to a
646 particular queue:
648 o Input port.
650 o Selector based on a rotating schedule that starts at regular,
651 time-synchronized intervals and has nanosecond precision.
653 o MAC addresses, VLAN ID, IP addresses, Layer 4 port numbers, DSCP
654 ([RFC8939], [RFC8964]). (Work items are expected to add MPC and
655 other indicators.)
657 o The queue assignment function can contain metering and policing
658 functions.
660 o MPLS and/or pseudowire ([RFC6658]) labels.
662 The "Transmission selection" function decides which queue is to
663 transfer its oldest packet to the output port when a transmission
664 opportunity arises.
666 6.2. Frame Preemption
668 In [IEEE8021Q] and [IEEE8023], the transmission of a frame can be
669 interrupted by one or more "express" frames, and then the interrupted
670 frame can continue transmission. Frame preemption is modeled as
671 consisting of two MAC/PHY stacks, one for packets that can be
672 interrupted, and one for packets that can interrupt the interruptible
673 packets. Only one layer of frame preemption is supported -- a
674 transmitter cannot have more than one interrupted frame in progress.
675 DetNet flows typically pass through the interrupting MAC. For those
676 DetNet flows with a T-SPEC, a latency bound can be calculated by the
677 methods provided in the following sections, which account for the
678 effect of frame preemption, according to the specific queuing
679 mechanism that is used in DetNet nodes. Best-effort queues pass
680 through the interruptible MAC, and can thus be preempted.
682 6.3. Time Aware Shaper
684 In [IEEE8021Q], the notion of time-scheduling queue gates is
685 described in section 8.6.8.4. On each node, the transmission
686 selection for packets is controlled by time-synchronized gates; each
687 output queue is associated with a gate.
The gates can be either open
688 or closed. The states of the gates are determined by the gate control
689 list (GCL). The GCL specifies the opening and closing times of the
690 gates. The design of the GCL should satisfy the latency
691 upper-bound requirements of all DetNet flows; DetNet flows
692 traversing the network therefore have bounded latency, provided the traffic and
693 nodes are conformant.
695 It should be noted that scheduled traffic service relies on a
696 synchronized network and coordinated GCL configuration. Synthesis of
697 GCLs on multiple nodes in a network is a scheduling problem that considers
698 all DetNet flows traversing the network, which is a non-deterministic
699 polynomial-time hard (NP-hard) problem. Also, at this writing,
700 scheduled traffic service supports no more than eight traffic queues,
701 typically using up to seven priority queues and at least one best-effort
702 queue.
704 6.4. Credit-Based Shaper with Asynchronous Traffic Shaping
706 The queuing model considered here comprises four traffic
707 classes (Definition 3.268 of [IEEE8021Q]): control-data traffic
708 (CDT), class A, class B, and best effort (BE), in decreasing order of
709 priority. Flows of classes A and B are together referred to as AVB
710 flows. This model is a subset of Time-Sensitive Networking, as
711 described next.
713 Based on the timing model described in Figure 1, contention
714 occurs only at the output port of a DetNet transit node; therefore,
715 the focus of the rest of this subsection is on the regulator and
716 queuing subsystem in the output port of a DetNet transit node. The
717 input flows are identified using the information in Section 5.1
718 of [RFC8939] and are then aggregated into eight macro flows based
719 on their traffic classes. We refer to each macro flow as a class.
720 The output port performs aggregate scheduling with eight queues
721 (queuing subsystems): one for CDT, one for class A flows, one for
722 class B flows, and five for BE traffic, denoted BE0-BE4. The
723 queuing policy for each queuing subsystem is FIFO. In addition, each
724 node output port also performs per-flow regulation for AVB flows
725 using an interleaved regulator (IR), called Asynchronous Traffic
726 Shaper [IEEE8021Qcr]. Thus, at each output port of a node, there is
727 one interleaved regulator per input port and per class; the
728 interleaved regulator is mapped to the regulator depicted in
729 Figure 1. The detailed picture of the scheduling and regulation
730 architecture at a node output port is given in Figure 4. The packets
731 received at a node input port for a given class are enqueued in the
732 respective interleaved regulator at the output port. Then, the
733 packets from all the flows, including CDT and BE flows, are enqueued
734 in the queuing subsystem; there is no regulator for the CDT and BE classes.
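The head-of-line behavior of an interleaved regulator can be illustrated with a small sketch: packets of many flows share one FIFO, and only the packet at the head of the queue is tested against its own flow's leaky bucket parameters. This is an illustrative model only, not the [IEEE8021Qcr] state machine; the class names, units (bits and seconds), and return values are assumptions.

```python
from collections import deque

class FlowState:
    """Leaky-bucket state of one flow: rate r (bits/s), bucket size b (bits)."""
    def __init__(self, r, b):
        self.r, self.b = float(r), float(b)
        self.level = float(b)   # tokens available; the bucket starts full
        self.last = 0.0         # time of the last committed release

    def _level_at(self, t):
        # tokens accumulate at rate r, capped at the bucket size b
        return min(self.b, self.level + self.r * (t - self.last))

    def earliest_release(self, t, length):
        """Earliest time >= t at which a packet of `length` bits conforms."""
        lvl = self._level_at(t)
        return t if lvl >= length else t + (length - lvl) / self.r

    def commit(self, t, length):
        """Account for a packet actually released at time t."""
        self.level = self._level_at(t) - length
        self.last = t

class InterleavedRegulator:
    """One FIFO shared by many flows; only the head-of-line packet is
    examined, against the leaky-bucket constraint of its own flow."""
    def __init__(self):
        self.fifo = deque()     # entries: (flow_state, length_bits, arrival_time)

    def push(self, flow, length, arrival):
        self.fifo.append((flow, length, arrival))

    def pop(self, now):
        """Release the head packet if eligible at `now`; otherwise report
        the earliest future time at which it becomes eligible."""
        if not self.fifo:
            return None
        flow, length, arrival = self.fifo[0]
        t = flow.earliest_release(max(now, arrival), length)
        if t <= now:
            self.fifo.popleft()
            flow.commit(now, length)
            return ("sent", length, now)
        return ("wait", length, t)
```

Note that a packet of one flow can be delayed behind the head packet of another flow in the same FIFO; the analysis cited below ([TSNwithATS], [le_boudec2018theory]) shows this does not increase worst-case end-to-end bounds.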
736 +--+ +--+ +--+ +--+
737 | | | | | | | |
738 |IR| |IR| |IR| |IR|
739 | | | | | | | |
740 +-++XXX++-+ +-++XXX++-+
741 | | | |
742 | | | |
743 +---+ +-v-XXX-v-+ +-v-XXX-v-+ +-----+ +-----+ +-----+ +-----+ +-----+
744 | | | | | | |Class| |Class| |Class| |Class| |Class|
745 |CDT| | Class A | | Class B | | BE4 | | BE3 | | BE2 | | BE1 | | BE0 |
746 | | | | | | | | | | | | | | | |
747 +-+-+ +----+----+ +----+----+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+
748 | | | | | | | |
749 | +-v-+ +-v-+ | | | | |
750 | |CBS| |CBS| | | | | |
751 | +-+-+ +-+-+ | | | | |
752 | | | | | | | |
753 +-v--------v-----------v---------v-------V-------v-------v-------v--+
754 | Strict Priority selection |
755 +--------------------------------+----------------------------------+
756 |
757 V
759 Figure 4: The architecture of an output port inside a relay node with
760 interleaved regulators (IRs) and credit-based shaper (CBS)
762 Each of the queuing subsystems for classes A and B contains a Credit-
763 Based Shaper (CBS). The CBS serves a packet from a class according
764 to the available credit for that class. The credit for each class A
765 or B increases based on the idle slope and decreases based on the
766 send slope, both of which are parameters of the CBS (Section 8.6.8.2
767 of [IEEE8021Q]). The CDT and BE0-BE4 flows are served by separate
768 queuing subsystems. Then, packets from all flows are served by a
769 transmission selection subsystem that serves packets from each class
770 based on its priority. All subsystems are non-preemptive.
771 Guarantees for AVB traffic can be provided only if CDT traffic is
772 bounded; it is assumed that the CDT traffic has a leaky bucket arrival
773 curve with two parameters, r_h as rate and b_h as bucket size, i.e.,
774 the amount of bits entering a node within a time interval t is
775 bounded by r_h t + b_h.
777 Additionally, it is assumed that the AVB flows are also regulated at
778 their source according to a leaky bucket arrival curve.
At the source,
779 the traffic satisfies its regulation constraint, i.e., the delay due
780 to the interleaved regulator at the source is ignored.
782 At each DetNet transit node implementing an interleaved regulator,
783 packets of multiple flows are processed in one FIFO queue; the packet
784 at the head of the queue is regulated based on its leaky bucket
785 parameters; it is released at the earliest time at which this is
786 possible without violating the constraint.
788 The regulation parameters for a flow (leaky bucket rate and bucket
789 size) are the same at its source and at all DetNet transit nodes
790 along its path in the case that all clocks are perfect. However,
791 in reality there is clock nonideality throughout the DetNet domain,
792 even with clock synchronization. This phenomenon causes inaccuracy
793 in the rates configured at the regulators, which may lead to network
794 instability. To avoid this, when configuring the regulators, the
795 rates are set to the source rates with some positive margin.
796 [Thomas2020time] describes this issue and provides solutions to it.
798 6.4.1. Delay Bound Calculation
800 A delay bound of the queuing subsystem ((4) in Figure 1) for an AVB
801 flow of class A or B can be computed if the following condition
802 holds:
804 sum of leaky bucket rates of all flows of this class at this
805 transit node <= R, where R is given below for every class.
807 If the condition holds, the delay bound for a flow of class X (A or
808 B) is d_X, calculated as:
810 d_X = T_X + (b_t_X - L_min_X)/R_X - L_min_X/c
812 where L_min_X is the minimum packet length of class X (A or B); c is
813 the output link transmission rate; b_t_X is the sum of the b terms
814 (bucket sizes) for all the flows of class X.
Parameters R_X and
815 T_X are calculated as follows for class A and class B, separately:
817 If the flow is of class A:
819 R_A = I_A (c-r_h)/c
821 T_A = (L_nA + b_h + r_h L_n/c)/(c-r_h)
823 where L_nA is the maximum packet length of class B and BE packets;
824 L_n is the maximum packet length of classes A, B, and BE; and I_A is
825 the idle slope of the CBS for class A (see Section 6.4.2).
826 If the flow is of class B:
828 R_B = I_B (c-r_h)/c
830 T_B = (L_BE + L_A + L_nA I_A/(c-I_A) + b_h + r_h L_n/c)/(c-r_h)
832 where L_A is the maximum packet length of class A; L_BE is the
833 maximum packet length of class BE; and I_B is the idle slope of the
834 CBS for class B.
835 Then, an end-to-end delay bound of class X (A or B) is calculated by
836 the formula in Section 4.2.2, where for Cij:
838 Cij = d_X
840 More information on delay analysis in such a DetNet transit node is
841 provided in [TSNwithATS].
843 6.4.2. Flow Admission
845 The delay bound calculation requires some information about each
846 node. For each node, it is required to know the idle slope of the CBS
847 for each of classes A and B (I_A and I_B), as well as the transmission
848 rate of the output link (c). Besides, it is necessary to have
849 information on each class, i.e., the maximum packet lengths of classes A,
850 B, and BE. Moreover, the leaky bucket parameters of CDT (r_h, b_h)
851 should be known. To admit flows of classes A and B, their
852 delay requirements should be guaranteed not to be violated. As
853 described in Section 3.1, the two problems, static and dynamic, are
854 addressed separately. In either of the problems, the rate and delay
855 should be guaranteed. Thus,
857 The static admission control:
858 The leaky bucket parameters of all AVB flows are known;
859 therefore, for each AVB flow f, a delay bound can be
860 calculated. The computed delay bound for every AVB flow
861 should not be more than its delay requirement. Moreover, the
862 sum of the rates (r_f) of the flows of each class should not be more than
863 the rate allocated to that class (R).
If these two
864 conditions hold, the configuration is declared admissible.
866 The dynamic admission control:
867 For dynamic admission control, we allocate to every node and
868 class A or B a static value for the rate (R) and the maximum
869 burstiness (b_t). In addition, for every node and every
870 class A and B, two counters are maintained:
872 R_acc is equal to the sum of the leaky-bucket rates of all
873 flows of this class already admitted at this node; at all
874 times, we must have:
876 R_acc <= R, (Eq. 1)
878 b_acc is equal to the sum of the bucket sizes of all flows
879 of this class already admitted at this node; at all times,
880 we must have:
882 b_acc <= b_t. (Eq. 2)
884 A new AVB flow is admitted at this node if Eqs. (1) and (2)
885 continue to be satisfied after adding its leaky bucket rate
886 and bucket size to R_acc and b_acc. An AVB flow is admitted
887 in the network if it is admitted at all nodes along its
888 path. When this happens, all variables R_acc and b_acc along
889 its path must be incremented to reflect the addition of the
890 flow. Similarly, when an AVB flow leaves the network, all
891 variables R_acc and b_acc along its path must be decremented
892 to reflect the removal of the flow.
894 The choice of the static values of R and b_t at all nodes and classes
895 must be done in a prior configuration phase; R controls the bandwidth
896 allocated to this class at this node, while b_t affects the delay bound and
897 the buffer requirement. R must satisfy the constraints given in
898 Annex L.1 of [IEEE8021Q].
900 6.5. IntServ
902 Integrated Services (IntServ) is an architecture that specifies the
903 elements needed to guarantee quality of service (QoS) on networks [RFC2212].
905 The flow, at the source, has a leaky bucket arrival curve with two
906 parameters, r as rate and b as bucket size, i.e., the amount of bits
907 entering a node within a time interval t is bounded by r t + b.
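As an illustration of this constraint, a packet trace conforms to a leaky bucket (r, b) if, over every interval between two packet arrivals, the bits received do not exceed b + r t. The sketch below checks the constraint at packet arrival instants, which suffices when arrivals are instantaneous; the function name and trace format are assumptions, and the O(n^2) scan is for clarity rather than efficiency.

```python
def conforms_leaky_bucket(trace, r, b):
    """trace: list of (arrival_time_s, bits), sorted by time.
    Returns True iff for every pair of packets i <= j, the bits arriving
    in [t_i, t_j] are bounded by b + r * (t_j - t_i)."""
    n = len(trace)
    for i in range(n):
        total = 0
        for j in range(i, n):
            total += trace[j][1]
            if total > b + r * (trace[j][0] - trace[i][0]):
                return False
    return True
```

For example, with r = 100 bits/s and b = 200 bits, a 200-bit packet at t = 0 followed by a 100-bit packet at t = 1 conforms, while two 200-bit packets only 0.5 s apart do not.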
909 If resource reservation on a path is applied, a node provides a
910 guaranteed rate R and a maximum service latency T. This can be
911 interpreted as meaning that the bits might have to wait up to T before
912 being served at a rate greater than or equal to R. The delay bound of
913 the flow traversing the node is T + b / R.
915 Consider an IntServ path including a sequence of nodes, where the
916 i-th node provides a guaranteed rate R_i and a maximum service latency
917 T_i. Then, the end-to-end delay bound for a flow on this path can be
918 calculated as sum(T_i) + b / min(R_i).
920 If more information about the flow is known, e.g., the peak rate, the
921 delay bound is more complicated; the details are available in
922 Section 1.4.1 of [NetCalBook].
924 6.6. Cyclic Queuing and Forwarding
926 Annex T of [IEEE8021Q] describes Cyclic Queuing and Forwarding (CQF),
927 which provides bounded latency and zero congestion loss using the
928 time-scheduled gates of [IEEE8021Q] section 8.6.8.4. For a given
929 class of DetNet flows, a set of two or more buffers is provided at
930 the output queue layer of Figure 3. A cycle time T_c is configured
931 for each class of DetNet flows c, and all of the buffer sets in a
932 class of DetNet flows swap buffers simultaneously throughout the
933 DetNet domain at that cycle rate, all in phase. In such a mechanism,
934 the regulator mentioned in Figure 1 is not required.
936 In the case of two-buffer CQF, each class of DetNet flows c has two
937 buffers, namely buffer1 and buffer2. In a cycle (i), when buffer1
938 accumulates received packets from the node's reception ports, buffer2
939 transmits the packets stored during the previous cycle (i-1).
940 In the next cycle (i+1), buffer2 stores the received packets and
941 buffer1 transmits the packets received in cycle (i). The duration of
942 each cycle is T_c.
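The two-buffer swap described above can be sketched as follows. This is a toy model of the alternating buffer roles only, not an implementation of Annex T; the class and method names are illustrative.

```python
class TwoBufferCQF:
    """Two buffers for one class: in each cycle, one buffer gathers the
    packets received from the reception ports while the other transmits
    the packets stored during the previous cycle; the roles swap at every
    cycle boundary."""
    def __init__(self, T_c):
        self.T_c = T_c           # cycle duration in seconds
        self.buffers = [[], []]
        self.gathering = 0       # index of the buffer currently receiving

    def receive(self, packet):
        self.buffers[self.gathering].append(packet)

    def end_of_cycle(self):
        """Swap buffer roles; return the packets to transmit during the
        new cycle (i.e., those received during the cycle that just ended)."""
        out_idx = self.gathering
        self.gathering ^= 1
        out, self.buffers[out_idx] = self.buffers[out_idx], []
        return out
```

Running this model, packets received in cycle (i) are exactly the ones handed out for transmission in cycle (i+1), which is the property the delay bounds below rely on.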
944 The per-hop latency is trivially determined by the cycle time T_c:
945 a packet transmitted from a node at cycle (i) is transmitted
946 from the next node at cycle (i+1). Hence, the maximum delay
947 experienced by a given packet is from the beginning of cycle (i) to
948 the end of cycle (i+1), or 2T_c; also, the minimum delay is from the
949 end of cycle (i) to the beginning of cycle (i+1), i.e., zero. Then,
950 if the packet traverses h hops, the maximum delay is:
952 (h+1) T_c
954 and the minimum delay is:
956 (h-1) T_c
958 which gives a latency variation of 2T_c.
960 The cycle length T_c should be carefully chosen; it needs to be large
961 enough to accommodate all the DetNet traffic, plus at least one
962 maximum-length interfering packet, that can be received within one cycle.
963 Also, the value of T_c includes a time interval, called dead time
964 (DT), which is the sum of the delays 1, 2, 3, and 4 defined in Figure 1.
965 The value of DT guarantees that the last packet of one cycle in a
966 node is fully delivered to a buffer of the next node in the same
967 cycle. A two-buffer CQF is recommended if DT is small compared to
968 T_c. For a large DT, CQF with more buffers can be used, and a cycle
969 identification label can be added to the packets.
971 Ingress conditioning (Section 4.3) may be required if the source of a
972 DetNet flow does not, itself, employ CQF. Since there are no per-
973 flow parameters in the CQF technique, per-hop configuration is not
974 required in the CQF forwarding nodes.
976 7. Example application on a DetNet IP network
978 This section provides an example application of this document on a
979 DetNet-enabled IP network. Consider Figure 5, taken from Section 3
980 of [RFC8939], which shows a simple IP network:
982 o End-system 1 implements IntServ, as in Section 6.5, between
983 itself and relay node 1.
985 o Sub-network 1 is a TSN network.
The nodes in sub-network 1
986 implement credit-based shapers with asynchronous traffic shaping,
987 as in Section 6.4.
989 o Sub-network 2 is a TSN network. The nodes in sub-network 2
990 implement cyclic queuing and forwarding with two buffers, as in
991 Section 6.6.
993 o Relay nodes 1 and 2 implement credit-based shapers with
994 asynchronous traffic shaping, as in Section 6.4. They also perform
995 the aggregation and mapping of IP DetNet flows to TSN streams
996 (Section 4.4 of [I-D.ietf-detnet-ip-over-tsn]).
998 DetNet IP Relay Relay DetNet IP
999 End-System Node 1 Node 2 End-System
1000 1 2
1001 +----------+ +----------+
1002 | Appl. |<------------ End-to-End Service ----------->| Appl. |
1003 +----------+ ............ ........... +----------+
1004 | Service |<-: Service :-- DetNet flow --: Service :->| Service |
1005 +----------+ +----------+ +----------+ +----------+
1006 |Forwarding| |Forwarding| |Forwarding| |Forwarding|
1007 +--------.-+ +-.------.-+ +-.---.----+ +-------.--+
1008 : Link : \ ,-----. / \ ,-----. /
1009 +......+ +----[ Sub- ]----+ +-[ Sub- ]-+
1010 [Network] [Network]
1011 `--1--' `--2--'
1013 |<--------------------- DetNet IP --------------------->|
1015 |<--- d1 --->|<--------------- d2_p --------------->|<-- d3_p -->|
1017 Figure 5: A Simple DetNet-Enabled IP Network, taken from RFC8939
1019 Consider a fully centralized control plane for the network of
1020 Figure 5, as described in Section 3.2 of
1021 [I-D.ietf-detnet-controller-plane-framework]. Suppose end-system 1
1022 wants to create a DetNet flow with a given traffic specification, destined to
1023 end-system 2, with an end-to-end delay bound requirement D. Then,
1024 the control plane receives a flow establishment request and
1025 calculates a number of valid paths through the network (Section 3.2
1026 of [I-D.ietf-detnet-controller-plane-framework]). To select a proper
1027 path, the control plane needs to compute an end-to-end delay bound
1028 for each selected path p.
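The per-path check performed by the control plane can be sketched as follows, using the segment bounds d1, d2_p, and d3_p shown in Figure 5: each candidate path is admitted only if the sum of its per-segment delay bounds meets the requirement D. The function name and path representation are assumptions made for illustration.

```python
def select_path(paths, D):
    """paths: list of (name, d1, d2, d3) per-segment delay bounds, in seconds,
    for each candidate path. Return the name of the first path whose
    end-to-end bound d1 + d2 + d3 satisfies the requirement D, or None if no
    candidate qualifies (the control plane may then compute new paths or
    reject the flow)."""
    for name, d1, d2, d3 in paths:
        if d1 + d2 + d3 <= D:
            return name
    return None
```

A real controller would compute d1, d2_p, and d3_p per path with the methods of Sections 6.5, 6.4, and 6.6, and might prefer the path with the smallest bound rather than the first feasible one.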
1030 The end-to-end delay bound is d1 + d2_p + d3_p, where d1 is the delay
1031 bound from end-system 1 to the entrance of relay node 1, d2_p is the
1032 delay bound for path p from relay node 1 to the entrance of the first
1033 node in sub-network 2, and d3_p is the delay bound of path p from the
1034 first node in sub-network 2 to end-system 2. The computation of d1
1035 is explained in Section 6.5. Since relay node 1, sub-network 1,
1036 and relay node 2 implement aggregate queuing, we use the results in
1037 Section 4.2.2 and Section 6.4 to compute d2_p for the path p.
1038 Finally, d3_p is computed using the delay bound computation of
1039 Section 6.6. Any path p such that d1 + d2_p + d3_p <= D satisfies
1040 the delay bound requirement of the flow. If there is no such path,
1041 the control plane may compute a new set of valid paths and redo the
1042 delay bound computation, or it may not admit the DetNet flow.
1044 As soon as the control plane selects a path that satisfies the delay
1045 bound constraint, it allocates and reserves the resources in the path
1046 for the DetNet flow (Section 4.2 of
1047 [I-D.ietf-detnet-controller-plane-framework]).
1049 8. Security considerations
1051 Detailed security considerations for DetNet are cataloged in
1052 [I-D.ietf-detnet-security], and more general security considerations
1053 are described in [RFC8655].
1055 Security aspects that are unique to DetNet are those whose aim is to
1056 provide the specific QoS aspects of DetNet, specifically bounded end-
1057 to-end delivery latency and zero congestion loss. Achieving such
1058 loss rates and bounded latency may not be possible in the face of a
1059 highly capable adversary, such as the one envisioned by the Internet
1060 Threat Model of BCP 72 [RFC3552] that can arbitrarily drop or delay
1061 any or all traffic.
In order to present meaningful security
1062 considerations, we consider a somewhat weaker attacker who does not
1063 control the physical links of the DetNet domain but may have the
1064 ability to control a network node within the boundary of the DetNet
1065 domain.
1067 A security consideration for this document is to secure the resource
1068 reservation signaling for DetNet flows. Any forgery or manipulation of
1069 packets during reservation may lead to the flow not being admitted or
1070 to delay bound violations. Security mitigation for this issue is
1071 described in Section 7.6 of [I-D.ietf-detnet-security].
1073 9. IANA considerations
1075 This document has no IANA actions.
1077 10. References
1079 10.1. Normative References
1081 [RFC2212] Shenker, S., Partridge, C., and R. Guerin, "Specification
1082 of Guaranteed Quality of Service", RFC 2212,
1083 DOI 10.17487/RFC2212, September 1997,
1084 .
1086 [RFC6658] Bryant, S., Ed., Martini, L., Swallow, G., and A. Malis,
1087 "Packet Pseudowire Encapsulation over an MPLS PSN",
1088 RFC 6658, DOI 10.17487/RFC6658, July 2012,
1089 .
1091 [RFC7806] Baker, F. and R. Pan, "On Queuing, Marking, and Dropping",
1092 RFC 7806, DOI 10.17487/RFC7806, April 2016,
1093 .
1095 [RFC8655] Finn, N., Thubert, P., Varga, B., and J. Farkas,
1096 "Deterministic Networking Architecture", RFC 8655,
1097 DOI 10.17487/RFC8655, October 2019,
1098 .
1100 [RFC8939] Varga, B., Ed., Farkas, J., Berger, L., Fedyk, D., and S.
1101 Bryant, "Deterministic Networking (DetNet) Data Plane:
1102 IP", RFC 8939, DOI 10.17487/RFC8939, November 2020,
1103 .
1105 [RFC8964] Varga, B., Ed., Farkas, J., Berger, L., Malis, A., Bryant,
1106 S., and J. Korhonen, "Deterministic Networking (DetNet)
1107 Data Plane: MPLS", RFC 8964, DOI 10.17487/RFC8964, January
1108 2021, .
1110 10.2. Informative References
1112 [bennett2002delay]
1113 J.C.R. Bennett, K. Benson, A. Charny, W.F. Courtney, and
1114 J.-Y.
Le Boudec, "Delay Jitter Bounds and Packet Scale 1115 Rate Guarantee for Expedited Forwarding", 1116 . 1118 [charny2000delay] 1119 A. Charny and J.-Y. Le Boudec, "Delay Bounds in a Network 1120 with Aggregate Scheduling", . 1123 [I-D.ietf-detnet-controller-plane-framework] 1124 A. Malis, X. Geng, M. Chen, F. Qin, and B. Varga, 1125 "Deterministic Networking (DetNet) Controller Plane 1126 Framework draft-ietf-detnet-controller-plane-framework- 1127 00", . 1130 [I-D.ietf-detnet-ip-over-tsn] 1131 B. Varga, J. Farkas, A. Malis, and S. Bryant, "DetNet Data 1132 Plane: IP over IEEE 802.1 Time Sensitive Networking (TSN) 1133 draft-ietf-detnet-ip-over-tsn-07", 1134 . 1137 [I-D.ietf-detnet-security] 1138 E. Grossman, T. Mizrahi, and A. Hacker, "Deterministic 1139 Networking (DetNet) Security Considerations draft-ietf- 1140 detnet-security-16", . 1143 [IEEE8021Q] 1144 IEEE 802.1, "IEEE Std 802.1Q-2018: IEEE Standard for Local 1145 and metropolitan area networks - Bridges and Bridged 1146 Networks", 2018, 1147 . 1149 [IEEE8021Qcr] 1150 IEEE 802.1, "IEEE P802.1Qcr: IEEE Draft Standard for Local 1151 and metropolitan area networks - Bridges and Bridged 1152 Networks - Amendment: Asynchronous Traffic Shaping", 2017, 1153 . 1155 [IEEE8021TSN] 1156 IEEE 802.1, "IEEE 802.1 Time-Sensitive Networking (TSN) 1157 Task Group", . 1159 [IEEE8023] 1160 IEEE 802.3, "IEEE Std 802.3-2018: IEEE Standard for 1161 Ethernet", 2018, 1162 . 1164 [le_boudec2018theory] 1165 J.-Y. Le Boudec, "A Theory of Traffic Regulators for 1166 Deterministic Networks with Application to Interleaved 1167 Regulators", 1168 . 1170 [NetCalBook] 1171 J.-Y. Le Boudec and P. Thiran, "Network calculus: a theory 1172 of deterministic queuing systems for the internet", 2001, 1173 . 1175 [RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC 1176 Text on Security Considerations", BCP 72, RFC 3552, 1177 DOI 10.17487/RFC3552, July 2003, 1178 . 
1180 [RFC8578] Grossman, E., Ed., "Deterministic Networking Use Cases", 1181 RFC 8578, DOI 10.17487/RFC8578, May 2019, 1182 . 1184 [Specht2016UBS] 1185 J. Specht and S. Samii, "Urgency-Based Scheduler for Time- 1186 Sensitive Switched Ethernet Networks", 1187 . 1189 [Thomas2020time] 1190 L. Thomas and J.-Y. Le Boudec, "On Time Synchronization 1191 Issues in Time-Sensitive Networks with Regulators and 1192 Nonideal Clocks", 1193 . 1195 [TSNwithATS] 1196 E. Mohammadpour, E. Stai, M. Mohiuddin, and J.-Y. Le 1197 Boudec, "End-to-end Latency and Backlog Bounds in Time- 1198 Sensitive Networking with Credit Based Shapers and 1199 Asynchronous Traffic Shaping", 1200 . 1202 Authors' Addresses 1204 Norman Finn 1205 Huawei Technologies Co. Ltd 1206 3101 Rio Way 1207 Spring Valley, California 91977 1208 US 1210 Phone: +1 925 980 6430 1211 Email: nfinn@nfinnconsulting.com 1213 Jean-Yves Le Boudec 1214 EPFL 1215 IC Station 14 1216 Lausanne EPFL 1015 1217 Switzerland 1219 Email: jean-yves.leboudec@epfl.ch 1221 Ehsan Mohammadpour 1222 EPFL 1223 IC Station 14 1224 Lausanne EPFL 1015 1225 Switzerland 1227 Email: ehsan.mohammadpour@epfl.ch 1228 Jiayi Zhang 1229 Huawei Technologies Co. Ltd 1230 Q27, No.156 Beiqing Road 1231 Beijing 100095 1232 China 1234 Email: zhangjiayi11@huawei.com 1236 Balazs Varga 1237 Ericsson 1238 Konyves Kalman krt. 11/B 1239 Budapest 1097 1240 Hungary 1242 Email: balazs.a.varga@ericsson.com 1244 Janos Farkas 1245 Ericsson 1246 Konyves Kalman krt. 11/B 1247 Budapest 1097 1248 Hungary 1250 Email: janos.farkas@ericsson.com