DetNet                                                           N. Finn
Internet-Draft                               Huawei Technologies Co. Ltd
Intended status: Informational                            J-Y. Le Boudec
Expires: November 18, 2021                               E. Mohammadpour
                                                                    EPFL
                                                                J. Zhang
                                             Huawei Technologies Co. Ltd
                                                                B. Varga
                                                               J. Farkas
                                                                Ericsson
                                                            May 17, 2021

                         DetNet Bounded Latency
                  draft-ietf-detnet-bounded-latency-06

Abstract

   This document references specific queuing mechanisms, defined in
   other documents, that can be used to control packet transmission at
   each output port and achieve the DetNet qualities of service.
   This document presents a timing model for sources, destinations,
   and the DetNet transit nodes that relay packets that is applicable
   to all of those referenced queuing mechanisms.  Using the model
   presented in this document, it should be possible for an
   implementor, user, or standards development organization to select
   a particular set of queuing mechanisms for each device in a DetNet
   network, and to select a resource reservation algorithm for that
   network, so that those elements can work together to provide the
   DetNet service.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on November 18, 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.
  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   3
   2.  Terminology and Definitions . . . . . . . . . . . . . . . .   4
   3.  DetNet bounded latency model  . . . . . . . . . . . . . . .   4
     3.1.  Flow admission  . . . . . . . . . . . . . . . . . . . .   4
       3.1.1.  Static latency calculation  . . . . . . . . . . . .   4
       3.1.2.  Dynamic latency calculation . . . . . . . . . . . .   5
     3.2.  Relay node model  . . . . . . . . . . . . . . . . . . .   6
   4.  Computing End-to-end Delay Bounds . . . . . . . . . . . . .   8
     4.1.  Non-queuing delay bound . . . . . . . . . . . . . . . .   8
     4.2.  Queuing delay bound . . . . . . . . . . . . . . . . . .   9
       4.2.1.  Per-flow queuing mechanisms . . . . . . . . . . . .   9
       4.2.2.  Aggregate queuing mechanisms  . . . . . . . . . . .   9
     4.3.  Ingress considerations  . . . . . . . . . . . . . . . .  10
     4.4.  Interspersed DetNet-unaware transit nodes . . . . . . .  11
   5.  Achieving zero congestion loss  . . . . . . . . . . . . . .  11
   6.  Queuing techniques  . . . . . . . . . . . . . . . . . . . .  12
     6.1.  Queuing data model  . . . . . . . . . . . . . . . . . .  13
     6.2.  Frame Preemption  . . . . . . . . . . . . . . . . . . .  15
     6.3.  Time Aware Shaper . . . . . . . . . . . . . . . . . . .  15
     6.4.  Credit-Based Shaper with Asynchronous Traffic Shaping .  16
       6.4.1.  Delay Bound Calculation . . . . . . . . . . . . . .  18
       6.4.2.  Flow Admission  . . . . . . . . . . . . . . . . . .  19
     6.5.  Guaranteed-Service IntServ  . . . . . . . . . . . . . .  20
     6.6.  Cyclic Queuing and Forwarding . . . . . . . . . . . . .  21
   7.  Example application on DetNet IP network  . . . . . . . . .  22
   8.  Security considerations . . . . . . . . . . . . . . . . . .  24
   9.  IANA considerations . . . . . . . . . . . . . . . . . . . .  24
   10. References  . . . . . . . . . . . . . . . . . . . . . . . .  24
     10.1.  Normative References . . . . . . . . . . . . . . . . .  24
     10.2.
  Informative References . . . . . . . . . . . . . . . . .  25
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . .  27

1.  Introduction

   The ability for IETF Deterministic Networking (DetNet) or IEEE
   802.1 Time-Sensitive Networking (TSN, [IEEE8021TSN]) to provide the
   DetNet services of bounded latency and zero congestion loss depends
   upon A) configuring and allocating network resources for the
   exclusive use of DetNet flows; B) identifying, in the data plane,
   the resources to be utilized by any given packet; and C) the
   detailed behavior of those resources, especially transmission queue
   selection, so that latency bounds can be reliably assured.

   As explained in [RFC8655], DetNet flows are characterized by 1) a
   maximum bandwidth, guaranteed either by the transmitter or by
   strict input metering; and 2) a requirement for a guaranteed
   worst-case end-to-end latency.  That latency guarantee, in turn,
   provides the opportunity for the network to supply enough buffer
   space to guarantee zero congestion loss.

   To be used by the applications identified in [RFC8578], it must be
   possible to calculate, before the transmission of a DetNet flow
   commences, both the worst-case end-to-end network latency and the
   amount of buffer space required at each hop to ensure against
   congestion loss.

   This document references specific queuing mechanisms, defined in
   other documents, that can be used to control packet transmission at
   each output port and achieve the DetNet qualities of service.  This
   document presents a timing model for sources, destinations, and the
   DetNet transit nodes that relay packets that is applicable to all
   of those referenced queuing mechanisms.  It furthermore provides
   end-to-end delay bound and backlog bound computations for such
   mechanisms that can be used by the control plane to provide DetNet
   QoS.
   Using the model presented in this document, it should be possible
   for an implementor, user, or standards development organization to
   select a particular set of queuing mechanisms for each device in a
   DetNet network, and to select a resource reservation algorithm for
   that network, so that those elements can work together to provide
   the DetNet service.  Section 7 provides an example application of
   this document to a DetNet IP network with a combination of
   different queuing mechanisms.

   This document does not specify any resource reservation protocol or
   control plane function.  It does not describe all of the
   requirements for that protocol or control plane function.  It does
   describe requirements for such resource reservation methods, and
   for queuing mechanisms that, if met, will enable them to work
   together.

2.  Terminology and Definitions

   This document uses the terms defined in [RFC8655].

3.  DetNet bounded latency model

3.1.  Flow admission

   This document assumes that the following paradigm is used to admit
   DetNet flows:

   1.  Perform any configuration required by the DetNet transit nodes
       in the network for aggregates of DetNet flows.  This
       configuration is done beforehand, and not tied to any
       particular DetNet flow.

   2.  Characterize the new DetNet flow, particularly in terms of
       required bandwidth.

   3.  Establish the path that the DetNet flow will take through the
       network from the source to the destination(s).  This can be a
       point-to-point or a point-to-multipoint path.

   4.  Compute the worst-case end-to-end latency for the DetNet flow,
       using one of the methods below (Section 3.1.1, Section 3.1.2).
       In the process, determine whether sufficient resources are
       available for the DetNet flow to guarantee the required latency
       and to provide zero congestion loss.

   5.  Assuming that the resources are available, commit those
       resources to the DetNet flow.
       This may or may not require adjusting the parameters that
       control the filtering and/or queuing mechanisms at each hop
       along the DetNet flow's path.

   This paradigm can be implemented using peer-to-peer protocols or
   using a central controller.  In some situations, a lack of
   resources can require backtracking and recursing through this list.

   Issues such as service preemption of a DetNet flow in favor of
   another, when resources are scarce, are not considered here.  Also
   not addressed is the question of how to choose the path to be taken
   by a DetNet flow.

3.1.1.  Static latency calculation

   The static problem:
      Given a network and a set of DetNet flows, compute an end-to-end
      latency bound (if computable) for each DetNet flow, and compute
      the resources, particularly buffer space, required in each
      DetNet transit node to achieve zero congestion loss.

   In this calculation, all of the DetNet flows are known before the
   calculation commences.  This problem is of interest to relatively
   static networks, or static parts of larger networks.  It provides
   bounds on delay and buffer size.  The calculations can be extended
   to provide global optimizations, such as altering the path of one
   DetNet flow in order to make resources available to another DetNet
   flow with tighter constraints.

   The static latency calculation is not limited only to static
   networks; the entire calculation for all DetNet flows can be
   repeated each time a new DetNet flow is created or deleted.  If
   some already-established DetNet flow would be pushed beyond its
   latency requirements by the new DetNet flow, then the new DetNet
   flow can be refused, or some other suitable action taken.

   This calculation may be more difficult to perform than the dynamic
   calculation (Section 3.1.2), because the DetNet flows passing
   through one port on a DetNet transit node affect each other's
   latency.
   The effects can even be circular, from a node A to B to C and back
   to A.  On the other hand, the static calculation can often
   accommodate queuing methods, such as transmission selection by
   strict priority, that are unsuitable for the dynamic calculation.

3.1.2.  Dynamic latency calculation

   The dynamic problem:
      Given a network whose maximum capacity for DetNet flows is
      bounded by a set of static configuration parameters applied to
      the DetNet transit nodes, and given just one DetNet flow,
      compute the worst-case end-to-end latency that can be
      experienced by that flow, no matter what other DetNet flows
      (within the network's configured parameters) might be created or
      deleted in the future.  Also, compute the resources,
      particularly buffer space, required in each DetNet transit node
      to achieve zero congestion loss.

   This calculation is dynamic, in the sense that DetNet flows can be
   added or deleted at any time, with a minimum of computation effort,
   and without affecting the guarantees already given to other DetNet
   flows.

   The choice of queuing methods is critical to the applicability of
   the dynamic calculation.  Some queuing methods (e.g., CQF,
   Section 6.6) make it easy to configure bounds on the network's
   capacity, and to make independent calculations for each DetNet
   flow.  Some other queuing methods (e.g., strict priority with the
   credit-based shaper defined in [IEEE8021Q] section 8.6.8.2) can be
   used for dynamic DetNet flow creation, but yield poorer latency and
   buffer space guarantees than when that same queuing method is used
   for static DetNet flow creation (Section 3.1.1).

3.2.  Relay node model

   A model for the operation of a DetNet transit node is required in
   order to define the latency and buffer calculations.
   In Figure 1 we see a breakdown of the per-hop latency experienced
   by a packet passing through a DetNet transit node, in terms that
   are suitable for computing both hop-by-hop latency and per-hop
   buffer requirements.

      DetNet transit node A             DetNet transit node B
   +-------------------------+      +------------------------+
   |              Queuing    |      |             Queuing    |
   |  Regulator  subsystem   |      |  Regulator subsystem   |
   |  +-+-+-+-+  +-+-+-+-+   |      |  +-+-+-+-+ +-+-+-+-+   |
-->+  | | | | |  | | | | | + +----->+  | | | | | | | | | | + +--->
   |  +-+-+-+-+  +-+-+-+-+   |      |  +-+-+-+-+ +-+-+-+-+   |
   |                         |      |                        |
   +-------------------------+      +------------------------+
   |<->|<------>|<-------->|<->|<-->|<->|<------>|<------>|<->|<--
   2,3      4        5      6    1   2,3     4       5     6    1  2,3

      1: Output delay             4: Processing delay
      2: Link delay               5: Regulation delay
      3: Frame preemption delay   6: Queuing delay

                 Figure 1: Timing model for DetNet or TSN

   In Figure 1, we see two DetNet transit nodes that are connected via
   a link.  In this model, the only queues that we deal with
   explicitly are attached to the output port; other queues are
   modeled as variations in the other delay times.  (E.g., an input
   queue could be modeled as either a variation in the link delay (2)
   or the processing delay (4).)  There are six delays that a packet
   can experience from hop to hop.

   1.  Output delay
       The time taken from the selection of a packet for output from a
       queue to the transmission of the first bit of the packet on the
       physical link.  If the queue is directly attached to the
       physical port, output delay can be a constant.  But, in many
       implementations, the queuing mechanism in a forwarding ASIC is
       separated from a multi-port MAC/PHY, in a second ASIC, by a
       multiplexed connection.  This causes variations in the output
       delay that are hard for the forwarding node to predict or
       control.

   2.
  Link delay
       The time taken from the transmission of the first bit of the
       packet to the reception of the last bit, assuming that the
       transmission is not suspended by a frame preemption event.
       This delay has two components: the first-bit-out to
       first-bit-in delay, and the first-bit-in to last-bit-in delay
       that varies with packet size.  The former is typically measured
       by the Precision Time Protocol and is constant (see [RFC8655]).
       However, a virtual "link" could exhibit a variable link delay.

   3.  Frame preemption delay
       If the packet is interrupted in order to transmit another
       packet or packets (e.g., [IEEE8023] clause 99 frame
       preemption), an arbitrary delay can result.

   4.  Processing delay
       This delay covers the time from the reception of the last bit
       of the packet to the time the packet is enqueued in the
       regulator (or in the queuing subsystem, if there is no
       regulation).  This delay can be variable, and depends on the
       details of the operation of the forwarding node.

   5.  Regulator delay
       This is the time spent from the insertion of the last bit of a
       packet into a regulation queue until the time the packet is
       declared eligible according to its regulation constraints.  We
       assume that this time can be calculated based on the details of
       the regulation policy.  If there is no regulation, this time is
       zero.

   6.  Queuing subsystem delay
       This is the time spent by a packet from being declared eligible
       until being selected for output on the next link.  We assume
       that this time is calculable based on the details of the
       queuing mechanism.  If there is no regulation, this time is
       from the insertion of the packet into a queue until it is
       selected for output on the next link.
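   As a minimal illustration (not part of any referenced
   specification), the per-hop delay can be modeled as the sum of
   bounds on the six components above; summing the per-hop bounds over
   all hops gives a conservative end-to-end bound (Section 4 shows
   that tighter bounds are often possible).  All function names and
   numeric values below are assumptions chosen only for illustration.

```python
# Illustrative sketch only: per-hop delay bound as the sum of upper
# bounds on the six delay components of Figure 1 (all in seconds).

def per_hop_delay_bound(output, link, preemption, processing,
                        regulation, queuing):
    """Sum of bounds on delays 1-6 of Figure 1 for one hop."""
    return output + link + preemption + processing + regulation + queuing

def conservative_end_to_end_bound(hops):
    """Conservative end-to-end bound: the sum of per-hop bounds.
    'hops' is a list of 6-tuples of per-component bounds."""
    return sum(per_hop_delay_bound(*hop) for hop in hops)

# Example: two identical hops with 1 us output delay, 5 us link delay,
# no preemption, 2 us processing, no regulation, 20 us queuing bound
# (28 us per hop).
hop = (1e-6, 5e-6, 0.0, 2e-6, 0.0, 20e-6)
print(conservative_end_to_end_bound([hop, hop]))  # about 56 microseconds
```

   Note that this simple summation is only the fallback; Section 4.2
   explains when the end-to-end bound can be made tighter than the sum
   of per-hop bounds.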
   Not shown in Figure 1 are the other output queues that we presume
   are also attached to the same output port as the queue shown, and
   against which this shown queue competes for transmission
   opportunities.

   The initial and final measurement point in this analysis (that is,
   the definition of a "hop") is the point at which a packet is
   selected for output.  In general, any queue selection method that
   is suitable for use in a DetNet network includes a detailed
   specification as to exactly when packets are selected for
   transmission.  Any variations in any of the delay times 1-4 result
   in a need for additional buffers in the queue.  If all delays 1-4
   are constant, then any variation in the time at which packets are
   inserted into a queue depends entirely on the timing of packet
   selection in the previous node.  If the delays 1-4 are not
   constant, then additional buffers are required in the queue to
   absorb these variations.  Thus:

   o  Variations in output delay (1) require buffers to absorb that
      variation in the next hop, so the output delay variations of the
      previous hop (on each input port) must be known in order to
      calculate the buffer space required on this hop.

   o  Variations in processing delay (4) require additional output
      buffers in the queues of that same DetNet transit node.
      Depending on the details of the queuing subsystem delay (6)
      calculations, these variations need not be visible outside the
      DetNet transit node.

4.  Computing End-to-end Delay Bounds

4.1.  Non-queuing delay bound

   End-to-end delay bounds can be computed using the delay model in
   Section 3.2.  Here, it is important to be aware that for several
   queuing mechanisms, the end-to-end delay bound is less than the sum
   of the per-hop delay bounds.
   An end-to-end delay bound for one DetNet flow can be computed as

      end_to_end_delay_bound = non_queuing_delay_bound +
                               queuing_delay_bound

   The two terms in the above formula are computed as follows.

   First, at the h-th hop along the path of this DetNet flow, obtain
   an upper bound per-hop_non_queuing_delay_bound[h] on the sum of the
   bounds over the delays 1,2,3,4 of Figure 1.  These upper bounds are
   expected to depend on the specific technology of the DetNet transit
   node at the h-th hop but not on the T-SPEC of this DetNet flow.
   Then set non_queuing_delay_bound = the sum of
   per-hop_non_queuing_delay_bound[h] over all hops h.

   Second, compute queuing_delay_bound as an upper bound to the sum of
   the queuing delays along the path.  The value of
   queuing_delay_bound depends on the T-SPEC of this DetNet flow and
   possibly of other flows in the network, as well as the specifics of
   the queuing mechanisms deployed along the path of this DetNet flow.
   The computation of queuing_delay_bound is described in Section 4.2.

4.2.  Queuing delay bound

   For several queuing mechanisms, queuing_delay_bound is less than
   the sum of upper bounds on the queuing delays (5,6) at every hop.
   This occurs with (1) per-flow queuing, and (2) aggregate queuing
   with regulators, as explained in Section 4.2.1, Section 4.2.2, and
   Section 6.

   For other queuing mechanisms, the only available value of
   queuing_delay_bound is the sum of the per-hop queuing delay bounds.
   In such cases, the computation of per-hop queuing delay bounds must
   account for the fact that the T-SPEC of a DetNet flow is no longer
   satisfied at the ingress of a hop, since burstiness increases as a
   flow traverses a DetNet transit node.

4.2.1.  Per-flow queuing mechanisms

   With such mechanisms, each flow uses a separate queue inside every
   node.
   The service for each queue is abstracted with a guaranteed rate and
   a latency.  For every DetNet flow, a per-node delay bound as well
   as an end-to-end delay bound can be computed from the traffic
   specification of this DetNet flow at its source and from the values
   of rates and latencies at all nodes along its path.  Per-flow
   queuing is used in Guaranteed-Service IntServ.  Details of the
   calculation for Guaranteed-Service IntServ are described in
   Section 6.5.

4.2.2.  Aggregate queuing mechanisms

   With such mechanisms, multiple flows are aggregated into
   macro-flows and there is one FIFO queue per macro-flow.  A
   practical example is the credit-based shaper defined in section
   8.6.8.2 of [IEEE8021Q], where a macro-flow is called a "class".
   One key issue in this context is how to deal with the burstiness
   cascade: individual flows that share a resource dedicated to a
   macro-flow may see their burstiness increase, which may in turn
   cause increased burstiness for other flows downstream of this
   resource.  Computing delay upper bounds for such cases is
   difficult, and in some conditions impossible
   [charny2000delay][bennett2002delay].  Also, when bounds are
   obtained, they depend on the complete configuration, and must be
   recomputed when one flow is added (see the dynamic calculation,
   Section 3.1.2).

   A solution to deal with this issue for DetNet flows is to reshape
   them at every hop.  This can be done with per-flow regulators
   (e.g., leaky bucket shapers), but this requires per-flow queuing
   and defeats the purpose of aggregate queuing.  An alternative is
   the interleaved regulator, which reshapes individual DetNet flows
   without per-flow queuing ([Specht2016UBS], [IEEE8021Qcr]).
   With an interleaved regulator, the packet at the head of the queue
   is regulated based on its (flow) regulation constraints; it is
   released at the earliest time at which this is possible without
   violating the constraint.  One key feature of per-flow and
   interleaved regulators is that they do not increase worst-case
   latency bounds [le_boudec2018theory].  Specifically, when an
   interleaved regulator is appended to a FIFO subsystem, it does not
   increase the worst-case delay of the latter.

   Figure 2 shows an example of a network with five nodes, an
   aggregate queuing mechanism, and interleaved regulators as in
   Figure 1.  An end-to-end delay bound for DetNet flow f, traversing
   nodes 1 to 5, is calculated as follows:

      end_to_end_latency_bound_of_flow_f = C12 + C23 + C34 + S4

   In the above formula, Cij is a bound on the delay of the queuing
   subsystem in node i and the interleaved regulator of node j, and S4
   is a bound on the delay of the queuing subsystem in node 4 for
   DetNet flow f.  In fact, using the delay definitions in
   Section 3.2, Cij is a bound on the sum of the delays 1,2,3,6 of
   node i and 4,5 of node j.  Similarly, S4 is a bound on the sum of
   the delays 1,2,3,6 of node 4.  A practical example of the queuing
   model and delay calculation is presented in Section 6.4.

                  f
      ----------------------------->
      +---+   +---+   +---+   +---+   +---+
      | 1 |---| 2 |---| 3 |---| 4 |---| 5 |
      +---+   +---+   +---+   +---+   +---+
      \__C12_/\__C23_/\__C34_/\_S4_/

           Figure 2: End-to-end delay computation example

   REMARK: The end-to-end delay bound calculation provided here gives
   a much better upper bound than computing the end-to-end delay bound
   by adding the delay bounds of each node in the path of a DetNet
   flow [TSNwithATS].

4.3.
  Ingress considerations

   A sender can be a DetNet node that uses exactly the same queuing
   methods as its adjacent DetNet transit node, so that the delay and
   buffer bounds calculations at the first hop are indistinguishable
   from those at a later hop within the DetNet domain.  On the other
   hand, the sender may be DetNet-unaware, in which case some
   conditioning of the DetNet flow may be necessary at the ingress
   DetNet transit node.

   This ingress conditioning typically consists of a FIFO with an
   output regulator that is compatible with the queuing employed by
   the DetNet transit node on its output port(s).  For some queuing
   methods, ingress conditioning simply requires adding extra buffer
   space in the queuing subsystem.  Ingress conditioning requirements
   for different queuing methods are mentioned in the sections below
   describing those queuing methods.

4.4.  Interspersed DetNet-unaware transit nodes

   It is sometimes desirable to build a network that has both
   DetNet-aware transit nodes and DetNet-unaware transit nodes, and
   for a DetNet flow to traverse an island of DetNet-unaware transit
   nodes, while still allowing the network to offer delay and
   congestion loss guarantees.  This is possible under certain
   conditions.

   In general, when passing through a DetNet-unaware island, the
   island may cause delay variation in excess of what would be caused
   by DetNet nodes.  That is, the DetNet flow might be "lumpier" after
   traversing the DetNet-unaware island.  DetNet guarantees for delay
   and buffer requirements can still be calculated and met if and only
   if the following are true:

   1.  The latency variation across the DetNet-unaware island must be
       bounded and calculable.

   2.  An ingress conditioning function (Section 4.3) is required at
       the re-entry to the DetNet-aware domain.
       This will, at least, require some extra buffering to
       accommodate the additional delay variation, and thus further
       increases the delay bound.

   The ingress conditioning is exactly the same problem as that of a
   sender at the edge of the DetNet domain.  The requirement for
   bounds on the latency variation across the DetNet-unaware island is
   typically the most difficult to achieve.  Without such a bound, it
   is obvious that DetNet cannot deliver its guarantees, so a
   DetNet-unaware island that cannot offer bounded latency variation
   cannot be used to carry a DetNet flow.

5.  Achieving zero congestion loss

   When the input rate to an output queue exceeds the output rate for
   a sufficient length of time, the queue must overflow.  This is
   congestion loss, and this is what deterministic networking seeks to
   avoid.

   To avoid congestion losses, an upper bound on the backlog present
   in the regulator and queuing subsystem of Figure 1 must be computed
   during resource reservation.  This bound depends on the set of
   flows that use these queues, the details of the specific queuing
   mechanism, and an upper bound on the processing delay (4).  The
   queue must contain the packet in transmission plus all other
   packets that are waiting to be selected for output.

   A conservative backlog bound that applies to all systems can be
   derived as follows.

   The backlog bound is counted in data units (bytes, or words of
   multiple bytes) that are relevant for buffer allocation.  For every
   flow or aggregate of flows, we need one buffer space for the packet
   in transmission, plus space for the packets that are waiting to be
   selected for output.  Excluding transmission and frame preemption
   times, the packets are waiting in the queue since the reception of
   the last bit, for a duration equal to the processing delay (4) plus
   the queuing delays (5,6).
   Let

   o  total_in_rate be the sum of the line rates of all input ports
      that send traffic to this output port.  The value of
      total_in_rate is in data units (e.g., bytes) per second.

   o  nb_input_ports be the number of input ports that send traffic to
      this output port.

   o  max_packet_length be the maximum packet size for packets that
      may be sent to this output port.  This is counted in data units.

   o  max_delay456 be an upper bound, in seconds, on the sum of the
      processing delay (4) and the queuing delays (5,6) for any packet
      at this output port.

   Then a bound on the backlog of traffic in the queue at this output
   port is

      backlog_bound = nb_input_ports * max_packet_length +
                      total_in_rate * max_delay456

6.  Queuing techniques

   In this section, for simplicity of delay computation, we assume
   that the T-SPEC or arrival curve [NetCalBook] for each DetNet flow
   at the source is leaky bucket.  Also, at each DetNet transit node,
   the service for each queue is abstracted with a guaranteed rate and
   a latency.

6.1.  Queuing data model

   Sophisticated queuing mechanisms are available in Layer 3 (L3; see,
   e.g., [RFC7806] for an overview).  In general, we assume that
   "Layer 3" queues, shapers, meters, etc., are precisely the
   "regulators" shown in Figure 1.  The "queuing subsystems" in this
   figure are not the province solely of bridges; they are an
   essential part of any DetNet transit node.  As illustrated by
   numerous implementation examples, some of the "Layer 3" mechanisms
   described in documents such as [RFC7806] are often integrated, in
   an implementation, with the "Layer 2" mechanisms also implemented
   in the same node.  An integrated model is needed in order to
   successfully predict the interactions among the different queuing
   mechanisms needed in a network carrying both DetNet flows and
   non-DetNet flows.
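   For concreteness, the conservative backlog bound of Section 5 can
   be sketched as follows; the function name and the numeric values
   are assumptions chosen only for illustration.

```python
# Illustrative sketch of the conservative backlog bound of Section 5:
#   backlog_bound = nb_input_ports * max_packet_length
#                   + total_in_rate * max_delay456

def backlog_bound(nb_input_ports, max_packet_length,
                  total_in_rate, max_delay456):
    """Bound, in data units (e.g., bytes), on the backlog at one
    output port, per Section 5."""
    return (nb_input_ports * max_packet_length
            + total_in_rate * max_delay456)

# Example: 4 gigabit input ports (4 x 125,000,000 bytes/s), 1500-byte
# maximum packets, and a 100 us bound on delays 4, 5, and 6.
print(backlog_bound(4, 1500, 4 * 125_000_000, 100e-6))  # about 56000 bytes
```

   The first term reserves one maximum-length packet per input port;
   the second term covers everything that can arrive while a packet
   waits out the delays 4, 5, and 6.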
   Figure 3 shows the general model for the flow of packets through
   the queues of a DetNet transit node.  The DetNet packets are mapped
   to a number of regulators.  Here, we assume that the PREOF (Packet
   Replication, Elimination, and Ordering Functions) are performed
   before the DetNet packets enter the regulators.  All packets are
   assigned to a set of queues.  Queues compete for the selection of
   packets to be passed to queues in the queuing subsystem.  Packets
   again are selected for output from the queuing subsystem.

                                    |
   +--------------------------------V----------------------------------+
   |                        Queue assignment                           |
   +--+------+----------+---------+-----------+-----+-------+-------+--+
      |      |          |         |           |     |       |       |
   +--V-+ +--V-+     +--V--+   +--V--+     +--V--+  |       |       |
   |Flow| |Flow|     |Flow |   |Flow |     |Flow |  |       |       |
   | 0  | | 1  | ... |  i  |   | i+1 | ... |  n  |  |       |       |
   | reg| | reg|     | reg |   | reg |     | reg |  |       |       |
   +--+-+ +--+-+     +--+--+   +--+--+     +--+--+  |       |       |
      |      |          |         |           |     |       |       |
   +--V------V----------V--+   +--V-----------V--+  |       |       |
   |   Trans. selection    |   | Trans. select.  |  |       |       |
   +----------+------------+   +-----+-----------+  |       |       |
              |                      |              |       |       |
           +--V--+                +--V--+        +--V--+ +--V--+ +--V--+
           | out |                | out |        | out | | out | | out |
           |queue|                |queue|        |queue| |queue| |queue|
           |  1  |                |  2  |        |  3  | |  4  | |  5  |
           +--+--+                +--+--+        +--+--+ +--+--+ +--+--+
              |                      |              |       |       |
   +----------V----------------------V--------------V-------V-------V--+
   |                      Transmission selection                       |
   +---------------------------------+---------------------------------+
                                     |
                                     V

               Figure 3: IEEE 802.1Q Queuing Model: Data flow

   Some relevant mechanisms are hidden in this figure, and are
   performed in the queue boxes:

   o  Discarding packets because a queue is full.

   o  Discarding packets marked "yellow" by a metering function, in
      preference to discarding "green" packets.

   Ideally, neither of these actions is performed on DetNet packets.
638 Full queues for DetNet packets should occur only when a DetNet flow 639 is misbehaving, and the DetNet QoS does not include "yellow" service 640 for packets in excess of committed rate. 642 The queue assignment function can be quite complex, even in a bridge 643 [IEEE8021Q], since the introduction of per-stream filtering and 644 policing ([IEEE8021Q] clause 8.6.5.1). In addition to the Layer 2 645 priority expressed in the 802.1Q VLAN tag, a DetNet transit node can 646 utilize any of the following information to assign a packet to a 647 particular queue: 649 o Input port. 651 o Selector based on a rotating schedule that starts at regular, 652 time-synchronized intervals and has nanosecond precision. 654 o MAC addresses, VLAN ID, IP addresses, Layer 4 port numbers, DSCP. 655 ([RFC8939], [RFC8964]) (Work items are expected to add MPC and 656 other indicators.) 658 o The queue assignment function can contain metering and policing 659 functions. 661 o MPLS and/or pseudowire ([RFC6658]) labels. 663 The "Transmission selection" function decides which queue is to 664 transfer its oldest packet to the output port when a transmission 665 opportunity arises. 667 6.2. Frame Preemption 669 In [IEEE8021Q] and [IEEE8023], the transmission of a frame can be 670 interrupted by one or more "express" frames, and then the interrupted 671 frame can continue transmission. The frame preemption is modeled as 672 consisting of two MAC/PHY stacks, one for packets that can be 673 interrupted, and one for packets that can interrupt the interruptible 674 packets. Only one layer of frame preemption is supported -- a 675 transmitter cannot have more than one interrupted frame in progress. 676 DetNet flows typically pass through the interrupting MAC. 
For those 677 DetNet flows with a T-SPEC, a latency bound can be calculated by the 678 methods provided in the following sections, which account for the 679 effect of frame preemption, according to the specific queuing 680 mechanism that is used in DetNet nodes. Best-effort queues pass 681 through the interruptible MAC, and can thus be preempted. 683 6.3. Time Aware Shaper 685 In [IEEE8021Q], the notion of time-scheduling queue gates is 686 described in section 8.6.8.4. On each node, the transmission 687 selection for packets is controlled by time-synchronized gates; each 688 output queue is associated with a gate. The gates can be either open 689 or closed. The states of the gates are determined by the gate control 690 list (GCL). The GCL specifies the opening and closing times of the 691 gates. The design of the GCL should satisfy the latency upper bound 692 requirements of all DetNet flows; the DetNet flows that 693 traverse the network then have bounded latency, if the traffic and 694 nodes are conformant. 696 It should be noted that scheduled traffic service relies on a 697 synchronized network and coordinated GCL configuration. Synthesis of 698 GCLs on multiple nodes in a network is a scheduling problem considering 699 all DetNet flows traversing the network, which is a non-deterministic 700 polynomial-time hard (NP-hard) problem. Also, at this writing, 701 scheduled traffic service supports no more than eight traffic queues, 702 typically using up to seven priority queues and at least one best 703 effort queue. 705 6.4. Credit-Based Shaper with Asynchronous Traffic Shaping 707 In the queuing model considered here, there are four traffic 708 classes (Definition 3.268 of [IEEE8021Q]): control-data traffic 709 (CDT), class A, class B, and best effort (BE), in decreasing order of 710 priority. Flows of classes A and B are together referred to as AVB 711 flows. This model is a subset of Time-Sensitive Networking as 712 described next.
714 Based on the timing model described in Figure 1, contention 715 occurs only at the output port of a DetNet transit node; therefore, 716 the focus of the rest of this subsection is on the regulator and 717 queuing subsystem in the output port of a DetNet transit node. The 718 input flows are identified using the information in Section 5.1 of 719 [RFC8939]. Then they are aggregated into eight macro flows based on 720 their service requirements; we refer to each macro flow as a class. 721 The output port performs aggregate scheduling with eight queues 722 (queuing subsystems): one for CDT, one for class A flows, one for 723 class B flows, and five for BE traffic, denoted BE0-BE4. The 724 queuing policy for each queuing subsystem is FIFO. In addition, each 725 node output port also performs per-flow regulation for AVB flows 726 using an interleaved regulator (IR), called an Asynchronous Traffic 727 Shaper [IEEE8021Qcr]. Thus, at each output port of a node, there is 728 one interleaved regulator per input port and per class; the 729 interleaved regulator is mapped to the regulator depicted in 730 Figure 1. The detailed picture of the scheduling and regulation 731 architecture at a node output port is given by Figure 4. The packets 732 received at a node input port for a given class are enqueued in the 733 respective interleaved regulator at the output port. Then, the 734 packets from all the flows, including CDT and BE flows, are enqueued 735 in the queuing subsystem; there is no regulator for these classes.
737         +--+   +--+ +--+   +--+
738         |  |   |  | |  |   |  |
739         |IR|   |IR| |IR|   |IR|
740         |  |   |  | |  |   |  |
741         +-++XXX++-+ +-++XXX++-+
742           |     |     |     |
743           |     |     |     |
744   +---+ +-v-XXX-v-+ +-v-XXX-v-+ +-----+ +-----+ +-----+ +-----+ +-----+
745   |   | |         | |         | |Class| |Class| |Class| |Class| |Class|
746   |CDT| | Class A | | Class B | | BE4 | | BE3 | | BE2 | | BE1 | | BE0 |
747   |   | |         | |         | |     | |     | |     | |     | |     |
748   +-+-+ +----+----+ +----+----+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+
749     |        |           |         |       |       |       |       |
750     |      +-v-+       +-v-+       |       |       |       |       |
751     |      |CBS|       |CBS|       |       |       |       |       |
752     |      +-+-+       +-+-+       |       |       |       |       |
753     |        |           |         |       |       |       |       |
754   +-v--------v-----------v---------v-------V-------v-------v-------v--+
755   |                     Strict Priority selection                     |
756   +--------------------------------+----------------------------------+
757                                    |
758                                    V

760   Figure 4: The architecture of an output port inside a relay node with
761   interleaved regulators (IRs) and credit-based shaper (CBS)

763 Each of the queuing subsystems for classes A and B contains a Credit- 764 Based Shaper (CBS). The CBS serves a packet from a class according 765 to the available credit for that class. The credit for each class A 766 or B increases based on the idle slope, and decreases based on the 767 send slope, both of which are parameters of the CBS (Section 8.6.8.2 768 of [IEEE8021Q]). The CDT and BE0-BE4 flows are served by separate 769 queuing subsystems. Then, packets from all flows are served by a 770 transmission selection subsystem that serves packets from each class 771 based on its priority. All subsystems are non-preemptive. 772 Guarantees for AVB traffic can be provided only if CDT traffic is 773 bounded; it is assumed that the CDT traffic has a leaky bucket arrival 774 curve with two parameters, r_h as rate and b_h as bucket size, i.e., 775 the amount of bits entering a node within a time interval t is 776 bounded by r_h t + b_h. 778 Additionally, it is assumed that the AVB flows are also regulated at 779 their source according to a leaky bucket arrival curve.
At the source, 780 the traffic satisfies its regulation constraint, i.e., the delay due 781 to the interleaved regulator at the source is ignored. 783 At each DetNet transit node implementing an interleaved regulator, 784 packets of multiple flows are processed in one FIFO queue; the packet 785 at the head of the queue is regulated based on its leaky bucket 786 parameters; it is released at the earliest time at which this is 787 possible without violating the constraint. 789 The regulation parameters for a flow (leaky bucket rate and bucket 790 size) are the same at its source and at all DetNet transit nodes 791 along its path in the case that all clocks are perfect. However, 792 in reality there is clock nonideality throughout the DetNet domain, 793 even with clock synchronization. This phenomenon causes inaccuracy 794 in the rates configured at the regulators, which may lead to network 795 instability. To avoid this, when configuring the regulators, the 796 rates are set to the source rates with some positive margin. 797 [Thomas2020time] describes and provides solutions to this issue. 799 6.4.1. Delay Bound Calculation 801 A delay bound of the queuing subsystem ((4) in Figure 1) for an AVB 802 flow of class A or B can be computed if the following condition 803 holds: 805 sum of leaky bucket rates of all flows of this class at this 806 transit node <= R, where R is given below for every class. 808 If the condition holds, the delay bound for a flow of class X (A or 809 B) is d_X, calculated as: 811 d_X = T_X + (b_t_X-L_min_X)/R_X - L_min_X/c 813 where L_min_X is the minimum packet length of class X (A or B); c is 814 the output link transmission rate; b_t_X is the sum of the b terms 815 (bucket sizes) for all the flows of class X.
Parameters R_X and 816 T_X are calculated as follows for class A and class B, separately: 818 If the flow is of class A: 820 R_A = I_A (c-r_h)/c 822 T_A = (L_nA + b_h + r_h L_n/c)/(c-r_h) 824 where L_nA is the maximum packet length of class B and BE packets; 825 L_n is the maximum packet length of classes A, B, and BE. 827 If the flow is of class B: 829 R_B = I_B (c-r_h)/c 831 T_B = (L_BE + L_A + L_nA I_A/(c-I_A) + b_h + r_h L_n/c)/(c-r_h) 833 where L_A is the maximum packet length of class A; L_BE is the 834 maximum packet length of class BE. 836 Then, an end-to-end delay bound for class X (A or B) is calculated by 837 the formula in Section 4.2.2, where for Cij: 839 Cij = d_X 841 More information on the delay analysis in such a DetNet transit node is 842 provided in [TSNwithATS]. 844 6.4.2. Flow Admission 846 The delay bound calculation requires some information about each 847 node. For each node, it is required to know the idle slope of the CBS 848 for each of classes A and B (I_A and I_B), as well as the transmission 849 rate of the output link (c). In addition, it is necessary to have 850 information on each class, i.e., the maximum packet length of classes A, 851 B, and BE. Moreover, the leaky bucket parameters of CDT (r_h, b_h) 852 should be known. To admit flows of classes A and B, it must be 853 guaranteed that their delay requirements are not violated. As 854 described in Section 3.1, the two problems, static and dynamic, are 855 addressed separately. In either of the problems, the rate and delay 856 should be guaranteed. Thus, 858 The static admission control: 859 The leaky bucket parameters of all AVB flows are known; 860 therefore, for each AVB flow f, a delay bound can be 861 calculated. The computed delay bound for every AVB flow 862 should not be more than its delay requirement. Moreover, the 863 sum of the rates (r_f) of all flows of a class should not be more than 864 the rate (R) allocated to that class.
If these two 865 conditions hold, the configuration is declared admissible. 867 The dynamic admission control: 868 For dynamic admission control, we allocate to every node and 869 every class A or B a static value for the rate (R) and maximum 870 burstiness (b_t). In addition, for every node and every 871 class A and B, two counters are maintained: 873 R_acc is equal to the sum of the leaky-bucket rates of all 874 flows of this class already admitted at this node; at all 875 times, we must have: 877 R_acc <= R, (Eq. 1) 879 b_acc is equal to the sum of the bucket sizes of all flows 880 of this class already admitted at this node; at all times, 881 we must have: 883 b_acc <= b_t. (Eq. 2) 885 A new AVB flow is admitted at this node if Eqs. (1) and (2) 886 continue to be satisfied after adding its leaky bucket rate 887 and bucket size to R_acc and b_acc. An AVB flow is admitted 888 in the network if it is admitted at all nodes along its 889 path. When this happens, all variables R_acc and b_acc along 890 its path must be incremented to reflect the addition of the 891 flow. Similarly, when an AVB flow leaves the network, all 892 variables R_acc and b_acc along its path must be decremented 893 to reflect the removal of the flow. 895 The choice of the static values of R and b_t at all nodes and classes 896 must be done in a prior configuration phase; R controls the bandwidth 897 allocated to this class at this node, and b_t affects the delay bound and 898 the buffer requirement. R must satisfy the constraints given in 899 Annex L.1 of [IEEE8021Q]. 901 6.5. Guaranteed-Service IntServ 903 Guaranteed-Service Integrated Services (IntServ) is an architecture 904 that specifies the elements needed to guarantee quality of service (QoS) on 905 networks [RFC2212]. 907 The flow, at the source, has a leaky bucket arrival curve with two 908 parameters, r as rate and b as bucket size; i.e., the amount of bits 909 entering a node within a time interval t is bounded by r t + b.
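This leaky bucket constraint, which is also the form of regulation assumed throughout Section 6, can be stated as a small non-normative check (Python sketch; the function name and the numbers in the example are illustrative only):

```python
def conforms_to_leaky_bucket(bits_observed, t, r, b):
    """True if observing `bits_observed` bits over an interval of t seconds
    is consistent with a leaky bucket arrival curve of rate r (bit/s) and
    bucket size b (bits), i.e., bits_observed <= r * t + b."""
    return bits_observed <= r * t + b

# A flow with r = 1e6 bit/s and b = 8000 bits may send at most
# 1e6 * 0.01 + 8000 = 18000 bits in any 10 ms interval.
print(conforms_to_leaky_bucket(18000, 0.01, 1e6, 8000))  # True
print(conforms_to_leaky_bucket(18001, 0.01, 1e6, 8000))  # False
```

The constraint must hold for every interval of length t, not only one observation window; a regulator enforces exactly this property on its output.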
911 If a resource reservation on a path is applied, a node provides a 912 guaranteed rate R and a maximum service latency of T. This can be 913 interpreted to mean that the bits might have to wait up to T before 914 being served with a rate greater than or equal to R. The delay bound of 915 the flow traversing the node is T + b / R. 917 Consider a Guaranteed-Service IntServ path including a sequence of 918 nodes, where the i-th node provides a guaranteed rate R_i and a maximum 919 service latency of T_i. Then, the end-to-end delay bound for a flow 920 on this path can be calculated as sum(T_i) + b / min(R_i). 922 The provided delay bound is based on a simple case of Guaranteed- 923 Service IntServ where only a guaranteed rate, a maximum service 924 latency, and a leaky bucket arrival curve are available. If more 925 information about the flow is known, e.g., the peak rate, the delay 926 bound is more complicated; the details are available in [RFC2212] and 927 Section 1.4.1 of [NetCalBook]. 929 6.6. Cyclic Queuing and Forwarding 931 Annex T of [IEEE8021Q] describes Cyclic Queuing and Forwarding (CQF), 932 which provides bounded latency and zero congestion loss using the 933 time-scheduled gates of [IEEE8021Q] section 8.6.8.4. For a given 934 class of DetNet flows, a set of two or more buffers is provided at 935 the output queue layer of Figure 3. A cycle time T_c is configured 936 for each class of DetNet flows c, and all of the buffer sets in a 937 class of DetNet flows swap buffers simultaneously throughout the 938 DetNet domain at that cycle rate, all in phase. In such a mechanism, 939 the regulator, mentioned in Figure 1, is not required. 941 In the case of two-buffer CQF, each class of DetNet flows c has two 942 buffers, namely buffer1 and buffer2. In a cycle (i), while buffer1 943 accumulates received packets from the node's reception ports, buffer2 944 transmits the already stored packets from the previous cycle (i-1).
945 In the next cycle (i+1), buffer2 stores the received packets and 946 buffer1 transmits the packets received in cycle (i). The duration of 947 each cycle is T_c. 949 The per-hop latency is trivially determined by the cycle time T_c: 950 a packet transmitted from a node at cycle (i) is transmitted 951 from the next node at cycle (i+1). Hence, the maximum delay 952 experienced by a given packet is from the beginning of cycle (i) to 953 the end of cycle (i+1), or 2T_c; also, the minimum delay is from the 954 end of cycle (i) to the beginning of cycle (i+1), i.e., zero. Then, 955 if the packet traverses h hops, the maximum delay is: 957 (h+1) T_c 959 and the minimum delay is: 961 (h-1) T_c 963 which gives a latency variation of 2T_c. 965 The cycle length T_c should be carefully chosen; it needs to be large 966 enough to accommodate all the DetNet traffic, plus at least one 967 maximum-length interfering packet, that can be received within one cycle. 968 Also, the value of T_c includes a time interval, called the dead time 969 (DT), which is the sum of the delays 1, 2, 3, and 4 defined in Figure 1. 971 The value of DT guarantees that the last packet of one cycle in a 972 node is fully delivered to a buffer of the next node in the same 973 cycle. A two-buffer CQF is recommended if DT is small compared to 974 T_c. For a large DT, a CQF with more buffers can be used, and a cycle 975 identification label can be added to the packets. 977 Ingress conditioning (Section 4.3) may be required if the source of a 978 DetNet flow does not, itself, employ CQF. Since there are no per- 979 flow parameters in the CQF technique, per-hop configuration is not 980 required in the CQF forwarding nodes. 982 7. Example application on a DetNet IP network 984 This section provides an example application of this document on a 985 DetNet-enabled IP network.
Consider Figure 5, taken from Section 3 986 of [RFC8939], that shows a simple IP network: 988 o End-system 1 implements Guaranteed-Service IntServ as in 989 Section 6.5 between itself and relay node 1. 991 o Sub-network 1 is a TSN network. The nodes in sub-network 1 992 implement credit-based shapers with asynchronous traffic shaping 993 as in Section 6.4. 995 o Sub-network 2 is a TSN network. The nodes in sub-network 2 996 implement cyclic queuing and forwarding with two buffers as in 997 Section 6.6. 999 o Relay nodes 1 and 2 implement credit-based shapers with 1000 asynchronous traffic shaping as in Section 6.4. They also perform 1001 the aggregation and mapping of IP DetNet flows to TSN streams 1002 (Section 4.4 of [I-D.ietf-detnet-ip-over-tsn]).

1004   DetNet IP       Relay                        Relay       DetNet IP
1005   End-System      Node 1                       Node 2     End-System
1006       1                                                         2
1007  +----------+                                             +----------+
1008  |   Appl.  |<------------ End-to-End Service ----------->|   Appl.  |
1009  +----------+  ............                 ...........   +----------+
1010  | Service  |<-: Service  :-- DetNet flow --: Service  :->| Service  |
1011  +----------+  +----------+                 +----------+  +----------+
1012  |Forwarding|  |Forwarding|                 |Forwarding|  |Forwarding|
1013  +--------.-+  +-.------.-+                 +-.---.----+  +-------.--+
1014           : Link :       \    ,-----.        /     \   ,-----.   /
1015           +......+        +----[ Sub-  ]----+       +-[ Sub-  ]-+
1016                                [Network]              [Network]
1017                                 `--1--'                `--2--'

1019  |<--------------------- DetNet IP --------------------->|

1021  |<--- d1 --->|<--------------- d2_p --------------->|<-- d3_p -->|

1023  Figure 5: A Simple DetNet-Enabled IP Network, taken from RFC 8939

1025 Consider a fully centralized control plane for the network of 1026 Figure 5, as described in Section 3.2 of 1027 [I-D.ietf-detnet-controller-plane-framework]. Suppose end-system 1 1028 wants to create a DetNet flow with a given traffic specification, destined to 1029 end-system 2, with an end-to-end delay bound requirement D.
Therefore, 1030 the control plane receives a flow establishment request and 1031 calculates a number of valid paths through the network (Section 3.2 1032 of [I-D.ietf-detnet-controller-plane-framework]). To select a proper 1033 path, the control plane needs to compute an end-to-end delay bound 1034 for each selected path p. 1036 The end-to-end delay bound is d1 + d2_p + d3_p, where d1 is the delay 1037 bound from end-system 1 to the entrance of relay node 1, d2_p is the 1038 delay bound for path p from relay node 1 to the entrance of the first 1039 node in sub-network 2, and d3_p is the delay bound of path p from the 1040 first node in sub-network 2 to end-system 2. The computation of d1 1041 is explained in Section 6.5. Since relay node 1, sub-network 1, 1042 and relay node 2 implement aggregate queuing, we use the results in 1043 Section 4.2.2 and Section 6.4 to compute d2_p for the path p. 1044 Finally, d3_p is computed using the delay bound computation of 1045 Section 6.6. Any path p such that d1 + d2_p + d3_p <= D satisfies 1046 the delay bound requirement of the flow. If there is no such path, 1047 the control plane may compute a new set of valid paths and redo the 1048 delay bound computation, or it may not admit the DetNet flow. 1050 As soon as the control plane selects a path that satisfies the delay 1051 bound constraint, it allocates and reserves the resources in the path 1052 for the DetNet flow (Section 4.2 of 1053 [I-D.ietf-detnet-controller-plane-framework]).
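The path selection step described above can be sketched as follows (non-normative Python; the path names and delay values are invented placeholders, not derived from a real configuration):

```python
def select_path(paths, d1, D):
    """Return the name of the first path p with d1 + d2_p + d3_p <= D,
    or None if no candidate path satisfies the delay bound requirement.

    paths -- iterable of (name, d2_p, d3_p) tuples, delays in seconds
    d1    -- delay bound from end-system 1 to the entrance of relay node 1
    D     -- end-to-end delay bound requirement of the DetNet flow
    """
    for name, d2_p, d3_p in paths:
        if d1 + d2_p + d3_p <= D:
            return name
    return None  # no admissible path: recompute valid paths or reject the flow

# Illustrative values only: p1 yields 8 ms end to end, p2 yields 5 ms.
candidates = [("p1", 0.004, 0.003), ("p2", 0.002, 0.002)]
print(select_path(candidates, d1=0.001, D=0.006))  # p2
```

A real control plane would additionally update the per-node admission counters (Section 6.4.2) before committing the reservation.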
Achieving such 1064 loss rates and bounded latency may not be possible in the face of a 1065 highly capable adversary, such as the one envisioned by the Internet 1066 Threat Model of BCP 72 [RFC3552], which can arbitrarily drop or delay 1067 any or all traffic. In order to present meaningful security 1068 considerations, we consider a somewhat weaker attacker who does not 1069 control the physical links of the DetNet domain but may have the 1070 ability to control a network node within the boundary of the DetNet 1071 domain. 1073 A security consideration for this document is to secure the resource 1074 reservation signaling for DetNet flows. Any forgery or manipulation of 1075 packets during reservation may cause the flow not to be admitted or 1076 to face a delay bound violation. Security mitigation for this issue is 1077 described in Section 7.6 of [I-D.ietf-detnet-security]. 1079 9. IANA considerations 1081 This document has no IANA actions. 1083 10. References 1085 10.1. Normative References 1087 [RFC2212] Shenker, S., Partridge, C., and R. Guerin, "Specification 1088 of Guaranteed Quality of Service", RFC 2212, 1089 DOI 10.17487/RFC2212, September 1997, 1090 . 1092 [RFC6658] Bryant, S., Ed., Martini, L., Swallow, G., and A. Malis, 1093 "Packet Pseudowire Encapsulation over an MPLS PSN", 1094 RFC 6658, DOI 10.17487/RFC6658, July 2012, 1095 . 1097 [RFC7806] Baker, F. and R. Pan, "On Queuing, Marking, and Dropping", 1098 RFC 7806, DOI 10.17487/RFC7806, April 2016, 1099 . 1101 [RFC8655] Finn, N., Thubert, P., Varga, B., and J. Farkas, 1102 "Deterministic Networking Architecture", RFC 8655, 1103 DOI 10.17487/RFC8655, October 2019, 1104 . 1106 [RFC8939] Varga, B., Ed., Farkas, J., Berger, L., Fedyk, D., and S. 1107 Bryant, "Deterministic Networking (DetNet) Data Plane: 1108 IP", RFC 8939, DOI 10.17487/RFC8939, November 2020, 1109 . 1111 [RFC8964] Varga, B., Ed., Farkas, J., Berger, L., Malis, A., Bryant, 1112 S., and J.
Korhonen, "Deterministic Networking (DetNet) 1113 Data Plane: MPLS", RFC 8964, DOI 10.17487/RFC8964, January 1114 2021, . 1116 10.2. Informative References 1118 [bennett2002delay] 1119 J.C.R. Bennett, K. Benson, A. Charny, W.F. Courtney, and 1120 J.-Y. Le Boudec, "Delay Jitter Bounds and Packet Scale 1121 Rate Guarantee for Expedited Forwarding", 1122 . 1124 [charny2000delay] 1125 A. Charny and J.-Y. Le Boudec, "Delay Bounds in a Network 1126 with Aggregate Scheduling", . 1129 [I-D.ietf-detnet-controller-plane-framework] 1130 A. Malis, X. Geng, M. Chen, F. Qin, and B. Varga, 1131 "Deterministic Networking (DetNet) Controller Plane 1132 Framework draft-ietf-detnet-controller-plane-framework- 1133 00", . 1136 [I-D.ietf-detnet-ip-over-tsn] 1137 B. Varga, J. Farkas, A. Malis, and S. Bryant, "DetNet Data 1138 Plane: IP over IEEE 802.1 Time Sensitive Networking (TSN) 1139 draft-ietf-detnet-ip-over-tsn-07", 1140 . 1143 [I-D.ietf-detnet-security] 1144 E. Grossman, T. Mizrahi, and A. Hacker, "Deterministic 1145 Networking (DetNet) Security Considerations draft-ietf- 1146 detnet-security-16", . 1149 [IEEE8021Q] 1150 IEEE 802.1, "IEEE Std 802.1Q-2018: IEEE Standard for Local 1151 and metropolitan area networks - Bridges and Bridged 1152 Networks", 2018, 1153 . 1155 [IEEE8021Qcr] 1156 IEEE 802.1, "IEEE P802.1Qcr: IEEE Draft Standard for Local 1157 and metropolitan area networks - Bridges and Bridged 1158 Networks - Amendment: Asynchronous Traffic Shaping", 2017, 1159 . 1161 [IEEE8021TSN] 1162 IEEE 802.1, "IEEE 802.1 Time-Sensitive Networking (TSN) 1163 Task Group", . 1165 [IEEE8023] 1166 IEEE 802.3, "IEEE Std 802.3-2018: IEEE Standard for 1167 Ethernet", 2018, 1168 . 1170 [le_boudec2018theory] 1171 J.-Y. Le Boudec, "A Theory of Traffic Regulators for 1172 Deterministic Networks with Application to Interleaved 1173 Regulators", 1174 . 1176 [NetCalBook] 1177 J.-Y. Le Boudec and P. 
Thiran, "Network calculus: a theory 1178 of deterministic queuing systems for the internet", 2001, 1179 . 1181 [RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC 1182 Text on Security Considerations", BCP 72, RFC 3552, 1183 DOI 10.17487/RFC3552, July 2003, 1184 . 1186 [RFC8578] Grossman, E., Ed., "Deterministic Networking Use Cases", 1187 RFC 8578, DOI 10.17487/RFC8578, May 2019, 1188 . 1190 [Specht2016UBS] 1191 J. Specht and S. Samii, "Urgency-Based Scheduler for Time- 1192 Sensitive Switched Ethernet Networks", 1193 . 1195 [Thomas2020time] 1196 L. Thomas and J.-Y. Le Boudec, "On Time Synchronization 1197 Issues in Time-Sensitive Networks with Regulators and 1198 Nonideal Clocks", 1199 . 1201 [TSNwithATS] 1202 E. Mohammadpour, E. Stai, M. Mohiuddin, and J.-Y. Le 1203 Boudec, "End-to-end Latency and Backlog Bounds in Time- 1204 Sensitive Networking with Credit Based Shapers and 1205 Asynchronous Traffic Shaping", 1206 . 1208 Authors' Addresses 1210 Norman Finn 1211 Huawei Technologies Co. Ltd 1212 3101 Rio Way 1213 Spring Valley, California 91977 1214 US 1216 Phone: +1 925 980 6430 1217 Email: nfinn@nfinnconsulting.com 1219 Jean-Yves Le Boudec 1220 EPFL 1221 IC Station 14 1222 Lausanne EPFL 1015 1223 Switzerland 1225 Email: jean-yves.leboudec@epfl.ch 1227 Ehsan Mohammadpour 1228 EPFL 1229 IC Station 14 1230 Lausanne EPFL 1015 1231 Switzerland 1233 Email: ehsan.mohammadpour@epfl.ch 1234 Jiayi Zhang 1235 Huawei Technologies Co. Ltd 1236 Q27, No.156 Beiqing Road 1237 Beijing 100095 1238 China 1240 Email: zhangjiayi11@huawei.com 1242 Balazs Varga 1243 Ericsson 1244 Konyves Kalman krt. 11/B 1245 Budapest 1097 1246 Hungary 1248 Email: balazs.a.varga@ericsson.com 1250 Janos Farkas 1251 Ericsson 1252 Konyves Kalman krt. 11/B 1253 Budapest 1097 1254 Hungary 1256 Email: janos.farkas@ericsson.com