DetNet                                                           N. Finn
Internet-Draft                               Huawei Technologies Co. Ltd
Intended status: Informational                           J-Y. Le Boudec
Expires: 5 March 2022                                    E. Mohammadpour
                                                                    EPFL
                                                                J. Zhang
                                             Huawei Technologies Co. Ltd
                                                                B. Varga
                                                               J. Farkas
                                                                Ericsson
                                                        1 September 2021

                         DetNet Bounded Latency
                 draft-ietf-detnet-bounded-latency-07

Abstract

   This document references specific queuing mechanisms, defined in
   other documents, that can be used to control packet transmission at
   each output port and achieve the DetNet qualities of service.  This
   document presents a timing model for sources, destinations, and the
   DetNet transit nodes that relay packets that is applicable to all of
   those referenced queuing mechanisms.  Using the model presented in
   this document, it should be possible for an implementor, user, or
   standards development organization to select a particular set of
   queuing mechanisms for each device in a DetNet network, and to
   select a resource reservation algorithm for that network, so that
   those elements can work together to provide the DetNet service.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 5 March 2022.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology and Definitions
   3.  DetNet bounded latency model
     3.1.  Flow admission
       3.1.1.  Static latency calculation
       3.1.2.  Dynamic latency calculation
     3.2.  Relay node model
   4.  Computing End-to-end Delay Bounds
     4.1.  Non-queuing delay bound
     4.2.  Queuing delay bound
       4.2.1.  Per-flow queuing mechanisms
       4.2.2.  Aggregate queuing mechanisms
     4.3.  Ingress considerations
     4.4.  Interspersed DetNet-unaware transit nodes
   5.  Achieving zero congestion loss
   6.  Queuing techniques
     6.1.  Queuing data model
     6.2.  Frame Preemption
     6.3.  Time Aware Shaper
     6.4.  Credit-Based Shaper with Asynchronous Traffic Shaping
       6.4.1.  Delay Bound Calculation
       6.4.2.  Flow Admission
     6.5.  Guaranteed-Service IntServ
     6.6.  Cyclic Queuing and Forwarding
   7.  Example application on DetNet IP network
   8.  Security considerations
   9.  IANA considerations
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   The ability for IETF Deterministic Networking (DetNet) or IEEE 802.1
   Time-Sensitive Networking (TSN, [IEEE8021TSN]) to provide the DetNet
   services of bounded latency and zero congestion loss depends upon:
   A) configuring and allocating network resources for the exclusive
   use of DetNet flows; B) identifying, in the data plane, the
   resources to be utilized by any given packet; and C) the detailed
   behavior of those resources, especially transmission queue
   selection, so that latency bounds can be reliably assured.

   As explained in [RFC8655], DetNet flows are characterized by 1) a
   maximum bandwidth, guaranteed either by the transmitter or by strict
   input metering, and 2) a requirement for a guaranteed worst-case
   end-to-end latency.  That latency guarantee, in turn, provides the
   opportunity for the network to supply enough buffer space to
   guarantee zero congestion loss.  It is assumed in this document that
   the paths of DetNet flows are fixed.

   To be used by the applications identified in [RFC8578], it must be
   possible to calculate, before the transmission of a DetNet flow
   commences, both the worst-case end-to-end network latency and the
   amount of buffer space required at each hop to ensure against
   congestion loss.
   This document references specific queuing mechanisms, defined in
   [RFC8655], that can be used to control packet transmission at each
   output port and achieve the DetNet qualities of service.  This
   document presents a timing model for sources, destinations, and the
   DetNet transit nodes that relay packets that is applicable to all of
   those referenced queuing mechanisms.  It furthermore provides end-
   to-end delay bound and backlog bound computations for such
   mechanisms that can be used by the control plane to provide DetNet
   QoS.

   Using the model presented in this document, it should be possible
   for an implementor, user, or standards development organization to
   select a particular set of queuing mechanisms for each device in a
   DetNet network, and to select a resource reservation algorithm for
   that network, so that those elements can work together to provide
   the DetNet service.  Section 7 provides an example application of
   this document to a DetNet IP network with a combination of different
   queuing mechanisms.

   This document does not specify any resource reservation protocol or
   control plane function.  It disregards in-band packets that can be
   part of the stream, such as OAM packets and necessary
   retransmissions.  It does not describe all of the requirements for
   that protocol or control plane function.  It does describe
   requirements for such resource reservation methods, and for queuing
   mechanisms, that, if met, will enable them to work together.

2.  Terminology and Definitions

   This document uses the terms defined in [RFC8655].

3.  DetNet bounded latency model

3.1.  Flow admission

   This document assumes that the following paradigm is used to admit
   DetNet flows:

   1.  Perform any configuration required by the DetNet transit nodes
       in the network for aggregates of DetNet flows.  This
       configuration is done beforehand, and is not tied to any
       particular DetNet flow.

   2.  Characterize the new DetNet flow, particularly in terms of
       required bandwidth.

   3.  Establish the path that the DetNet flow will take through the
       network from the source to the destination(s).  This can be a
       point-to-point or a point-to-multipoint path.

   4.  Compute the worst-case end-to-end latency for the DetNet flow,
       using one of the methods below (Section 3.1.1, Section 3.1.2).
       In the process, determine whether sufficient resources are
       available for the DetNet flow to guarantee the required latency
       and to provide zero congestion loss.

   5.  Assuming that the resources are available, commit those
       resources to the DetNet flow.  This may or may not require
       adjusting the parameters that control the filtering and/or
       queuing mechanisms at each hop along the DetNet flow's path.

   This paradigm can be implemented using peer-to-peer protocols or
   using a central controller.  In some situations, a lack of resources
   can require backtracking and recursing through this list.

   Issues such as service preemption of one DetNet flow in favor of
   another, when resources are scarce, are not considered here.  Also
   not addressed is the question of how to choose the path to be taken
   by a DetNet flow.

3.1.1.  Static latency calculation

   The static problem:
      Given a network and a set of DetNet flows, compute an end-to-end
      latency bound (if computable) for each DetNet flow, and compute
      the resources, particularly buffer space, required in each DetNet
      transit node to achieve zero congestion loss.

   In this calculation, all of the DetNet flows are known before the
   calculation commences.  This problem is of interest to relatively
   static networks, or static parts of larger networks.  It provides
   bounds on delay and buffer size.
   The calculations can be extended to provide global optimizations,
   such as altering the path of one DetNet flow in order to make
   resources available to another DetNet flow with tighter constraints.

   The static latency calculation is not limited only to static
   networks; the entire calculation for all DetNet flows can be
   repeated each time a new DetNet flow is created or deleted.  If some
   already-established DetNet flow would be pushed beyond its latency
   requirements by the new DetNet flow, then the new DetNet flow can be
   refused, or some other suitable action can be taken.

   This calculation may be more difficult to perform than the dynamic
   calculation (Section 3.1.2), because the DetNet flows passing
   through one port on a DetNet transit node affect each other's
   latency.  The effects can even be circular, from a node A to B to C
   and back to A.  On the other hand, the static calculation can often
   accommodate queuing methods, such as transmission selection by
   strict priority, that are unsuitable for the dynamic calculation.

3.1.2.  Dynamic latency calculation

   The dynamic problem:
      Given a network whose maximum capacity for DetNet flows is
      bounded by a set of static configuration parameters applied to
      the DetNet transit nodes, and given just one DetNet flow, compute
      the worst-case end-to-end latency that can be experienced by that
      flow, no matter what other DetNet flows (within the network's
      configured parameters) might be created or deleted in the future.
      Also, compute the resources, particularly buffer space, required
      in each DetNet transit node to achieve zero congestion loss.

   This calculation is dynamic, in the sense that DetNet flows can be
   added or deleted at any time, with a minimum of computation effort,
   and without affecting the guarantees already given to other DetNet
   flows.
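   As an illustrative sketch (not part of this document; all names and
   numbers are hypothetical), the dynamic calculation can be phrased as
   a per-flow admission test that consults only the flow's own T-SPEC
   and the static per-node configuration, never the set of other flows.
   The per-hop rate-latency abstraction used below is one common choice
   and is assumed here for concreteness:

```python
from dataclasses import dataclass

@dataclass
class NodeConfig:
    """Static per-node parameters bounding the node's DetNet capacity."""
    rate: float     # minimum service rate guaranteed to the class (bytes/s)
    latency: float  # worst-case service latency term at full configured load (s)

def admit(path, burst, rate, deadline):
    """Return (bound, admitted) for one leaky-bucket flow (rate, burst).

    Each node's parameters already assume the configured maximum load,
    so creating or deleting other flows cannot invalidate this bound.
    """
    if any(rate > n.rate for n in path):
        return float("inf"), False          # flow exceeds a node's capacity
    bound, b = 0.0, burst
    for n in path:
        bound += b / n.rate + n.latency     # rate-latency per-hop delay bound
        b += rate * n.latency               # burstiness grows hop by hop
    return bound, bound <= deadline

# Hypothetical 3-hop path: 1.25 MB/s guaranteed rate, 250 us latency term.
path = [NodeConfig(rate=1.25e6, latency=250e-6)] * 3
bound, ok = admit(path, burst=1500.0, rate=1.0e5, deadline=5e-3)
```

   Under these made-up parameters the flow is admitted; refusing a flow
   here corresponds to step 4 of the admission paradigm in Section 3.1.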
   The choice of queuing methods is critical to the applicability of
   the dynamic calculation.  Some queuing methods (e.g., CQF,
   Section 6.6) make it easy to configure bounds on the network's
   capacity, and to make independent calculations for each DetNet flow.
   Some other queuing methods (e.g., strict priority with the credit-
   based shaper defined in [IEEE8021Q] section 8.6.8.2) can be used for
   dynamic DetNet flow creation, but yield poorer latency and buffer
   space guarantees than when that same queuing method is used for
   static DetNet flow creation (Section 3.1.1).

3.2.  Relay node model

   A model for the operation of a DetNet transit node is required, in
   order to define the latency and buffer calculations.  In Figure 1 we
   see a breakdown of the per-hop latency experienced by a packet
   passing through a DetNet transit node, in terms that are suitable
   for computing both hop-by-hop latency and per-hop buffer
   requirements.

      DetNet transit node A            DetNet transit node B
   +-------------------------+      +------------------------+
   |      Queuing            |      |      Queuing           |
   |  Regulator subsystem    |      |  Regulator subsystem   |
   |  +-+-+-+-+  +-+-+-+-+   |      |  +-+-+-+-+  +-+-+-+-+  |
   -->+ | | | |  | | | | | + +----->+ | | | | |  | | | | + +--->
   |  +-+-+-+-+  +-+-+-+-+   |      |  +-+-+-+-+  +-+-+-+-+  |
   |                         |      |                        |
   +-------------------------+      +------------------------+
   |<->|<------>|<------->|<->|<---->|<->|<------>|<------>|<->|<--
   2,3     4        5      6     1    2,3    4        5     6   1 2,3

   1: Output delay              4: Processing delay
   2: Link delay                5: Regulation delay
   3: Frame preemption delay    6: Queuing delay

               Figure 1: Timing model for DetNet or TSN

   In Figure 1, we see two DetNet transit nodes that are connected via
   a link.  In this model, the only queues that we deal with explicitly
   are attached to the output port; other queues are modeled as
   variations in the other delay times.
   (E.g., an input queue could be modeled as either a variation in the
   link delay (2) or in the processing delay (4).)  There are six
   delays that a packet can experience from hop to hop.

   1.  Output delay
       The time taken from the selection of a packet for output from a
       queue to the transmission of the first bit of the packet on the
       physical link.  If the queue is directly attached to the
       physical port, the output delay can be a constant.  But, in many
       implementations, the queuing mechanism in a forwarding ASIC is
       separated from a multi-port MAC/PHY, in a second ASIC, by a
       multiplexed connection.  This causes variations in the output
       delay that are hard for the forwarding node to predict or
       control.

   2.  Link delay
       The time taken from the transmission of the first bit of the
       packet to the reception of the last bit, assuming that the
       transmission is not suspended by a frame preemption event.  This
       delay has two components: the first-bit-out to first-bit-in
       delay, and the first-bit-in to last-bit-in delay that varies
       with packet size.  The former is typically measured by the
       Precision Time Protocol and is constant (see [RFC8655]).
       However, a virtual "link" could exhibit a variable link delay.

   3.  Frame preemption delay
       If the packet is interrupted in order to transmit another packet
       or packets (e.g., [IEEE8023] clause 99 frame preemption), an
       arbitrary delay can result.

   4.  Processing delay
       This delay covers the time from the reception of the last bit of
       the packet to the time the packet is enqueued in the regulator
       (or in the queuing subsystem, if there is no regulation).  This
       delay can be variable, and depends on the details of the
       operation of the forwarding node.

   5.  Regulator delay
       This is the time spent from the insertion of the last bit of a
       packet into a regulation queue until the time the packet is
       declared eligible according to its regulation constraints.
       We assume that this time can be calculated based on the details
       of the regulation policy.  If there is no regulation, this time
       is zero.

   6.  Queuing subsystem delay
       This is the time spent by a packet from being declared eligible
       until being selected for output on the next link.  We assume
       that this time is calculable based on the details of the queuing
       mechanism.  If there is no regulation, this time is measured
       from the insertion of the packet into a queue until it is
       selected for output on the next link.

   Not shown in Figure 1 are the other output queues that we presume
   are also attached to the same output port as the queue shown, and
   against which this shown queue competes for transmission
   opportunities.

   The initial and final measurement point in this analysis (that is,
   the definition of a "hop") is the point at which a packet is
   selected for output.  In general, any queue selection method that is
   suitable for use in a DetNet network includes a detailed
   specification as to exactly when packets are selected for
   transmission.  Any variations in any of the delay times 1-4 result
   in a need for additional buffers in the queue.  If all delays 1-4
   are constant, then any variation in the time at which packets are
   inserted into a queue depends entirely on the timing of packet
   selection in the previous node.  If the delays 1-4 are not constant,
   then additional buffers are required in the queue to absorb these
   variations.  Thus:

   *  Variations in output delay (1) require buffers to absorb that
      variation in the next hop, so the output delay variations of the
      previous hop (on each input port) must be known in order to
      calculate the buffer space required on this hop.

   *  Variations in processing delay (4) require additional output
      buffers in the queues of that same DetNet transit node.
      Depending on the details of the queuing subsystem delay (6)
      calculations, these variations need not be visible outside the
      DetNet transit node.

4.  Computing End-to-end Delay Bounds

4.1.  Non-queuing delay bound

   End-to-end delay bounds can be computed using the delay model in
   Section 3.2.  Here, it is important to be aware that for several
   queuing mechanisms, the end-to-end delay bound is less than the sum
   of the per-hop delay bounds.  An end-to-end delay bound for one
   DetNet flow can be computed as

      end_to_end_delay_bound = non_queuing_delay_bound +
                               queuing_delay_bound

   The two terms in the above formula are computed as follows.

   First, at the h-th hop along the path of this DetNet flow, obtain an
   upper bound per-hop_non_queuing_delay_bound[h] on the sum of the
   bounds over the delays 1,2,3,4 of Figure 1.  These upper bounds are
   expected to depend on the specific technology of the DetNet transit
   node at the h-th hop, but not on the T-SPEC of this DetNet flow
   [RFC9016].  Then set non_queuing_delay_bound = the sum of per-
   hop_non_queuing_delay_bound[h] over all hops h.

   Second, compute queuing_delay_bound as an upper bound to the sum of
   the queuing delays along the path.  The value of queuing_delay_bound
   depends on the T-SPEC of this DetNet flow and possibly of other
   flows in the network, as well as the specifics of the queuing
   mechanisms deployed along the path of this DetNet flow.  The
   computation of queuing_delay_bound is described separately in
   Section 4.2.

4.2.  Queuing delay bound

   For several queuing mechanisms, queuing_delay_bound is less than the
   sum of upper bounds on the queuing delays (5,6) at every hop.  This
   occurs with (1) per-flow queuing, and (2) aggregate queuing with
   regulators, as explained in Section 4.2.1, Section 4.2.2, and
   Section 6.
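   The Section 4.1 composition can be sketched as follows (a minimal
   illustration with made-up numbers; the function name and values are
   hypothetical, not from this document):

```python
def end_to_end_delay_bound(per_hop_non_queuing_bounds, queuing_delay_bound):
    # per_hop_non_queuing_bounds[h] bounds delays 1,2,3,4 of Figure 1 at
    # hop h and does not depend on the flow's T-SPEC.  queuing_delay_bound
    # bounds the total of the queuing delays (5,6) over the whole path and
    # may be smaller than the sum of per-hop queuing bounds.
    return sum(per_hop_non_queuing_bounds) + queuing_delay_bound

# Hypothetical values: three hops with 2 us of non-queuing delay each,
# and a 120 us bound on the total queuing delay along the path.
bound = end_to_end_delay_bound([2e-6, 2e-6, 2e-6], 120e-6)
```

   Only the queuing term changes with the method chosen in Section 4.2;
   the non-queuing term is a property of the node technologies alone.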
   For other queuing mechanisms, the only available value of
   queuing_delay_bound is the sum of the per-hop queuing delay bounds.
   In such cases, the computation of per-hop queuing delay bounds must
   account for the fact that the T-SPEC of a DetNet flow is no longer
   satisfied at the ingress of a hop, since burstiness increases as the
   flow traverses each DetNet transit node.

4.2.1.  Per-flow queuing mechanisms

   With such mechanisms, each flow uses a separate queue inside every
   node.  The service for each queue is abstracted with a guaranteed
   rate and a latency.  For every DetNet flow, a per-node delay bound
   as well as an end-to-end delay bound can be computed from the
   traffic specification of this DetNet flow at its source and from the
   values of rates and latencies at all nodes along its path.  Per-flow
   queuing is used in Guaranteed-Service IntServ.  Details of the
   calculation for Guaranteed-Service IntServ are described in
   Section 6.5.

4.2.2.  Aggregate queuing mechanisms

   With such mechanisms, multiple flows are aggregated into macro-
   flows, and there is one FIFO queue per macro-flow.  A practical
   example is the credit-based shaper defined in section 8.6.8.2 of
   [IEEE8021Q], where a macro-flow is called a "class".  One key issue
   in this context is how to deal with the burstiness cascade:
   individual flows that share a resource dedicated to a macro-flow may
   see their burstiness increase, which may in turn cause increased
   burstiness for other flows downstream of this resource.  Computing
   delay upper bounds for such cases is difficult, and in some
   conditions impossible [CharnyDelay][BennettDelay].  Also, when
   bounds are obtained, they depend on the complete configuration, and
   must be recomputed when one flow is added.  (This is the dynamic
   calculation, Section 3.1.2.)

   A solution to deal with this issue for DetNet flows is to reshape
   them at every hop.
   This can be done with per-flow regulators (e.g., leaky bucket
   shapers), but this requires per-flow queuing and defeats the purpose
   of aggregate queuing.  An alternative is the interleaved regulator,
   which reshapes individual DetNet flows without per-flow queuing
   ([SpechtUBS], [IEEE8021Qcr]).  With an interleaved regulator, the
   packet at the head of the queue is regulated based on its (flow)
   regulation constraints; it is released at the earliest time at which
   this is possible without violating the constraint.  One key feature
   of per-flow and interleaved regulators is that they do not increase
   worst-case latency bounds [LeBoudecTheory].  Specifically, when an
   interleaved regulator is appended to a FIFO subsystem, it does not
   increase the worst-case delay of the latter.

   Figure 2 shows an example of a network with 5 nodes, with the
   aggregate queuing mechanism and interleaved regulators of Figure 1.
   An end-to-end delay bound for DetNet flow f, traversing nodes 1 to
   5, is calculated as follows:

      end_to_end_latency_bound_of_flow_f = C12 + C23 + C34 + S4

   In the above formula, Cij is a bound on the delay of the queuing
   subsystem in node i and the interleaved regulator of node j, and S4
   is a bound on the delay of the queuing subsystem in node 4 for
   DetNet flow f.  In fact, using the delay definitions in Section 3.2,
   Cij is a bound on the sum of the delays 1,2,3,6 of node i and 4,5 of
   node j.  Similarly, S4 is a bound on the sum of the delays 1,2,3,6
   of node 4.  A practical example of the queuing model and delay
   calculation is presented in Section 6.4.
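   With made-up per-segment values (purely illustrative; the document
   does not give numbers), the computation for flow f reduces to a sum
   of the segment bounds:

```python
# Hypothetical per-segment bounds, in seconds.  Cij bounds the delays
# 1,2,3,6 of node i plus 4,5 of node j; S4 bounds the delays 1,2,3,6
# of node 4.  All values are invented for illustration.
C12, C23, C34, S4 = 400e-6, 500e-6, 300e-6, 200e-6
end_to_end_latency_bound_of_flow_f = C12 + C23 + C34 + S4
```

   Note that each Cij straddles two nodes; the regulator delay of node
   j is bounded jointly with the queuing subsystem of node i, which is
   what makes this bound tighter than a plain per-node sum.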
                    f
      ----------------------------->
      +---+   +---+   +---+   +---+   +---+
      | 1 |---| 2 |---| 3 |---| 4 |---| 5 |
      +---+   +---+   +---+   +---+   +---+
           \__C12_/\__C23_/\__C34_/\_S4_/

          Figure 2: End-to-end delay computation example

   REMARK: The end-to-end delay bound calculation provided here gives a
   tighter upper delay bound in comparison with an end-to-end delay
   bound computed by adding the delay bounds of each node in the path
   of a DetNet flow [TSNwithATS].

4.3.  Ingress considerations

   A sender can be a DetNet node which uses exactly the same queuing
   methods as its adjacent DetNet transit node, so that the delay and
   buffer bounds calculations at the first hop are indistinguishable
   from those at a later hop within the DetNet domain.  On the other
   hand, the sender may be DetNet-unaware, in which case some
   conditioning of the DetNet flow may be necessary at the ingress
   DetNet transit node.

   This ingress conditioning typically consists of a FIFO with an
   output regulator that is compatible with the queuing employed by the
   DetNet transit node on its output port(s).  For some queuing
   methods, ingress conditioning simply requires extra buffer space in
   the queuing subsystem.  Ingress conditioning requirements for
   different queuing methods are mentioned in the sections below
   describing those queuing methods.

4.4.  Interspersed DetNet-unaware transit nodes

   It is sometimes desirable to build a network that has both DetNet-
   aware transit nodes and DetNet-unaware transit nodes, and for a
   DetNet flow to traverse an island of DetNet-unaware transit nodes,
   while still allowing the network to offer delay and congestion loss
   guarantees.  This is possible under certain conditions.

   In general, when passing through a DetNet-unaware island, the island
   may cause delay variation in excess of what would be caused by
   DetNet nodes.
   That is, the DetNet flow might be "lumpier" after traversing the
   DetNet-unaware island.  DetNet guarantees for delay and buffer
   requirements can still be calculated and met if and only if the
   following are true:

   1.  The latency variation across the DetNet-unaware island must be
       bounded and calculable.

   2.  An ingress conditioning function (Section 4.3) is required at
       the re-entry to the DetNet-aware domain.  This will, at least,
       require some extra buffering to accommodate the additional delay
       variation, and thus further increases the delay bound.

   The ingress conditioning is exactly the same problem as that of a
   sender at the edge of the DetNet domain.  The requirement for bounds
   on the latency variation across the DetNet-unaware island is
   typically the most difficult to achieve.  Without such a bound, it
   is obvious that DetNet cannot deliver its guarantees, so a DetNet-
   unaware island that cannot offer bounded latency variation cannot be
   used to carry a DetNet flow.

5.  Achieving zero congestion loss

   When the input rate to an output queue exceeds the output rate for a
   sufficient length of time, the queue must overflow.  This is
   congestion loss, and this is what deterministic networking seeks to
   avoid.

   To avoid congestion losses, an upper bound on the backlog present in
   the regulator and queuing subsystem of Figure 1 must be computed
   during resource reservation.  This bound depends on the set of flows
   that use these queues, the details of the specific queuing
   mechanism, and an upper bound on the processing delay (4).  The
   queue must contain the packet in transmission plus all other packets
   that are waiting to be selected for output.  A conservative backlog
   bound, that applies to all systems, can be derived as follows.

   The backlog bound is counted in data units (bytes, or words of
   multiple bytes) that are relevant for buffer allocation.
   For every flow or aggregate of flows, we need one buffer space for
   the packet in transmission, plus space for the packets that are
   waiting to be selected for output.

   Let

   *  total_in_rate be the sum of the line rates of all input ports
      that send traffic to this output port.  The value of
      total_in_rate is in data units (e.g., bytes) per second.

   *  nb_input_ports be the number of input ports that send traffic to
      this output port.

   *  max_packet_length be the maximum packet size for packets that may
      be sent to this output port.  This is counted in data units.

   *  max_delay456 be an upper bound, in seconds, on the sum of the
      processing delay (4) and the queuing delays (5,6) for any packet
      at this output port.

   Then a bound on the backlog of traffic in the queue at this output
   port is

      backlog_bound = nb_input_ports * max_packet_length +
                      total_in_rate * max_delay456

6.  Queuing techniques

   In this section, for simplicity of delay bound computation, we
   assume that the T-SPEC or arrival curve [NetCalBook] for each DetNet
   flow at the source is a leaky bucket.  Also, at each DetNet transit
   node, the service for each queue is abstracted with a minimum
   guaranteed rate and a latency [NetCalBook].

6.1.  Queuing data model

   Sophisticated queuing mechanisms are available in Layer 3 (L3; see,
   e.g., [RFC7806] for an overview).  In general, we assume that "Layer
   3" queues, shapers, meters, etc., are precisely the "regulators"
   shown in Figure 1.  The "queuing subsystems" in this figure are not
   the province solely of bridges; they are an essential part of any
   DetNet transit node.  As illustrated by numerous implementation
   examples, some of the "Layer 3" mechanisms described in documents
   such as [RFC7806] are often integrated, in an implementation, with
   the "Layer 2" mechanisms also implemented in the same node.
   An integrated model is needed in order to successfully predict the
   interactions among the different queuing mechanisms needed in a
   network carrying both DetNet flows and non-DetNet flows.

   Figure 3 shows the general model for the flow of packets through the
   queues of a DetNet transit node.  The DetNet packets are mapped to a
   number of regulators.  Here, we assume that PREOF (Packet
   Replication, Elimination, and Ordering Functions) is performed
   before the DetNet packets enter the regulators.  All packets are
   assigned to a set of queues.  Packets compete for selection to be
   passed to queues in the queuing subsystem.  Packets again are
   selected for output from the queuing subsystem.

                                    |
   +--------------------------------V----------------------------------+
   |                        Queue assignment                           |
   +--+------+----------+---------+-----------+-----+-------+-------+--+
      |      |          |         |           |     |       |       |
   +--V-+ +--V-+     +--V--+   +--V--+     +--V--+  |       |       |
   |Flow| |Flow|     |Flow |   |Flow |     |Flow |  |       |       |
   | 0  | | 1  | ... |  i  |   | i+1 | ... |  n  |  |       |       |
   | reg| | reg|     | reg |   | reg |     | reg |  |       |       |
   +--+-+ +--+-+     +--+--+   +--+--+     +--+--+  |       |       |
      |      |          |         |           |     |       |       |
   +--V------V----------V--+   +--V-----------V--+  |       |       |
   |    Trans. selection   |   |  Trans. select. |  |       |       |
   +----------+------------+   +-----+-----------+  |       |       |
              |                      |              |       |       |
           +--V--+                +--V--+        +--V--+ +--V--+ +--V--+
           | out |                | out |        | out | | out | | out |
           |queue|                |queue|        |queue| |queue| |queue|
           |  1  |                |  2  |        |  3  | |  4  | |  5  |
           +--+--+                +--+--+        +--+--+ +--+--+ +--+--+
              |                      |              |       |       |
   +----------V----------------------V--------------V-------V-------V--+
   |                      Transmission selection                       |
   +---------------------------------+---------------------------------+
                                     |
                                     V

             Figure 3: IEEE 802.1Q Queuing Model: Data flow

   Some relevant mechanisms are hidden in this figure, and are
   performed in the queue boxes:

   *  Discarding packets because a queue is full.
631 * Discarding packets marked "yellow" by a metering function, in 632 preference to discarding "green" packets [RFC2697]. 634 Ideally, neither of these actions is performed on DetNet packets. 635 Full queues for DetNet packets should occur only when a DetNet flow 636 is misbehaving, and the DetNet QoS does not include "yellow" service 637 for packets in excess of the committed rate. 639 The queue assignment function can be quite complex, even in a bridge 640 [IEEE8021Q], since the introduction of per-stream filtering and 641 policing ([IEEE8021Q] clause 8.6.5.1). In addition to the Layer 2 642 priority expressed in the 802.1Q VLAN tag, a DetNet transit node can 643 utilize any of the following information to assign a packet to a 644 particular queue: 646 * Input port. 648 * Selector based on a rotating schedule that starts at regular, 649 time-synchronized intervals and has nanosecond precision. 651 * MAC addresses, VLAN ID, IP addresses, Layer 4 port numbers, DSCP 652 [RFC8939], [RFC8964]. 654 * The queue assignment function can contain metering and policing 655 functions. 657 * MPLS and/or pseudowire labels [RFC6658]. 659 The "Transmission selection" function decides which queue is to 660 transfer its oldest packet to the output port when a transmission 661 opportunity arises. 663 6.2. Frame Preemption 665 In [IEEE8021Q] and [IEEE8023], the transmission of a frame can be 666 interrupted by one or more "express" frames, and then the interrupted 667 frame can continue transmission. Frame preemption is modeled as 668 consisting of two MAC/PHY stacks, one for packets that can be 669 interrupted, and one for packets that can interrupt the interruptible 670 packets. Only one layer of frame preemption is supported -- a 671 transmitter cannot have more than one interrupted frame in progress. 672 DetNet flows typically pass through the interrupting MAC.
For those 673 DetNet flows with a T-SPEC, the latency bound can be calculated by 674 the methods provided in the following sections, which account for 675 the effect of frame preemption, according to the specific queuing 676 mechanism that is used in DetNet nodes. Best-effort queues pass 677 through the interruptible MAC, and can thus be preempted. 679 6.3. Time Aware Shaper 681 In [IEEE8021Q], the notion of time-scheduled queue gates is 682 described in section 8.6.8.4. On each node, the transmission 683 selection for packets is controlled by time-synchronized gates; each 684 output queue is associated with a gate. The gates can be either open 685 or closed. The states of the gates are determined by the gate 686 control list (GCL). The GCL specifies the opening and closing times 687 of the gates. The design of the GCL should satisfy the latency 688 upper bound requirements of all DetNet flows; therefore, the DetNet 689 flows traversing the network have bounded latency, if the traffic and 690 nodes are conformant. 692 It should be noted that scheduled traffic service relies on a 693 synchronized network and coordinated GCL configuration. Synthesis of 694 GCLs on multiple nodes in a network is a scheduling problem that 695 considers all DetNet flows traversing the network, which is a 696 non-deterministic polynomial-time hard (NP-hard) problem 697 [Sch8021Qbv]. Also, at this writing, scheduled traffic service 698 supports no more than eight traffic queues, typically using up to 699 seven priority queues and at least one best-effort queue. 701 6.4. Credit-Based Shaper with Asynchronous Traffic Shaping 703 In the considered queuing model, there are four traffic 704 classes (Definition 3.268 of [IEEE8021Q]): control-data traffic 705 (CDT), class A, class B, and best effort (BE) in decreasing order of 706 priority. Flows of classes A and B are together referred to as AVB 707 flows. This model is a subset of Time-Sensitive Networking as 708 described next.
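As a rough illustration of the class priority ordering above, the following sketch models a strict-priority selector over the four class queues. This is an illustrative model only, not an implementation from [IEEE8021Q]; the names and data structures are ours.

```python
from collections import deque

# Classes in decreasing order of priority, per this section:
# control-data traffic (CDT), class A, class B, then best effort.
PRIORITY_ORDER = ["CDT", "A", "B", "BE"]

def select_next(queues):
    """Strict-priority transmission selection: return the oldest
    packet of the highest-priority non-empty class queue."""
    for cls in PRIORITY_ORDER:
        if queues.get(cls):
            return cls, queues[cls].popleft()
    return None, None  # all queues empty

queues = {cls: deque() for cls in PRIORITY_ORDER}
queues["B"].append("b-pkt-1")
queues["CDT"].append("cdt-pkt-1")
cls, pkt = select_next(queues)  # CDT is selected ahead of class B
```

Under this model, a class-B packet is transmitted only when the CDT and class-A queues are empty, which is why guarantees for AVB traffic require the CDT traffic to be bounded.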
710 Based on the timing model described in Figure 1, the contention 711 occurs only at the output port of a DetNet transit node; therefore, 712 the focus of the rest of this subsection is on the regulator and 713 queuing subsystem in the output port of a DetNet transit node. The 714 input flows are identified using the information in Section 5.1 of 715 [RFC8939]. Then they are aggregated into eight macro flows based on 716 their service requirements; we refer to each macro flow as a class. 717 The output port performs aggregate scheduling with eight queues 718 (queuing subsystems): one for CDT, one for class A flows, one for 719 class B flows, and five for BE traffic denoted as BE0-BE4. The 720 queuing policy for each queuing subsystem is FIFO. In addition, each 721 node output port also performs per-flow regulation for AVB flows 722 using an interleaved regulator (IR), called Asynchronous Traffic 723 Shaper [IEEE8021Qcr]. Thus, at each output port of a node, there is 724 one interleaved regulator per input port and per class; the 725 interleaved regulator is mapped to the regulator depicted in 726 Figure 1. The detailed picture of the scheduling and regulation 727 architecture at a node output port is given by Figure 4. The packets 728 received at a node input port for a given class are enqueued in the 729 respective interleaved regulator at the output port. Then, the 730 packets from all the flows, including CDT and BE flows, are enqueued 731 in the queuing subsystem; there is no regulator for these classes.
733 +--+ +--+ +--+ +--+ 734 | | | | | | | | 735 |IR| |IR| |IR| |IR| 736 | | | | | | | | 737 +-++XXX++-+ +-++XXX++-+ 738 | | | | 739 | | | | 740 +---+ +-v-XXX-v-+ +-v-XXX-v-+ +-----+ +-----+ +-----+ +-----+ +-----+ 741 | | | | | | |Class| |Class| |Class| |Class| |Class| 742 |CDT| | Class A | | Class B | | BE4 | | BE3 | | BE2 | | BE1 | | BE0 | 743 | | | | | | | | | | | | | | | | 744 +-+-+ +----+----+ +----+----+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ 745 | | | | | | | | 746 | +-v-+ +-v-+ | | | | | 747 | |CBS| |CBS| | | | | | 748 | +-+-+ +-+-+ | | | | | 749 | | | | | | | | 750 +-v--------v-----------v---------v-------V-------v-------v-------v--+ 751 | Strict Priority selection | 752 +--------------------------------+----------------------------------+ 753 | 754 V 756 Figure 4: The architecture of an output port inside a relay node with 757 interleaved regulators (IRs) and credit-based shaper (CBS) 759 Each of the queuing subsystems for classes A and B contains a Credit- 760 Based Shaper (CBS). The CBS serves a packet from a class according 761 to the available credit for that class. The credit for each class A 762 or B increases based on the idleslope (the guaranteed rate), and 763 decreases based on the sendslope (typically equal to the difference 764 between the guaranteed and the output link rates), both of which are 765 parameters of the CBS (Section 8.6.8.2 of [IEEE8021Q]). The CDT and 766 BE0-BE4 flows are served by separate queuing subsystems. Then, 767 packets from all flows are served by a transmission selection 768 subsystem that serves packets from each class based on its priority. 769 All subsystems are non-preemptive. Guarantees for AVB traffic can be 770 provided only if CDT traffic is bounded; it is assumed that the CDT 771 traffic has a leaky bucket arrival curve with two parameters, r_h as 772 rate and b_h as bucket size, i.e., the amount of bits entering a node 773 within a time interval t is bounded by r_h * t + b_h.
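For illustration, the leaky bucket arrival-curve constraint just stated (the bits arriving in any interval of length t are bounded by r_h * t + b_h) can be checked over an arrival trace. This is a sketch under our own naming; the functions are not from any standard.

```python
def leaky_bucket_bound(r, b, t):
    """Maximum data a leaky-bucket-constrained source may emit in
    any interval of length t: r * t + b (for CDT, r_h and b_h)."""
    return r * t + b

def conforms(arrivals, r, b):
    """Check a list of (time, size) arrivals against the leaky
    bucket constraint over every interval [s, t]."""
    for i, (s, _) in enumerate(arrivals):
        total = 0
        for t, size in arrivals[i:]:
            total += size
            if total > leaky_bucket_bound(r, b, t - s):
                return False
    return True

# CDT-like traffic with r_h = 100 bits/s, b_h = 50 bits:
ok = conforms([(0.0, 40), (1.0, 40)], r=100, b=50)   # within the curve
bad = conforms([(0.0, 60)], r=100, b=50)             # burst exceeds b_h
```

A burst larger than b_h violates the constraint even at t = 0, which is why the bucket size b_h appears directly in the delay bound terms of Section 6.4.1.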
775 Additionally, it is assumed that the AVB flows are also regulated at 776 their source according to a leaky bucket arrival curve. At the 777 source, the traffic satisfies its regulation constraint, i.e., the 778 delay due to the interleaved regulator at the source is ignored. 780 At each DetNet transit node implementing an interleaved regulator, 781 packets of multiple flows are processed in one FIFO queue; the packet 782 at the head of the queue is regulated based on its leaky bucket 783 parameters; it is released at the earliest time at which this is 784 possible without violating the constraint. 786 The regulation parameters for a flow (leaky bucket rate and bucket 787 size) are the same at its source and at all DetNet transit nodes 788 along its path, provided that all clocks are perfect. However, in 789 reality there is clock nonideality throughout the DetNet domain, 790 even with clock synchronization. This phenomenon causes inaccuracy 791 in the rates configured at the regulators that may lead to network 792 instability. To avoid that, when configuring the regulators, the 793 rates are set to the source rates with some positive margin. 794 [ThomasTime] describes and provides solutions to this issue. 796 6.4.1. Delay Bound Calculation 798 A delay bound of the queuing subsystem ((4) in Figure 1) for an AVB 799 flow of class A or B can be computed if the following condition 800 holds: 802 sum of leaky bucket rates of all flows of this class at this 803 transit node <= R, where R is given below for every class. 805 If the condition holds, the delay bound for a flow of class X (A or 806 B) is d_X, calculated as: 808 d_X = T_X + (b_t_X-L_min_X)/R_X - L_min_X/c 810 where L_min_X is the minimum packet length of class X (A or B); c is 811 the output link transmission rate; b_t_X is the sum of the b term 812 (bucket size) for all the flows of the class X.
Parameters R_X and 813 T_X are calculated as follows for class A and class B, separately: 815 If the flow is of class A: 817 R_A = I_A * (c-r_h)/c 819 T_A = (L_nA + b_h + r_h * L_n/c)/(c-r_h) 821 where I_A is the idle slope for class A; L_nA is the maximum packet 822 length of class B and BE packets; L_n is the maximum packet length of 823 classes A, B, and BE; r_h and b_h are the rate and bucket size of the 824 CDT traffic leaky bucket arrival curve. 826 If the flow is of class B: 828 R_B = I_B * (c-r_h)/c 830 T_B = (L_BE + L_A + L_nA * I_A/(c-I_A) + b_h + r_h * L_n/ 831 c)/(c-r_h) 833 where I_B is the idle slope for class B; L_A is the maximum packet 834 length of class A; L_BE is the maximum packet length of class BE. 836 Then, an end-to-end delay bound of class X (A or B) is calculated by 837 the formula in Section 4.2.2, where for Cij: 839 Cij = d_X 841 More information on the delay analysis in such a DetNet transit node 842 is described in [TSNwithATS]. 844 6.4.2. Flow Admission 846 The delay bound calculation requires some information about each 847 node. For each node, it is required to know the idle slope of the 848 CBS for each class A and B (I_A and I_B), as well as the transmission 849 rate of the output link (c). Besides, it is necessary to have 850 information on each class, i.e., the maximum packet length of classes 851 A, B, and BE. Moreover, the leaky bucket parameters of CDT (r_h, b_h) 852 should be known. To admit flows of classes A and B, their 853 delay requirements should be guaranteed not to be violated. As 854 described in Section 3.1, the two problems, static and dynamic, are 855 addressed separately. In either of the problems, the rate and delay 856 should be guaranteed. Thus, 858 The static admission control: 859 The leaky bucket parameters of all AVB flows are known; 860 therefore, for each AVB flow f, a delay bound can be 861 calculated. The computed delay bound for every AVB flow 862 should not be more than its delay requirement.
Moreover, the 863 sum of the rates of the flows (r_f) should not be more than 864 the rate allocated to each class (R). If these two 865 conditions hold, the configuration is declared admissible. 867 The dynamic admission control: 868 For dynamic admission control, we allocate, for every node 869 and class A or B, a static value for the rate (R) and the 870 maximum bucket size (b_t). In addition, for every node and 871 every class A and B, two counters are maintained: 873 R_acc is equal to the sum of the leaky-bucket rates of all 874 flows of this class already admitted at this node; at all 875 times, we must have: 877 R_acc <= R, (Eq. 1) 879 b_acc is equal to the sum of the bucket sizes of all flows 880 of this class already admitted at this node; at all times, 881 we must have: 883 b_acc <= b_t. (Eq. 2) 885 A new AVB flow is admitted at this node if Eqs. (1) and (2) 886 continue to be satisfied after adding its leaky bucket rate 887 and bucket size to R_acc and b_acc. An AVB flow is admitted 888 in the network if it is admitted at all nodes along its 889 path. When this happens, all variables R_acc and b_acc along 890 its path must be incremented to reflect the addition of the 891 flow. Similarly, when an AVB flow leaves the network, all 892 variables R_acc and b_acc along its path must be decremented 893 to reflect the removal of the flow. 895 The choice of the static values of R and b_t at all nodes and classes 896 must be done in a prior configuration phase; R controls the bandwidth 897 allocated to this class at this node, and b_t affects the delay bound 898 and the buffer requirement. R must satisfy the constraints given in 899 Annex L.1 of [IEEE8021Q]. 901 6.5. Guaranteed-Service IntServ 903 Guaranteed-Service Integrated Services (IntServ) is an architecture 904 that specifies the elements to guarantee quality of service (QoS) on 905 networks [RFC2212].
907 The flow, at the source, has a leaky bucket arrival curve with two 908 parameters, r as rate and b as bucket size, i.e., the amount of bits 909 entering a node within a time interval t is bounded by r * t + b. 911 If resource reservation on a path is applied, a node provides a 912 guaranteed rate R and a maximum service latency of T. This can be 913 interpreted in a way that the bits might have to wait up to T before 914 being served with a rate greater than or equal to R. The delay bound 915 of the flow traversing the node is T + b / R. 917 Consider a Guaranteed-Service IntServ path including a sequence of 918 nodes, where the i-th node provides a guaranteed rate R_i and a 919 maximum service latency of T_i. Then, the end-to-end delay bound for 920 a flow on this path can be calculated as sum(T_i) + b / min(R_i). 922 The provided delay bound is based on a simple case of Guaranteed- 923 Service IntServ where only a guaranteed rate, a maximum service 924 latency, and a leaky bucket arrival curve are available. If more 925 information about the flow is known, e.g., the peak rate, the delay 926 bound computation is more involved; the details are available in 927 [RFC2212] and Section 1.4.1 of [NetCalBook]. 929 6.6. Cyclic Queuing and Forwarding 931 Annex T of [IEEE8021Q] describes Cyclic Queuing and Forwarding (CQF), 932 which provides bounded latency and zero congestion loss using the 933 time-scheduled gates of [IEEE8021Q] section 8.6.8.4. For a given 934 class of DetNet flows, a set of two or more buffers is provided at 935 the output queue layer of Figure 3. A cycle time T_c is configured 936 for each class of DetNet flows c, and all of the buffer sets in a 937 class of DetNet flows swap buffers simultaneously throughout the 938 DetNet domain at that cycle rate, all in phase. In such a mechanism, 939 the regulator mentioned in Figure 1 is not required. 941 In the case of two-buffer CQF, each class of DetNet flows c has two 942 buffers, namely buffer1 and buffer2.
In a cycle (i), when buffer1 943 accumulates received packets from the node's reception ports, buffer2 944 transmits the already stored packets from the previous cycle (i-1). 945 In the next cycle (i+1), buffer2 stores the received packets and 946 buffer1 transmits the packets received in cycle (i). The duration of 947 each cycle is T_c. 949 The per-hop latency is trivially determined by the cycle time T_c: 950 a packet transmitted from a node at cycle (i) is transmitted 951 from the next node at cycle (i+1). Hence, the maximum delay 952 experienced by a given packet is from the beginning of cycle (i) to 953 the end of cycle (i+1), or 2T_c; also, the minimum delay is from the 954 end of cycle (i) to the beginning of cycle (i+1), i.e., zero. Then, 955 if the packet traverses h hops, the maximum delay is: 957 (h+1) T_c 959 and the minimum delay is: 961 (h-1) T_c 963 which gives a latency variation of 2T_c. 965 The cycle length T_c should be carefully chosen; it needs to be large 966 enough to accommodate all the DetNet traffic, plus at least one 967 maximum packet (or fragment) size from lower priority queues, which 968 might be received within a cycle. Also, the value of T_c includes a 969 time interval, called dead time (DT), which is the sum of the delays 970 1,2,3,4 defined in Figure 1. The value of DT guarantees that the 971 last packet of one cycle in a node is fully delivered to a buffer of 972 the next node in the same cycle. A two-buffer CQF is recommended if 973 DT is small compared to T_c. For a large DT, CQF with more buffers 974 can be used, and a cycle identification label can be added to the 975 packets. 977 Ingress conditioning (Section 4.3) may be required if the source of a 978 DetNet flow does not, itself, employ CQF. Since there are no per- 979 flow parameters in the CQF technique, per-hop configuration is not 980 required in the CQF forwarding nodes. 982 7.
Example application on DetNet IP network 984 This section provides an example application of this document on a 985 DetNet-enabled IP network. Consider Figure 5, taken from Section 3 986 of [RFC8939], which shows a simple IP network: 988 * End-system 1 implements Guaranteed-Service IntServ as in 989 Section 6.5 between itself and relay node 1. 991 * Sub-network 1 is a TSN network. The nodes in sub-network 1 992 implement credit-based shapers with asynchronous traffic shaping 993 as in Section 6.4. 995 * Sub-network 2 is a TSN network. The nodes in sub-network 2 996 implement cyclic queuing and forwarding with two buffers as in 997 Section 6.6. 999 * The relay nodes 1 and 2 implement credit-based shapers with 1000 asynchronous traffic shaping as in Section 6.4. They also perform 1001 the aggregation and mapping of IP DetNet flows to TSN streams 1002 (Section 4.4 of [RFC9023]). 1004 DetNet IP Relay Relay DetNet IP 1005 End-System Node 1 Node 2 End-System 1006 1 2 1007 +----------+ +----------+ 1008 | Appl. |<------------ End-to-End Service ----------->| Appl. | 1009 +----------+ ............ ........... +----------+ 1010 | Service |<-: Service :-- DetNet flow --: Service :->| Service | 1011 +----------+ +----------+ +----------+ +----------+ 1012 |Forwarding| |Forwarding| |Forwarding| |Forwarding| 1013 +--------.-+ +-.------.-+ +-.---.----+ +-------.--+ 1014 : Link : \ ,-----. / \ ,-----. / 1015 +......+ +----[ Sub- ]----+ +-[ Sub- ]-+ 1016 [Network] [Network] 1017 `--1--' `--2--' 1019 |<--------------------- DetNet IP --------------------->| 1021 |<--- d1 --->|<--------------- d2_p --------------->|<-- d3_p -->| 1023 Figure 5: A Simple DetNet-Enabled IP Network, taken from RFC8939 1025 Consider a fully centralized control plane for the network of 1026 Figure 5 as described in Section 3.2 of 1027 [I-D.ietf-detnet-controller-plane-framework].
Suppose end-system 1 1028 wants to create a DetNet flow with a traffic specification, destined 1029 to end-system 2, with an end-to-end delay bound requirement D. 1030 Therefore, the control plane receives a flow establishment request 1031 and calculates a number of valid paths through the network 1032 (Section 3.2 of [I-D.ietf-detnet-controller-plane-framework]). To 1033 select a proper path, the control plane needs to compute an 1034 end-to-end delay bound for each selected path p. 1036 The end-to-end delay bound is d1 + d2_p + d3_p, where d1 is the delay 1037 bound from end-system 1 to the entrance of relay node 1, d2_p is the 1038 delay bound for path p from relay node 1 to the entrance of the first 1039 node in sub-network 2, and d3_p is the delay bound of path p from the 1040 first node in sub-network 2 to end-system 2. The computation of d1 1041 is explained in Section 6.5. Since relay node 1, sub-network 1, 1042 and relay node 2 implement aggregate queuing, we use the results in 1043 Section 4.2.2 and Section 6.4 to compute d2_p for the path p. 1044 Finally, d3_p is computed using the delay bound computation of 1045 Section 6.6. Any path p such that d1 + d2_p + d3_p <= D satisfies 1046 the delay bound requirement of the flow. If there is no such path, 1047 the control plane may compute a new set of valid paths and redo the 1048 delay bound computation, or not admit the DetNet flow. 1050 As soon as the control plane selects a path that satisfies the delay 1051 bound constraint, it allocates and reserves the resources in the path 1052 for the DetNet flow (Section 4.2 of 1053 [I-D.ietf-detnet-controller-plane-framework]). 1055 8. Security considerations 1057 Detailed security considerations for DetNet are cataloged in 1058 [RFC9055], and more general security considerations are described in 1059 [RFC8655].
1061 Security aspects that are unique to DetNet are those whose aim is to 1062 provide the specific QoS aspects of DetNet, specifically bounded end- 1063 to-end delivery latency and zero congestion loss. Achieving such 1064 loss rates and bounded latency may not be possible in the face of a 1065 highly capable adversary, such as the one envisioned by the Internet 1066 Threat Model of BCP 72 [RFC3552] that can arbitrarily drop or delay 1067 any or all traffic. In order to present meaningful security 1068 considerations, we consider a somewhat weaker attacker who does not 1069 control the physical links of the DetNet domain but may have the 1070 ability to control a network node within the boundary of the DetNet 1071 domain. 1073 A security consideration for this document is to secure the resource 1074 reservation signaling for DetNet flows. Any forgery or manipulation 1075 of packets during reservation may cause the flow not to be admitted 1076 or to experience a delay bound violation. Security mitigation for 1077 this issue is described in Section 7.6 of [RFC9055]. 1079 9. IANA considerations 1081 This document has no IANA actions. 1083 10. References 1085 10.1. Normative References 1087 [RFC2212] Shenker, S., Partridge, C., and R. Guerin, "Specification 1088 of Guaranteed Quality of Service", RFC 2212, 1089 DOI 10.17487/RFC2212, September 1997, 1090 . 1092 [RFC6658] Bryant, S., Ed., Martini, L., Swallow, G., and A. Malis, 1093 "Packet Pseudowire Encapsulation over an MPLS PSN", 1094 RFC 6658, DOI 10.17487/RFC6658, July 2012, 1095 . 1097 [RFC7806] Baker, F. and R. Pan, "On Queuing, Marking, and Dropping", 1098 RFC 7806, DOI 10.17487/RFC7806, April 2016, 1099 . 1101 [RFC8655] Finn, N., Thubert, P., Varga, B., and J. Farkas, 1102 "Deterministic Networking Architecture", RFC 8655, 1103 DOI 10.17487/RFC8655, October 2019, 1104 . 1106 [RFC8939] Varga, B., Ed., Farkas, J., Berger, L., Fedyk, D., and S.
1107 Bryant, "Deterministic Networking (DetNet) Data Plane: 1108 IP", RFC 8939, DOI 10.17487/RFC8939, November 2020, 1109 . 1111 [RFC8964] Varga, B., Ed., Farkas, J., Berger, L., Malis, A., Bryant, 1112 S., and J. Korhonen, "Deterministic Networking (DetNet) 1113 Data Plane: MPLS", RFC 8964, DOI 10.17487/RFC8964, January 1114 2021, . 1116 [RFC9016] Varga, B., Farkas, J., Cummings, R., Jiang, Y., and D. 1117 Fedyk, "Flow and Service Information Model for 1118 Deterministic Networking (DetNet)", RFC 9016, 1119 DOI 10.17487/RFC9016, March 2021, 1120 . 1122 10.2. Informative References 1124 [BennettDelay] 1125 J.C.R. Bennett, K. Benson, A. Charny, W.F. Courtney, and 1126 J.-Y. Le Boudec, "Delay Jitter Bounds and Packet Scale 1127 Rate Guarantee for Expedited Forwarding", 1128 . 1130 [CharnyDelay] 1131 A. Charny and J.-Y. Le Boudec, "Delay Bounds in a Network 1132 with Aggregate Scheduling", . 1135 [I-D.ietf-detnet-controller-plane-framework] 1136 A. Malis, X. Geng, M. Chen, F. Qin, and B. Varga, 1137 "Deterministic Networking (DetNet) Controller Plane 1138 Framework draft-ietf-detnet-controller-plane-framework- 1139 00", . 1142 [IEEE8021Q] 1143 IEEE 802.1, "IEEE Std 802.1Q-2018: IEEE Standard for Local 1144 and metropolitan area networks - Bridges and Bridged 1145 Networks", 2018, 1146 . 1148 [IEEE8021Qcr] 1149 IEEE 802.1, "IEEE P802.1Qcr: IEEE Draft Standard for Local 1150 and metropolitan area networks - Bridges and Bridged 1151 Networks - Amendment: Asynchronous Traffic Shaping", 2017, 1152 . 1154 [IEEE8021TSN] 1155 IEEE 802.1, "IEEE 802.1 Time-Sensitive Networking (TSN) 1156 Task Group", . 1158 [IEEE8023] IEEE 802.3, "IEEE Std 802.3-2018: IEEE Standard for 1159 Ethernet", 2018, 1160 . 1162 [LeBoudecTheory] 1163 J.-Y. Le Boudec, "A Theory of Traffic Regulators for 1164 Deterministic Networks with Application to Interleaved 1165 Regulators", 1166 . 1168 [NetCalBook] 1169 J.-Y. Le Boudec and P. 
Thiran, "Network calculus: a theory 1170 of deterministic queuing systems for the internet", 2001, 1171 . 1173 [RFC2697] Heinanen, J. and R. Guerin, "A Single Rate Three Color 1174 Marker", RFC 2697, DOI 10.17487/RFC2697, September 1999, 1175 . 1177 [RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC 1178 Text on Security Considerations", BCP 72, RFC 3552, 1179 DOI 10.17487/RFC3552, July 2003, 1180 . 1182 [RFC8578] Grossman, E., Ed., "Deterministic Networking Use Cases", 1183 RFC 8578, DOI 10.17487/RFC8578, May 2019, 1184 . 1186 [RFC9023] Varga, B., Ed., Farkas, J., Malis, A., and S. Bryant, 1187 "Deterministic Networking (DetNet) Data Plane: IP over 1188 IEEE 802.1 Time-Sensitive Networking (TSN)", RFC 9023, 1189 DOI 10.17487/RFC9023, June 2021, 1190 . 1192 [RFC9055] Grossman, E., Ed., Mizrahi, T., and A. Hacker, 1193 "Deterministic Networking (DetNet) Security 1194 Considerations", RFC 9055, DOI 10.17487/RFC9055, June 1195 2021, . 1197 [Sch8021Qbv] 1198 S. Craciunas, R. Oliver, M. Chmelik, and W. Steiner, 1199 "Scheduling Real-Time Communication in IEEE 802.1Qbv Time 1200 Sensitive Networks", 1201 . 1203 [SpechtUBS] 1204 J. Specht and S. Samii, "Urgency-Based Scheduler for Time- 1205 Sensitive Switched Ethernet Networks", 1206 . 1208 [ThomasTime] 1209 L. Thomas and J.-Y. Le Boudec, "On Time Synchronization 1210 Issues in Time-Sensitive Networks with Regulators and 1211 Nonideal Clocks", 1212 . 1214 [TSNwithATS] 1215 E. Mohammadpour, E. Stai, M. Mohiuddin, and J.-Y. Le 1216 Boudec, "End-to-end Latency and Backlog Bounds in Time- 1217 Sensitive Networking with Credit Based Shapers and 1218 Asynchronous Traffic Shaping", 1219 . 1221 Authors' Addresses 1223 Norman Finn 1224 Huawei Technologies Co. 
Ltd 1225 3101 Rio Way 1226 Spring Valley, California 91977 1227 United States of America 1229 Phone: +1 925 980 6430 1230 Email: nfinn@nfinnconsulting.com 1231 Jean-Yves Le Boudec 1232 EPFL 1233 IC Station 14 1234 CH-1015 Lausanne EPFL 1235 Switzerland 1237 Email: jean-yves.leboudec@epfl.ch 1239 Ehsan Mohammadpour 1240 EPFL 1241 IC Station 14 1242 CH-1015 Lausanne EPFL 1243 Switzerland 1245 Email: ehsan.mohammadpour@epfl.ch 1247 Jiayi Zhang 1248 Huawei Technologies Co. Ltd 1249 Q27, No.156 Beiqing Road 1250 Beijing 1251 100095 1252 China 1254 Email: zhangjiayi11@huawei.com 1256 Balázs Varga 1257 Ericsson 1258 Budapest 1259 Konyves Kálmán krt. 11/B 1260 1097 1261 Hungary 1263 Email: balazs.a.varga@ericsson.com 1265 János Farkas 1266 Ericsson 1267 Budapest 1268 Konyves Kálmán krt. 11/B 1269 1097 1270 Hungary 1272 Email: janos.farkas@ericsson.com