DetNet                                                           N. Finn
Internet-Draft                               Huawei Technologies Co. Ltd
Intended status: Informational                           J-Y. Le Boudec
Expires: 10 October 2022                                 E. Mohammadpour
                                                                    EPFL
                                                                J. Zhang
                                             Huawei Technologies Co. Ltd
                                                                B. Varga
                                                                Ericsson
                                                            8 April 2022

                         DetNet Bounded Latency
                  draft-ietf-detnet-bounded-latency-10

Abstract

   This document presents a timing model for sources, destinations, and
   DetNet transit nodes.  Using the model, it provides a methodology to
   compute end-to-end latency and backlog bounds for various queuing
   methods.  The methodology can be used by the management and control
   planes and by resource reservation algorithms to provide bounded
   latency and zero congestion loss for the DetNet service.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 10 October 2022.

Copyright Notice

   Copyright (c) 2022 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Revised
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Revised BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology and Definitions
   3.  DetNet bounded latency model
     3.1.  Flow admission
       3.1.1.  Static latency-calculation
       3.1.2.  Dynamic latency-calculation
     3.2.  Relay node model
   4.  Computing End-to-end Delay Bounds
     4.1.  Non-queuing delay bound
     4.2.  Queuing delay bound
       4.2.1.  Per-flow queuing mechanisms
       4.2.2.  Aggregate queuing mechanisms
     4.3.  Ingress considerations
     4.4.  Interspersed DetNet-unaware transit nodes
   5.  Achieving zero congestion loss
   6.  Queuing techniques
     6.1.  Queuing data model
     6.2.  Frame Preemption
     6.3.  Time-Aware Shaper
     6.4.  Credit-Based Shaper with Asynchronous Traffic Shaping
       6.4.1.  Delay Bound Calculation
       6.4.2.  Flow Admission
     6.5.  Guaranteed-Service IntServ
     6.6.  Cyclic Queuing and Forwarding
   7.  Example application on DetNet IP network
   8.  Security considerations
   9.  IANA considerations
   10. Acknowledgement
   11. Contributors
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Authors' Addresses

1.  Introduction

   The ability for IETF Deterministic Networking (DetNet) or IEEE 802.1
   Time-Sensitive Networking [IEEE8021TSN] to provide the DetNet
   services of bounded latency and zero congestion loss depends upon:

   A) configuring and allocating network resources for the exclusive
      use of DetNet flows;

   B) identifying, in the data plane, the resources to be utilized by
      any given packet;

   C) the detailed behavior of those resources, especially transmission
      queue selection, so that latency bounds can be reliably assured.

   As explained in [RFC8655], DetNet flows are notably characterized
   by:

   1. a maximum bandwidth, guaranteed either by the transmitter or by
      strict input metering; and

   2. a requirement for a guaranteed worst-case end-to-end latency.

   That latency guarantee, in turn, provides the opportunity for the
   network to supply enough buffer space to guarantee zero congestion
   loss.  It is assumed in this document that the paths of DetNet flows
   are fixed.
   Before the transmission of a DetNet flow, it is possible to
   calculate end-to-end latency bounds and the amount of buffer space
   required at each hop to ensure zero congestion loss; this can be
   used by the applications identified in [RFC8578].

   This document presents a timing model for sources, destinations, and
   DetNet transit nodes; using this model, it provides a methodology to
   compute end-to-end latency and backlog bounds for various queuing
   mechanisms that can be used by the management and control planes to
   provide DetNet qualities of service.  The methodology used in this
   document accounts for the possibility of packet reordering within a
   DetNet node.  Bounds on the amount of packet reordering are out of
   the scope of this document and can be found in
   [PacketReorderingBounds].  Moreover, this document references
   specific queuing mechanisms, mentioned in [RFC8655], as proofs of
   concept that can be used to control packet transmission at each
   output port and achieve the DetNet quality of service.

   Using the model presented in this document, it is possible for an
   implementer, user, or standards development organization to select a
   set of queuing mechanisms for each device in a DetNet network, and
   to select a resource reservation algorithm for that network, so that
   those elements can work together to provide the DetNet service.
   Section 7 provides an example application of the timing model
   introduced in this document on a DetNet IP network with a
   combination of different queuing mechanisms.

   This document does not specify any resource reservation protocol or
   control plane function, and it does not describe all of the
   requirements for that protocol or control plane function.  It does
   describe requirements for such resource reservation methods, and for
   queuing mechanisms that, if met, will enable them to work together.

2.  Terminology and Definitions

   This document uses the terms defined in [RFC8655].  Moreover, the
   following terms are used in this document:

   T-SPEC
      TrafficSpecification as defined in Section 5.5 of [RFC9016].

   arrival curve
      An arrival curve function alpha(t) is an upper bound on the
      number of bits seen at an observation point within any time
      interval t.

   CQF
      Cyclic Queuing and Forwarding.

   CBS
      Credit-Based Shaper.

   TSN
      Time-Sensitive Networking.

   PREOF
      A collective name for Packet Replication, Elimination, and
      Ordering Functions.

   Packet Ordering Function (POF)
      A function that reorders packets within a DetNet flow that are
      received out of order.  This function can be implemented by a
      DetNet edge node, a DetNet relay node, or an end system.

3.  DetNet bounded latency model

3.1.  Flow admission

   This document assumes that the following paradigm is used to admit
   DetNet flows:

   1. Perform any configuration required by the DetNet transit nodes
      in the network for aggregates of DetNet flows.  This
      configuration is done beforehand, and not tied to any particular
      DetNet flow.

   2. Characterize the new DetNet flow, particularly in terms of
      required bandwidth.

   3. Establish the path that the DetNet flow will take through the
      network from the source to the destination(s).  This can be a
      point-to-point or a point-to-multipoint path.

   4. Compute the worst-case end-to-end latency for the DetNet flow,
      using one of the methods below (Section 3.1.1, Section 3.1.2).
      In the process, determine whether sufficient resources are
      available for the DetNet flow to guarantee the required latency
      and to provide zero congestion loss.

   5. Assuming that the resources are available, commit those
      resources to the DetNet flow.  This may or may not require
      adjusting the parameters that control the filtering and/or
      queuing mechanisms at each hop along the DetNet flow's path.
      (A sketch of the overall procedure follows this list.)
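   For illustration only, the paradigm above can be expressed as a
   small control-plane sketch.  This is not a specified protocol or
   API; all of the helper names (compute_path, delay_and_backlog,
   commit) are hypothetical placeholders:

      # Hypothetical admission procedure following steps 3-5 above.
      def admit_flow(network, tspec, src, dst, required_latency):
          path = network.compute_path(src, dst)               # step 3
          bound, buffers = network.delay_and_backlog(path, tspec)  # step 4
          if bound is None or bound > required_latency:
              return None    # reject, or backtrack and try another path
          network.commit(path, tspec, buffers)                # step 5
          return path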
   This paradigm can be implemented using peer-to-peer protocols or
   using a central controller.  In some situations, a lack of resources
   can require backtracking and recursing through the above list.

   Issues such as service preemption of a DetNet flow in favor of
   another, when resources are scarce, are not considered here.  Also
   not addressed is the question of how to choose the path to be taken
   by a DetNet flow.

3.1.1.  Static latency-calculation

   The static problem:
      Given a network and a set of DetNet flows, compute an end-to-end
      latency bound (if computable) for each DetNet flow, and compute
      the resources, particularly buffer space, required in each
      DetNet transit node to achieve zero congestion loss.

   In this calculation, all of the DetNet flows are known before the
   calculation commences.  This problem is of interest to relatively
   static networks, or static parts of larger networks.  It provides
   bounds on latency and buffer size.  The calculations can be
   extended to provide global optimizations, such as altering the path
   of one DetNet flow in order to make resources available to another
   DetNet flow with tighter constraints.

   This calculation may be more difficult to perform than the dynamic
   calculation (Section 3.1.2), because the DetNet flows passing
   through one port on a DetNet transit node affect each other's
   latency.  The effects can even be circular, from a node A to B to C
   and back to A.  On the other hand, the static calculation can often
   accommodate queuing methods, such as transmission selection by
   strict priority, that are unsuitable for the dynamic calculation.

3.1.2.  Dynamic latency-calculation

   The dynamic problem:
      Given a network whose maximum capacity for DetNet flows is
      bounded by a set of static configuration parameters applied to
      the DetNet transit nodes, and given just one DetNet flow,
      compute the worst-case end-to-end latency that can be
      experienced by that flow, no matter what other DetNet flows
      (within the network's configured parameters) might be created or
      deleted in the future.  Also, compute the resources,
      particularly buffer space, required in each DetNet transit node
      to achieve zero congestion loss.

   This calculation is dynamic, in the sense that DetNet flows can be
   added or deleted at any time, with a minimum of computation effort,
   and without affecting the guarantees already given to other DetNet
   flows.

   Dynamic latency-calculation can be based on the static calculation
   described in Section 3.1.1; when a new DetNet flow is created or
   deleted, the entire calculation for all DetNet flows is repeated.
   If an already-established DetNet flow would be pushed beyond its
   latency requirements by the new DetNet flow request, then the new
   DetNet flow request can be refused, or some other suitable action
   taken.

   The choice of queuing methods is critical to the applicability of
   the dynamic calculation.
   Some queuing methods (e.g., CQF, Section 6.6) make it easy to
   configure bounds on the network's capacity, and to make independent
   calculations for each DetNet flow.  Other queuing methods (e.g.,
   strict priority with the credit-based shaper defined in [IEEE8021Q]
   section 8.6.8.2) can be used for dynamic DetNet flow creation, but
   yield poorer latency and buffer space guarantees than when that
   same queuing method is used for static DetNet flow creation
   (Section 3.1.1).

3.2.  Relay node model

   A model for the operation of a DetNet transit node is required, in
   order to define the latency and buffer calculations.  In Figure 1,
   we see a breakdown of the per-hop latency experienced by a packet
   passing through a DetNet transit node, in terms that are suitable
   for computing both hop-by-hop latency and per-hop buffer
   requirements.

         DetNet transit node A            DetNet transit node B
      +-------------------------+      +------------------------+
      |              Queuing    |      |             Queuing    |
      | Regulator   subsystem   |      | Regulator  subsystem   |
      | +-+-+-+-+   +-+-+-+-+   |      | +-+-+-+-+  +-+-+-+-+   |
    ->+ | | | | |   | | | | | + +----->+ | | | | |  | | | | | + +--->
      | +-+-+-+-+   +-+-+-+-+   |      | +-+-+-+-+  +-+-+-+-+   |
      |                         |      |                        |
      +-------------------------+      +------------------------+
      |<->|<------>|<-------->|<->|<-->|<->|<------>|<------->|<->|<--
      2,3     4         5      6    1  2,3     4         5     6   1 2,3

         1: Output delay            4: Processing delay
         2: Link delay              5: Regulation delay
         3: Frame preemption delay  6: Queuing delay

                Figure 1: Timing model for DetNet or TSN

   In Figure 1, we see two DetNet transit nodes that are connected via
   a link.  In this model, the only queues that we deal with
   explicitly are attached to the output port; other queues are
   modeled as variations in the other delay times (e.g., an input
   queue could be modeled as either a variation in the link delay (2)
   or the processing delay (4)).  There are six delays that a packet
   can experience from hop to hop:

   1. Output delay
      The time taken from the selection of a packet for output from a
      queue to the transmission of the first bit of the packet on the
      physical link.  If the queue is directly attached to the
      physical port, output delay can be a constant.  But, in many
      implementations, the queuing mechanism in a forwarding ASIC is
      separated from a multi-port MAC/PHY, in a second ASIC, by a
      multiplexed connection.  This causes variations in the output
      delay that are hard for the forwarding node to predict or
      control.

   2. Link delay
      The time taken from the transmission of the first bit of the
      packet to the reception of the last bit, assuming that the
      transmission is not suspended by a frame preemption event.  This
      delay has two components: the first-bit-out to first-bit-in
      delay, and the first-bit-in to last-bit-in delay that varies
      with packet size.  The former is typically measured by the
      Precision Time Protocol and is constant (see [RFC8655]).
      However, a virtual "link" could exhibit a variable link delay.

   3. Frame preemption delay
      If the packet is interrupted in order to transmit another packet
      or packets (e.g., [IEEE8023] clause 99 frame preemption), an
      arbitrary delay can result.

   4. Processing delay
      This delay covers the time from the reception of the last bit of
      the packet to the time the packet is enqueued in the regulator
      (or in the queuing subsystem, if there is no regulator), as
      shown in Figure 1.
      This delay can be variable, and depends on the details of the
      operation of the forwarding node.

   5. Regulator delay
      A regulator, also known as a shaper in [RFC2475], delays some or
      all of the packets in a traffic stream in order to bring the
      stream into compliance with an arrival curve; an arrival curve
      'alpha(t)' is an upper bound on the number of bits observed
      within any interval t.  The regulator delay is the time spent
      from the insertion of the last bit of a packet into a regulation
      queue until the time the packet is declared eligible according
      to its regulation constraints.  We assume that this time can be
      calculated based on the details of the regulation policy.  If
      there is no regulation, this time is zero.

   6. Queuing subsystem delay
      This is the time spent for a packet from being declared eligible
      until being selected for output on the next link.  We assume
      that this time is calculable based on the details of the queuing
      mechanism.  If there is no regulation, this time is from the
      insertion of the packet into a queue until it is selected for
      output on the next link.

   Not shown in Figure 1 are the other output queues that we presume
   are also attached to that same output port as the queue shown, and
   against which this shown queue competes for transmission
   opportunities.

   In this analysis, the measurement is from the point at which a
   packet is selected for output in a node to the point at which it is
   selected for output in the next downstream node (that is the
   definition of a "hop").  In general, any queue selection method
   that is suitable for use in a DetNet network includes a detailed
   specification as to exactly when packets are selected for
   transmission.  Any variations in any of the delay times 1-4 result
   in a need for additional buffers in the queue.  If all delays 1-4
   are constant, then any variation in the time at which packets are
   inserted into a queue depends entirely on the timing of packet
   selection in the previous node.  If the delays 1-4 are not
   constant, then additional buffers are required in the queue to
   absorb these variations.  Thus:

   *  Variations in output delay (1) require buffers to absorb that
      variation in the next hop, so the output delay variations of the
      previous hop (on each input port) must be known in order to
      calculate the buffer space required on this hop.

   *  Variations in processing delay (4) require additional output
      buffers in the queues of that same DetNet transit node.
      Depending on the details of the queuing subsystem delay (6)
      calculations, these variations need not be visible outside the
      DetNet transit node.

4.  Computing End-to-end Delay Bounds

4.1.  Non-queuing delay bound

   End-to-end latency bounds can be computed using the delay model in
   Section 3.2.  Here, it is important to be aware that for several
   queuing mechanisms, the end-to-end latency bound is less than the
   sum of the per-hop latency bounds.  An end-to-end latency bound for
   one DetNet flow can be computed as

      end_to_end_delay_bound = non_queuing_delay_bound +
                               queuing_delay_bound

   The two terms in the above formula are computed as follows.

   First, at the h-th hop along the path of this DetNet flow, obtain
   an upper bound per-hop_non_queuing_delay_bound[h] on the sum of the
   bounds over the delays 1,2,3,4 of Figure 1.  These upper bounds are
   expected to depend on the specific technology of the DetNet transit
   node at the h-th hop but not on the T-SPEC of this DetNet flow
   [RFC9016].  Then set non_queuing_delay_bound = the sum of
   per-hop_non_queuing_delay_bound[h] over all hops h.

   Second, compute queuing_delay_bound as an upper bound to the sum of
   the queuing delays along the path.  The value of
   queuing_delay_bound depends on the information on the arrival curve
   of this DetNet flow and possibly of other flows in the network, as
   well as the specifics of the queuing mechanisms deployed along the
   path of this DetNet flow.  Note that the arrival curve of a DetNet
   flow at its source is directly given by the T-SPEC of the flow.
   The computation of queuing_delay_bound is described in Section 4.2.
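   For illustration, the decomposition above can be written as a short
   sketch; this is a minimal restatement of the formula, assuming the
   per-hop non-queuing bounds and the queuing bound have already been
   obtained by the methods of this section and Section 4.2:

      # Sketch: end-to-end bound as the sum of a non-queuing part,
      # accumulated per hop, and a path-wide queuing part.
      def end_to_end_delay_bound(per_hop_non_queuing_bounds,
                                 queuing_delay_bound):
          # per_hop_non_queuing_bounds[h] bounds delays 1,2,3,4 at hop h
          non_queuing_delay_bound = sum(per_hop_non_queuing_bounds)
          return non_queuing_delay_bound + queuing_delay_bound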
4.2.  Queuing delay bound

   For several queuing mechanisms, queuing_delay_bound is less than
   the sum of upper bounds on the queuing delays (5,6) at every hop.
   This occurs with (1) per-flow queuing, and (2) aggregate queuing
   with regulators, as explained in Section 4.2.1, Section 4.2.2, and
   Section 6.  For other queuing mechanisms, the only available value
   of queuing_delay_bound is the sum of the per-hop queuing delay
   bounds.

   The computation of per-hop queuing delay bounds must account for
   the fact that the arrival curve of a DetNet flow is no longer
   satisfied at the ingress of a hop, since burstiness increases as
   the flow traverses DetNet transit nodes.  If a regulator is placed
   at a hop, an arrival curve of a DetNet flow at the entrance of the
   queuing subsystem of this hop is the one configured at the
   regulator (also called the shaping curve in [NetCalBook]);
   otherwise, an arrival curve of the flow can be derived using the
   delay-jitter of the flow from the last regulation point (the last
   regulator in the path of the flow if there is any, otherwise the
   source of the flow) to the ingress of the hop.  More formally,
   assume a DetNet flow has an arrival curve at the last regulation
   point equal to 'alpha(t)', and the delay-jitter from the last
   regulation point to the ingress of the hop is 'V'.  Then, the
   arrival curve at the ingress of the hop is 'alpha(t+V)'.

   For example, consider a DetNet flow with T-SPEC "Interval: tau,
   MaxPacketsPerInterval: K, MaxPayloadSize: L" at the source.  Then,
   a leaky-bucket arrival curve for such a flow at the source is
   alpha(t) = r * t + b for t > 0, and alpha(0) = 0, where r is the
   rate and b is the bucket size, computed as

      r = K * (L+L') / tau,

      b = K * (L+L').

   where L' is the size of any added networking technology-specific
   encapsulation (e.g., MPLS label(s), UDP, and IP headers).  Now, if
   the flow has delay-jitter of 'V' from the last regulation point to
   the ingress of a hop, an arrival curve at this point is
   r * t + b + r * V, implying that the burstiness is increased by
   r*V.  More detailed information on arrival curves is available in
   [NetCalBook].
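   As a minimal illustration of the example above (not part of any
   specified algorithm), the leaky-bucket parameters and the
   propagated arrival curve can be computed as follows:

      # Leaky-bucket arrival curve (r, b) at the source for the T-SPEC
      # "Interval: tau, MaxPacketsPerInterval: K, MaxPayloadSize: L",
      # with L_prime data units of added per-packet encapsulation.
      def leaky_bucket_from_tspec(tau, K, L, L_prime):
          r = K * (L + L_prime) / tau    # rate
          b = K * (L + L_prime)          # bucket size
          return r, b

      # Arrival curve at a hop whose delay-jitter from the last
      # regulation point is V: alpha(t+V) = r*(t+V) + b for t > 0,
      # i.e., the burstiness grows from b to b + r*V.
      def arrival_curve_at_hop(r, b, V, t):
          return r * (t + V) + b if t > 0 else 0.0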
4.2.1.  Per-flow queuing mechanisms

   With such mechanisms, each flow uses a separate queue inside every
   node.  The service for each queue is abstracted with a guaranteed
   rate and a latency.  For every DetNet flow, a per-node latency
   bound as well as an end-to-end latency bound can be computed from
   the traffic specification of this DetNet flow at its source and
   from the values of rates and latencies at all nodes along its path.
   An instance of per-flow queuing is IntServ's Guaranteed-Service,
   for which the details of latency bound calculation are presented in
   Section 6.5.

4.2.2.  Aggregate queuing mechanisms

   With such mechanisms, multiple flows are aggregated into
   macro-flows and there is one FIFO queue per macro-flow.  A
   practical example is the credit-based shaper defined in section
   8.6.8.2 of [IEEE8021Q], where a macro-flow is called a "class".
   One key issue in this context is how to deal with the burstiness
   cascade: individual flows that share a resource dedicated to a
   macro-flow may see their burstiness increase, which may in turn
   cause increased burstiness for other flows downstream of this
   resource.  Computing delay upper bounds for such cases is
   difficult, and in some conditions impossible
   [CharnyDelay][BennettDelay].  Also, when bounds are obtained, they
   depend on the complete configuration, and must be recomputed when
   one flow is added (the dynamic calculation, Section 3.1.2).

   A solution to deal with this issue for DetNet flows is to reshape
   them at every hop.  This can be done with per-flow regulators
   (e.g., leaky-bucket shapers), but this requires per-flow queuing
   and defeats the purpose of aggregate queuing.  An alternative is
   the interleaved regulator, which reshapes individual DetNet flows
   without per-flow queuing ([SpechtUBS], [IEEE8021Qcr]).  With an
   interleaved regulator, the packet at the head of the queue is
   regulated based on its (flow) regulation constraints; it is
   released at the earliest time at which this is possible without
   violating the constraint.  One key feature of per-flow and
   interleaved regulators is that they do not increase worst-case
   latency bounds [LeBoudecTheory].  Specifically, when an interleaved
   regulator is appended to a FIFO subsystem, it does not increase the
   worst-case delay of the latter; in Figure 1, when the order of
   packets from the output of the queuing subsystem at node A to the
   entrance of the regulator at node B is preserved, the regulator
   does not increase the worst-case latency bounds; this is made
   possible if all the systems are FIFO or a DetNet packet-ordering
   function (POF) is implemented just before the regulator.  This
   property does not hold if packet reordering occurs from the output
   of a queuing subsystem to the entrance of the next downstream
   interleaved regulator, e.g., at a non-FIFO switching fabric.

   Figure 2 shows an example of a network with 5 nodes, an aggregate
   queuing mechanism, and interleaved regulators as in Figure 1.  An
   end-to-end delay bound for DetNet flow f, traversing nodes 1 to 5,
   is calculated as follows:

      end_to_end_latency_bound_of_flow_f = C12 + C23 + C34 + S4

   In the above formula, Cij is a bound on the delay of the queuing
   subsystem in node i and the interleaved regulator of node j, and S4
   is a bound on the delay of the queuing subsystem in node 4 for
   DetNet flow f.  In fact, using the delay definitions in
   Section 3.2, Cij is a bound on the sum of the delays 1,2,3,6 of
   node i and 4,5 of node j.  Similarly, S4 is a bound on the sum of
   the delays 1,2,3,6 of node 4.  A practical example of the queuing
   model and delay calculation is presented in Section 6.4.

                  f
      ----------------------------->
      +---+   +---+   +---+   +---+   +---+
      | 1 |---| 2 |---| 3 |---| 4 |---| 5 |
      +---+   +---+   +---+   +---+   +---+
          \__C12_/\__C23_/\__C34_/\_S4_/

            Figure 2: End-to-end delay computation example

   REMARK: If packet reordering does not occur, the end-to-end latency
   bound calculation provided here gives a tighter latency upper bound
   than would be obtained by adding the latency bounds of each node in
   the path of a DetNet flow [TSNwithATS].
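   For illustration, the Figure 2 computation can be sketched as
   follows; this is a direct restatement of the formula above, under
   the assumption that the per-segment bounds Cij and S4 have already
   been obtained for the deployed queuing mechanism:

      # C is the list of combined bounds [C12, C23, C34], where each
      # Cij covers delays 1,2,3,6 of node i and 4,5 of node j; S_last
      # (here S4) covers delays 1,2,3,6 of the last node on the path.
      def e2e_bound_aggregate_with_regulators(C, S_last):
          return sum(C) + S_last

      # For Figure 2:
      #   e2e_bound_aggregate_with_regulators([C12, C23, C34], S4)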
4.3.  Ingress considerations

   A sender can be a DetNet node which uses exactly the same queuing
   methods as its adjacent DetNet transit node, so that the latency
   and buffer bounds calculations at the first hop are
   indistinguishable from those at a later hop within the DetNet
   domain.  On the other hand, the sender may be DetNet-unaware, in
   which case some conditioning of the DetNet flow may be necessary at
   the ingress DetNet transit node.

   This ingress conditioning typically consists of a FIFO with an
   output regulator that is compatible with the queuing employed by
   the DetNet transit node on its output port(s).  For some queuing
   methods, this simply requires added buffer space in the queuing
   subsystem.  Ingress conditioning requirements for different queuing
   methods are mentioned in the sections below describing those
   queuing methods.

4.4.  Interspersed DetNet-unaware transit nodes

   It is sometimes desirable to build a network that has both
   DetNet-aware transit nodes and DetNet-unaware transit nodes, and
   for a DetNet flow to traverse an island of DetNet-unaware transit
   nodes, while still allowing the network to offer delay and
   congestion loss guarantees.  This is possible under certain
   conditions.

   In general, when passing through a DetNet-unaware island, the
   island may cause delay variation in excess of what would be caused
   by DetNet nodes.  That is, the DetNet flow might be "lumpier" after
   traversing the DetNet-unaware island.  DetNet guarantees for delay
   and buffer requirements can still be calculated and met if and only
   if the following are true:

   1. The latency variation across the DetNet-unaware island must be
      bounded and calculable.

   2. An ingress conditioning function (Section 4.3) is required at
      the re-entry to the DetNet-aware domain.  This will, at least,
      require some extra buffering to accommodate the additional delay
      variation, and thus further increases the latency bound.

   The ingress conditioning is exactly the same problem as that of a
   sender at the edge of the DetNet domain.  The requirement for
   bounds on the latency variation across the DetNet-unaware island is
   typically the most difficult to achieve.  Without such a bound, it
   is obvious that DetNet cannot deliver its guarantees, so a
   DetNet-unaware island that cannot offer bounded latency variation
   cannot be used to carry a DetNet flow.

5.  Achieving zero congestion loss

   When the input rate to an output queue exceeds the output rate for
   a sufficient length of time, the queue must overflow.  This is
   congestion loss, and this is what deterministic networking seeks to
   avoid.

   To avoid congestion losses, an upper bound on the backlog present
   in the regulator and queuing subsystem of Figure 1 must be computed
   during resource reservation.
   This bound depends on the set of flows that use these queues, the
   details of the specific queuing mechanism, and an upper bound on
   the processing delay (4).  The queue must contain the packet in
   transmission plus all other packets that are waiting to be selected
   for output.  A conservative backlog bound, which applies to all
   systems, can be derived as follows.

   The backlog bound is counted in data units (bytes, or words of
   multiple bytes) that are relevant for buffer allocation.  For every
   flow or aggregate of flows, we need one buffer space for the packet
   in transmission, plus space for the packets that are waiting to be
   selected for output.

   Let

   *  total_in_rate be the sum of the line rates of all input ports
      that send traffic to this output port.  The value of
      total_in_rate is in data units (e.g., bytes) per second.

   *  nb_input_ports be the number of input ports that send traffic to
      this output port.

   *  max_packet_length be the maximum packet size for packets that
      may be sent to this output port.  This is counted in data units.

   *  max_delay456 be an upper bound, in seconds, on the sum of the
      processing delay (4) and the queuing delays (5,6) for any packet
      at this output port.

   Then a bound on the backlog of traffic in the queue at this output
   port is

      backlog_bound = (nb_input_ports * max_packet_length) +
                      (total_in_rate * max_delay456)

   The above bound covers the backlog caused by the traffic entering
   the queue from the input ports of a DetNet node.  If the DetNet
   node also generates packets (e.g., creation of new packets,
   replication of arriving packets), the bound must accordingly
   incorporate the introduced backlog.
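   For illustration, the conservative bound above translates directly
   into a short helper; this is only a restatement of the formula,
   with all inputs assumed to be known from the node configuration:

      # Conservative per-output-port backlog bound, in data units
      # (e.g., bytes).  total_in_rate is in data units per second;
      # max_delay456 bounds the sum of delays 4, 5, and 6 in seconds.
      def backlog_bound(nb_input_ports, max_packet_length,
                        total_in_rate, max_delay456):
          return (nb_input_ports * max_packet_length
                  + total_in_rate * max_delay456)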
6.  Queuing techniques

   In this section, we present a general queuing data model as well as
   some examples of queuing mechanisms.  For simplicity of latency
   bound computation, we assume a leaky-bucket arrival curve for each
   DetNet flow at the source.  Also, at each DetNet transit node, the
   service for each queue is abstracted with a minimum guaranteed rate
   and a latency [NetCalBook].

6.1.  Queuing data model

   Sophisticated queuing mechanisms are available in Layer 3 (L3; see,
   e.g., [RFC7806] for an overview).  In general, we assume that
   "Layer 3" queues, shapers, meters, etc., are precisely the
   "regulators" shown in Figure 1.  The "queuing subsystems" in this
   figure are FIFO.  They are not the province solely of bridges; they
   are an essential part of any DetNet transit node.  As illustrated
   by numerous implementation examples, some of the "Layer 3"
   mechanisms described in documents such as [RFC7806] are often
   integrated, in an implementation, with the "Layer 2" mechanisms
   also implemented in the same node.  An integrated model is needed
   in order to successfully predict the interactions among the
   different queuing mechanisms needed in a network carrying both
   DetNet flows and non-DetNet flows.

   Figure 3 shows the general model for the flow of packets through
   the queues of a DetNet transit node.  The DetNet packets are mapped
   to a number of regulators.  Here, we assume that the PREOF (Packet
   Replication, Elimination, and Ordering Functions) are performed
   before the DetNet packets enter the regulators.  All packets are
   assigned to a set of queues.  Packets compete for the selection to
   be passed to queues in the queuing subsystem.  Packets again are
   selected for output from the queuing subsystem.

                                        |
   +------------------------------------V------------------------------+
   |                          Queue assignment                          |
   +--+------+----------+---------+-----------+------+-------+------+--+
      |      |          |         |           |      |       |      |
   +--V-+ +--V-+     +--V--+   +--V--+     +--V--+   |       |      |
   |Flow| |Flow|     |Flow |   |Flow |     |Flow |   |       |      |
   | 0  | | 1  | ... |  i  |   | i+1 | ... |  n  |   |       |      |
   | reg| | reg|     | reg |   | reg |     | reg |   |       |      |
   +--+-+ +--+-+     +--+--+   +--+--+     +--+--+   |       |      |
      |      |          |         |           |      |       |      |
   +--V------V----------V--+   +--V-----------V--+   |       |      |
   |    Trans. selection   |   |  Trans. select. |   |       |      |
   +----------+------------+   +-----+-----------+   |       |      |
              |                      |               |       |      |
           +--V--+                +--V--+         +--V--+ +--V--+ +--V--+
           | out |                | out |         | out | | out | | out |
           |queue|                |queue|         |queue| |queue| |queue|
           |  1  |                |  2  |         |  3  | |  4  | |  5  |
           +--+--+                +--+--+         +--+--+ +--+--+ +--+--+
              |                      |               |       |      |
   +----------V----------------------V---------------V-------V------V--+
   |                       Transmission selection                      |
   +------------------------------------+------------------------------+
                                        |
                                        V

              Figure 3: IEEE 802.1Q Queuing Model: Data flow

   Some relevant mechanisms are hidden in this figure, and are
   performed in the queue boxes:

   *  Discarding packets because a queue is full.

   *  Discarding packets marked "yellow" by a metering function, in
      preference to discarding "green" packets [RFC2697].

   Ideally, neither of these actions is performed on DetNet packets.
   Full queues for DetNet packets occur only when a DetNet flow is
   misbehaving, and the DetNet QoS does not include "yellow" service
   for packets in excess of the committed rate.

   The queue assignment function can be quite complex, even in a
   bridge [IEEE8021Q], since the introduction of per-stream filtering
   and policing ([IEEE8021Q] clause 8.6.5.1).  In addition to the
   Layer 2 priority expressed in the 802.1Q VLAN tag, a DetNet transit
   node can utilize the information from the non-exhaustive list below
   to assign a packet to a particular queue:

   *  Input port.

   *  Selector based on a rotating schedule that starts at regular,
      time-synchronized intervals and has nanosecond precision.

   *  MAC addresses, VLAN ID, IP addresses, Layer 4 port numbers, DSCP
      [RFC8939], [RFC8964].

   *  The queue assignment function can contain metering and policing
      functions.

   *  MPLS and/or pseudowire labels [RFC6658].

   The "Transmission selection" function decides which queue is to
   transfer its oldest packet to the output port when a transmission
   opportunity arises.
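   For illustration, a queue assignment function of the kind described
   above can be as simple as a lookup table.  The match keys, queue
   names, and values below are purely hypothetical; a real
   implementation may match on more or fewer of the fields listed
   above:

      # Hypothetical queue-assignment table keyed on (input port,
      # VLAN ID, DSCP); unmatched packets fall back to best effort.
      QUEUE_TABLE = {
          ("port1", 100, 46): "flow_regulator_0",
          ("port2", 100, 46): "flow_regulator_1",
      }

      def assign_queue(input_port, vlan_id, dscp):
          return QUEUE_TABLE.get((input_port, vlan_id, dscp),
                                 "best_effort_queue")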
6.2.  Frame Preemption

   In [IEEE8021Q] and [IEEE8023], the transmission of a frame can be
   interrupted by one or more "express" frames, and then the
   interrupted frame can continue transmission.  Frame preemption is
   modeled as consisting of two MAC/PHY stacks, one for packets that
   can be interrupted, and one for packets that can interrupt the
   interruptible packets.  Only one layer of frame preemption is
   supported -- a transmitter cannot have more than one interrupted
   frame in progress.  DetNet flows typically pass through the
   interrupting MAC.  For those DetNet flows with a T-SPEC, latency
   bounds can be calculated by the methods provided in the following
   sections that account for the effect of frame preemption, according
   to the specific queuing mechanism that is used in DetNet nodes.
   Best-effort queues pass through the interruptible MAC, and can thus
   be preempted.

6.3.  Time-Aware Shaper

   In [IEEE8021Q], the notion of time-scheduling queue gates is
   described in section 8.6.8.4.  On each node, the transmission
   selection for packets is controlled by time-synchronized gates;
   each output queue is associated with a gate.  The gates can be
   either open or closed.  The states of the gates are determined by
   the gate control list (GCL).  The GCL specifies the opening and
   closing times of the gates.  The design of the GCL must satisfy the
   latency upper-bound requirements of all DetNet flows; therefore,
   those DetNet flows that traverse a network that uses this kind of
   shaper have bounded latency, provided that the traffic and nodes
   are conformant.

   Note that scheduled traffic service relies on a synchronized
   network and coordinated GCL configuration.  Synthesis of GCLs on
   multiple nodes in a network is a scheduling problem considering all
   DetNet flows traversing the network, which is a non-deterministic
   polynomial-time hard (NP-hard) problem [Sch8021Qbv].  Also, at this
   writing, scheduled traffic service supports no more than eight
   traffic queues, typically using up to seven priority queues and at
   least one best-effort queue.
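   For illustration, a GCL can be thought of as a repeating list of
   (duration, open gates) entries.  The schedule below is purely
   hypothetical (two entries, 1 ms cycle) and only shows how the gate
   state at a given instant follows from the list:

      # Hypothetical gate control list: (duration in ns, open gates).
      GCL = [
          (500_000, {"DetNet"}),       # only the DetNet queue may send
          (500_000, {"BE0", "BE1"}),   # best-effort queues may send
      ]

      def open_gates(t_ns):
          """Return the set of open gates at time t_ns."""
          cycle = sum(duration for duration, _ in GCL)
          t = t_ns % cycle
          for duration, gates in GCL:
              if t < duration:
                  return gates
              t -= duration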
6.4.  Credit-Based Shaper with Asynchronous Traffic Shaping

   In this queuing model, it is assumed that the DetNet nodes are
   FIFO.  We consider four traffic classes (Definition 3.268 of
   [IEEE8021Q]): control-data traffic (CDT), class A, class B, and
   best effort (BE), in decreasing order of priority.  Flows of
   classes A and B are DetNet flows that are less critical than CDT
   (such as studio audio and video traffic, as in IEEE 802.1BA
   Audio-Video Bridging).  This model is a subset of Time-Sensitive
   Networking, as described next.

   Based on the timing model described in Figure 1, contention occurs
   only at the output port of a DetNet transit node; therefore, the
   focus of the rest of this subsection is on the regulator and
   queuing subsystem in the output port of a DetNet transit node.  The
   input flows are identified using the information in Section 5.1 of
   [RFC8939].  Then they are aggregated into eight macro flows based
   on their service requirements; we refer to each macro flow as a
   class.  The output port performs aggregate scheduling with eight
   queues (queuing subsystems): one for CDT, one for class A flows,
   one for class B flows, and five for BE traffic denoted as BE0-BE4.
   The queuing policy for each queuing subsystem is FIFO.  In
   addition, each node output port also performs per-flow regulation
   for class A and B flows using an interleaved regulator (IR), called
   an Asynchronous Traffic Shaper [IEEE8021Qcr].  Thus, at each output
   port of a node, there is one interleaved regulator per input port
   and per class; the interleaved regulator is mapped to the regulator
   depicted in Figure 1.  The detailed picture of the scheduling and
   regulation architecture at a node output port is given by Figure 4.
   The packets received at a node input port for a given class are
   enqueued in the respective interleaved regulator at the output
   port.  Then, the packets from all the flows, including CDT and BE
   flows, are enqueued in the queuing subsystem; there is no regulator
   for CDT and BE flows.

       +--+   +--+ +--+   +--+
       |  |   |  | |  |   |  |
       |IR|   |IR| |IR|   |IR|
       |  |   |  | |  |   |  |
       +-++XXX++-+ +-++XXX++-+
         |      |    |      |
         |      |    |      |
   +---+ +-v-XXX-v-+ +-v-XXX-v-+ +-----+ +-----+ +-----+ +-----+ +-----+
   |   | |         | |         | |Class| |Class| |Class| |Class| |Class|
   |CDT| | Class A | | Class B | | BE4 | | BE3 | | BE2 | | BE1 | | BE0 |
   |   | |         | |         | |     | |     | |     | |     | |     |
   +-+-+ +----+----+ +----+----+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+
     |        |           |         |       |       |       |       |
     |      +-v-+       +-v-+       |       |       |       |       |
     |      |CBS|       |CBS|       |       |       |       |       |
     |      +-+-+       +-+-+       |       |       |       |       |
     |        |           |         |       |       |       |       |
   +-v--------v-----------v---------v-------V-------v-------v-------v--+
   |                     Strict Priority selection                     |
   +--------------------------------+----------------------------------+
                                    |
                                    V

     Figure 4: The architecture of an output port inside a relay node
        with interleaved regulators (IRs) and credit-based shaper
                                  (CBS)

   Each of the queuing subsystems for classes A and B contains a
   Credit-Based Shaper (CBS).  The CBS serves a packet from a class
   according to the available credit for that class.  As described in
   Section 8.6.8.2 and Annex L.1 of [IEEE8021Q], the credit for each
   class A or B increases based on the idle slope (as the guaranteed
   rate), and decreases based on the send slope (typically equal to
   the difference between the guaranteed and the output link rates),
   both of which are parameters of the CBS.  The CDT and BE0-BE4 flows
   are served by separate queuing subsystems.  Then, packets from all
   flows are served by a transmission selection subsystem that serves
   packets from each class based on its priority.  All subsystems are
   non-preemptive.  Guarantees for class A and B traffic can be
   provided only if the CDT traffic is bounded; it is assumed that the
   CDT traffic has a leaky-bucket arrival curve with two parameters,
   r_h as rate and b_h as bucket size, i.e., the amount of bits
   entering a node within a time interval t is bounded by
   r_h * t + b_h.

   Additionally, it is assumed that the class A and B flows are also
   regulated at their source according to a leaky-bucket arrival
   curve.  At the source, the traffic satisfies its regulation
   constraint, i.e., the delay due to the interleaved regulator at the
   source is ignored.

   At each DetNet transit node implementing an interleaved regulator,
   packets of multiple flows are processed in one FIFO queue; the
   packet at the head of the queue is regulated based on its leaky
   bucket parameters; it is released at the earliest time at which
   this is possible without violating the constraint.

   The regulation parameters for a flow (leaky bucket rate and bucket
   size) are the same at its source and at all DetNet transit nodes
   along its path in the case where all clocks are perfect.  However,
   in reality there is clock non-ideality throughout the DetNet domain
   even with clock synchronization.  This phenomenon causes inaccuracy
   in the rates configured at the regulators that may lead to network
   instability.  To avoid that, when configuring the regulators, the
   rates are set as the source rates with some positive margin.
   [ThomasTime] describes and provides solutions to this issue.
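   For illustration, head-of-queue eligibility in an interleaved
   regulator can be sketched with per-flow token buckets.  This is a
   simplification for exposition, not the [IEEE8021Qcr] state machine;
   it assumes every packet fits in its flow's bucket
   (pkt_bits <= b):

      # state maps a flow to (tokens, time of last update).  The head
      # packet of flow 'flow', of size pkt_bits, reaches the head of
      # the FIFO at time 'now'; tokens refill at rate r up to bucket
      # size b.
      def eligible_time(state, flow, pkt_bits, now, r, b):
          tokens, last = state.get(flow, (b, now))
          tokens = min(b, tokens + r * (now - last))        # refill
          wait = 0.0 if tokens >= pkt_bits else (pkt_bits - tokens) / r
          t_elig = now + wait
          state[flow] = (min(b, tokens + r * wait) - pkt_bits, t_elig)
          # Later packets in the FIFO, even of other flows, are not
          # examined before this packet becomes eligible.
          return t_elig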
6.4.1.  Delay Bound Calculation

   A delay bound of the queuing subsystem ((6) in Figure 1) of a given
   DetNet node for a flow of class A or B can be computed if the
   following condition holds:

      sum of leaky-bucket rates of all flows of this class at this
      transit node <= R, where R is given below for every class.

   If the condition holds, the delay bound for a flow of class X (A or
   B) is d_X, calculated as:

      d_X = T_X + (b_t_X - L_min_X)/R_X - L_min_X/c

   where L_min_X is the minimum packet length of class X (A or B); c
   is the output link transmission rate; and b_t_X is the sum of the b
   terms (bucket sizes) for all the flows of class X.  Parameters R_X
   and T_X are calculated as follows for class A and class B,
   separately:

   If the flow is of class A:

      R_A = I_A * (c-r_h)/c

      T_A = (L_nA + b_h + r_h * L_n/c)/(c-r_h)

   where I_A is the idle slope for class A; L_nA is the maximum packet
   length of class B and BE packets; L_n is the maximum packet length
   of classes A, B, and BE; and r_h is the rate and b_h the bucket
   size of the CDT traffic leaky-bucket arrival curve.

   If the flow is of class B:

      R_B = I_B * (c-r_h)/c

      T_B = (L_BE + L_A + L_nA * I_A/(c-I_A) + b_h + r_h * L_n/c)
            /(c-r_h)

   where I_B is the idle slope for class B; L_A is the maximum packet
   length of class A; and L_BE is the maximum packet length of class
   BE.

   Then, as discussed in Section 4.2.2, an interleaved regulator does
   not increase the delay bound of the upstream queuing subsystem;
   therefore, an end-to-end delay bound for a DetNet flow of class X
   (A or B) is the sum of d_X_i for all nodes i in the path of the
   flow, where d_X_i is the delay bound of the queuing subsystem in
   node i, computed as above.  According to the notation in
   Section 4.2.2, the delay bound of the queuing subsystem in a node i
   and the interleaved regulator in node j, i.e., Cij, is:

      Cij = d_X_i

   More information on delay analysis in such a DetNet transit node is
   described in [TSNwithATS].
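   For illustration, the class A computation above can be written
   directly as follows; the class B case is analogous, with T_B in
   place of T_A.  All rates are in bits per second, all lengths in
   bits, and the admissibility condition on the sum of the
   leaky-bucket rates is assumed to have been checked already:

      # d_A per the formulas above.
      def class_A_delay_bound(I_A, c, r_h, b_h,
                              L_nA, L_n, b_t_A, L_min_A):
          R_A = I_A * (c - r_h) / c
          T_A = (L_nA + b_h + r_h * L_n / c) / (c - r_h)
          return T_A + (b_t_A - L_min_A) / R_A - L_min_A / c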
6.4.2.  Flow Admission

   The delay bound calculation requires some information about each
   node.  For each node, it is required to know the idle slope of the
   CBS for each of classes A and B (I_A and I_B), as well as the
   transmission rate of the output link (c).  In addition, the maximum
   packet lengths of classes A, B, and BE must be known.  Moreover,
   the leaky-bucket parameters of CDT (r_h, b_h) must be known.  To
   admit flows of classes A and B, their delay requirements must be
   guaranteed not to be violated.  As described in Section 3.1, the
   two problems, static and dynamic, are addressed separately.  In
   either of the problems, the rate and delay must be guaranteed.
   Thus,

   The static admission control:
      The leaky-bucket parameters of all class A or B flows are known;
      therefore, for each class A or B flow f, a delay bound can be
      calculated.  The computed delay bound for every class A or B
      flow must not be more than its delay requirement.  Moreover, the
      sum of the rates r_f of the flows of each class must not be more
      than the rate allocated to that class (R).  If these two
      conditions hold, the configuration is declared admissible.

   The dynamic admission control:
      For dynamic admission control, we allocate to every node and
      every class A or B a static value for the rate (R) and the
      maximum bucket size (b_t).  In addition, for every node and
      every class A and B, two counters are maintained:

         R_acc is equal to the sum of the leaky-bucket rates of all
         flows of this class already admitted at this node; at all
         times, we must have:

            R_acc <= R.  (Eq. 1)

         b_acc is equal to the sum of the bucket sizes of all flows of
         this class already admitted at this node; at all times, we
         must have:

            b_acc <= b_t.  (Eq. 2)

      A new class A or B flow is admitted at this node if Eqs. (1)
      and (2) continue to be satisfied after adding its leaky-bucket
      rate and bucket size to R_acc and b_acc.  A class A or B flow is
      admitted in the network if it is admitted at all nodes along its
      path.  When this happens, all variables R_acc and b_acc along
      its path must be incremented to reflect the addition of the
      flow.  Similarly, when a class A or B flow leaves the network,
      all variables R_acc and b_acc along its path must be decremented
      to reflect the removal of the flow.

   The choice of the static values of R and b_t at all nodes and
   classes must be done in a prior configuration phase; R controls the
   bandwidth allocated to this class at this node, and b_t affects the
   delay bound and the buffer requirement.  The value of R must be set
   such that

      R <= I_X * (c-r_h)/c

   where I_X is the idle slope of the credit-based shaper for class
   X = {A, B}, c is the transmission rate of the output link, and r_h
   is the leaky-bucket rate of the CDT class.

6.5.  Guaranteed-Service IntServ

   Guaranteed-Service Integrated Services (IntServ) is an architecture
   that specifies the elements to guarantee quality of service (QoS)
   on networks [RFC2212].

   The flow, at the source, has a leaky-bucket arrival curve with two
   parameters, r as rate and b as bucket size, i.e., the amount of
   bits entering a node within a time interval t is bounded by
   r * t + b.

   If a resource reservation on a path is applied, a node provides a
   guaranteed rate R and a maximum service latency of T.  This can be
   interpreted such that the bits might have to wait up to T before
   being served with a rate greater than or equal to R.  The delay
   bound of the flow traversing the node is T + b / R.

   Consider a Guaranteed-Service IntServ path including a sequence of
   nodes, where the i-th node provides a guaranteed rate R_i and a
   maximum service latency of T_i.  Then, the end-to-end delay bound
   for a flow on this path can be calculated as
   sum(T_i) + b / min(R_i).

   The provided delay bound is based on a simple case of
   Guaranteed-Service IntServ where only a guaranteed rate, a maximum
   service latency, and a leaky-bucket arrival curve are available.
   If more information about the flow is known, e.g., the peak rate,
   the delay bound expression is more complicated; the details are
   available in [RFC2212] and Section 1.4.1 of [NetCalBook].
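   For illustration, the end-to-end bound of the simple case above can
   be computed as follows, assuming the per-node parameters (R_i, T_i)
   have been obtained from the reservation:

      # sum(T_i) + b / min(R_i) for a path of nodes with guaranteed
      # rates R_list and service latencies T_list, and a flow with
      # leaky-bucket bucket size b at the source.
      def intserv_e2e_delay_bound(T_list, R_list, b):
          return sum(T_list) + b / min(R_list)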
6.6.  Cyclic Queuing and Forwarding

   Annex T of [IEEE8021Q] describes Cyclic Queuing and Forwarding
   (CQF), which provides bounded latency and zero congestion loss
   using the time-scheduled gates of [IEEE8021Q] section 8.6.8.4.  For
   a given class of DetNet flows, a set of two or more buffers is
   provided at the output queue layer of Figure 3.  A cycle time T_c
   is configured for each class of DetNet flows c, and all of the
   buffer sets in a class of DetNet flows swap buffers simultaneously
   throughout the DetNet domain at that cycle rate, all in phase.  In
   such a mechanism, the regulator mentioned in Figure 1 is not
   required.

   In the case of two-buffer CQF, each class of DetNet flows c has two
   buffers, namely buffer1 and buffer2.  In a cycle (i), while buffer1
   accumulates received packets from the node's reception ports,
   buffer2 transmits the packets already stored in the previous cycle
   (i-1).  In the next cycle (i+1), buffer2 stores the received
   packets and buffer1 transmits the packets received in cycle (i).
   The duration of each cycle is T_c.

   The cycle time T_c must be carefully chosen; it needs to be large
   enough to accommodate all the DetNet traffic, plus at least one
   maximum packet (or fragment) size from lower-priority queues, which
   might be received within a cycle.  Also, the value of T_c includes
   a time interval, called the dead time (DT), which is the sum of the
   delays 1, 2, 3, and 4 defined in Figure 1.  The value of DT
   guarantees that the last packet of one cycle in a node is fully
   delivered to a buffer of the next node in the same cycle.  A
   two-buffer CQF is recommended if DT is small compared to T_c.  For
   a large DT, CQF with more buffers can be used, and a cycle
   identification label can be added to the packets.

   The per-hop latency is determined by the cycle time T_c: a packet
   transmitted from a node at a cycle (i) is transmitted from the next
   node at cycle (i+1).  Then, if the packet traverses h hops, the
   maximum latency experienced by the packet is from the beginning of
   cycle (i) to the end of cycle (i+h); also, the minimum latency is
   from the end of cycle (i), before the DT, to the beginning of cycle
   (i+h).  Then, the maximum latency is:

      (h+1) T_c

   and the minimum latency is:

      (h-1) T_c + DT.

   Ingress conditioning (Section 4.3) may be required if the source of
   a DetNet flow does not, itself, employ CQF.  Since there are no
   per-flow parameters in the CQF technique, per-hop configuration is
   not required in the CQF forwarding nodes.
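   For illustration, the two CQF latency expressions above translate
   into the following helper; h, T_c, and DT are assumed to be known
   from the network configuration:

      # Maximum and minimum end-to-end latency over h hops for CQF
      # with cycle time T_c and dead time DT (same time unit).
      def cqf_latency_bounds(h, T_c, DT):
          max_latency = (h + 1) * T_c
          min_latency = (h - 1) * T_c + DT
          return max_latency, min_latency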
7.  Example application on DetNet IP network

   This section provides an example application of the timing model
   presented in this document to control the admission of a DetNet
   flow on a DetNet-enabled IP network.  Consider Figure 5, taken from
   Section 3 of [RFC8939], which shows a simple IP network:

   *  End-system 1 implements Guaranteed-Service IntServ, as in
      Section 6.5, between itself and relay node 1.

   *  Sub-network 1 is a TSN network.  The nodes in sub-network 1
      implement credit-based shapers with asynchronous traffic
      shaping, as in Section 6.4.

   *  Sub-network 2 is a TSN network.  The nodes in sub-network 2
      implement cyclic queuing and forwarding with two buffers, as in
      Section 6.6.

   *  Relay nodes 1 and 2 implement credit-based shapers with
      asynchronous traffic shaping, as in Section 6.4.  They also
      perform the aggregation and mapping of IP DetNet flows to TSN
      streams (Section 4.4 of [RFC9023]).

    DetNet IP       Relay          Relay                  DetNet IP
    End-System      Node 1         Node 2                 End-System
        1                                                      2
   +----------+                                           +----------+
   |   Appl.  |<------------ End-to-End Service --------->|   Appl.  |
   +----------+  ............       ...........           +----------+
   | Service  |<-: Service :- DetNet flow -: Service :--->| Service  |
   +----------+  +----------+       +----------+          +----------+
   |Forwarding|  |Forwarding|       |Forwarding|          |Forwarding|
   +--------.-+  +-.------.-+       +-.---.----+          +-------.--+
            : Link :      \  ,-----.  /     \    ,-----.          /
            +......+       +[ Sub-  ]+       +--[ Sub-  ]--------+
                            [Network]           [Network]
                             `--1--'             `--2--'

   |<--------------------- DetNet IP --------------------->|

   |<--- d1 --->|<--------------- d2_p --------------->|<-- d3_p -->|

    Figure 5: A Simple DetNet-Enabled IP Network, taken from RFC 8939

   Consider a fully centralized control plane for the network of
   Figure 5, as described in Section 3.2 of
   [I-D.ietf-detnet-controller-plane-framework].  Suppose end-system 1
   wants to create a DetNet flow with a given traffic specification,
   destined to end-system 2, with an end-to-end delay bound
   requirement D.  Then, the control plane receives a flow
   establishment request and calculates a number of valid paths
   through the network (Section 3.2 of
   [I-D.ietf-detnet-controller-plane-framework]).  To select a proper
   path, the control plane needs to compute an end-to-end delay bound
   for each selected path p.

   The end-to-end delay bound is d1 + d2_p + d3_p, where d1 is the
   delay bound from end-system 1 to the entrance of relay node 1, d2_p
   is the delay bound for path p from relay node 1 to the entrance of
   the first node in sub-network 2, and d3_p is the delay bound of
   path p from the first node in sub-network 2 to end-system 2.  The
   computation of d1 is explained in Section 6.5.  Since relay node 1,
   sub-network 1, and relay node 2 implement aggregate queuing, we use
   the results in Section 4.2.2 and Section 6.4 to compute d2_p for
   the path p.  Finally, d3_p is computed using the delay bound
   computation of Section 6.6.  Any path p such that
   d1 + d2_p + d3_p <= D satisfies the delay bound requirement of the
   flow.  If there is no such path, the control plane may compute a
   new set of valid paths and redo the delay bound computation, or
   reject the DetNet flow.

   As soon as the control plane selects a path that satisfies the
   delay bound constraint, it allocates and reserves the resources in
   the path for the DetNet flow (Section 4.2 of
   [I-D.ietf-detnet-controller-plane-framework]).
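   For illustration, the path selection step above can be sketched as
   follows; d1 and the per-path maps d2 and d3 are assumed to have
   been computed with the methods of Section 6.5, Sections 4.2.2 and
   6.4, and Section 6.6, respectively:

      # Return the first candidate path meeting the end-to-end delay
      # bound requirement D, or None if the flow must be rejected (or
      # a new set of candidate paths computed).
      def select_path(candidate_paths, d1, d2, d3, D):
          for p in candidate_paths:
              if d1 + d2[p] + d3[p] <= D:
                  return p
          return None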
8.  Security considerations

   Detailed security considerations for DetNet are cataloged in
   [RFC9055], and more general security considerations are described
   in [RFC8655].

   Security aspects that are unique to DetNet are those whose aim is
   to provide the specific QoS aspects of DetNet, specifically bounded
   end-to-end delivery latency and zero congestion loss.  Achieving
   such loss rates and bounded latency may not be possible in the face
   of a highly capable adversary, such as the one envisioned by the
   Internet Threat Model of BCP 72 [RFC3552], which can arbitrarily
   drop or delay any or all traffic.  In order to present meaningful
   security considerations, we consider a somewhat weaker attacker who
   does not control the physical links of the DetNet domain but may
   have the ability to control or change the behavior of some
   resources within the boundary of the DetNet domain.

   Latency bound calculations use parameters that reflect physical
   quantities.  If an attacker finds a way to change these physical
   quantities, unknown to the control and management planes, the
   latency calculations fail and may result in latency violations
   and/or congestion losses.  An example of such an attack is to make
   traffic sources under the control of the attacker send more traffic
   than their assumed T-SPECs.  This type of attack is typically
   avoided by ingress conditioning at the edge of a DetNet domain.
   However, it must be ensured that such ingress conditioning is done
   per flow and that the buffers are segregated, so that a flow that
   exceeds its T-SPEC does not cause buffer overflow for other flows;
   a sketch of such per-flow policing is given at the end of this
   section.

   Some queuing mechanisms require time synchronization and operate
   correctly only if the time synchronization works correctly.  In the
   case of CQF, the correct alignment of cycles can fail if an attack
   against time synchronization fools a node into having an incorrect
   offset.  Some of these attacks can be prevented by cryptographic
   authentication, as in Annex K of [IEEE1588] for the Precision Time
   Protocol (PTP).  However, attacks that change the physical latency
   of the links used by the time synchronization protocol remain
   possible even if the time synchronization protocol is protected by
   authentication and cryptography [DelayAttack].  Such attacks can be
   detected only by their effects, namely latency bound violations and
   congestion losses, which do not occur in normal DetNet operation.
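   The per-flow ingress conditioning mentioned above can be realized
   with a token-bucket meter, in the spirit of the single-rate meter
   of [RFC2697].  The following non-normative Python sketch shows one
   such policer; the class and attribute names are invented for this
   illustration, and arrival timestamps are assumed to be
   non-decreasing.  Keeping one policer instance (and one buffer
   budget) per flow confines an over-T-SPEC flow to its own packets.

      class FlowPolicer:
          """Per-flow policer for a T-SPEC of rate r (octets/s) and
          burst b (octets) at the DetNet domain edge."""

          def __init__(self, rate: float, burst: float):
              self.rate = rate        # token refill rate, octets/s
              self.burst = burst      # bucket depth, octets
              self.tokens = burst     # bucket starts full
              self.last = 0.0         # time of last packet, seconds

          def admit(self, arrival: float, length: int) -> bool:
              """Accept the packet only if it conforms to the
              flow's T-SPEC."""
              # Refill tokens for the elapsed time, capped at the
              # bucket depth.
              self.tokens = min(self.burst,
                                self.tokens
                                + self.rate * (arrival - self.last))
              self.last = arrival
              if length <= self.tokens:
                  self.tokens -= length
                  return True
              return False  # non-conforming: drop, do not buffer

      # One policer per flow, so a misbehaving flow only loses its
      # own packets:  policers = {flow_id: FlowPolicer(r, b), ...}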
9.  IANA considerations

   This document has no IANA actions.

10.  Acknowledgement

   We would like to thank Lou Berger, Tony Przygienda, John Scudder,
   Watson Ladd, Yoshifumi Nishida, Ralf Weber, Robert Sparks, Gyan
   Mishra, Martin Duke, Eric Vyncke, Lars Eggert, Roman Danyliw, and
   Paul Wouters for their useful feedback on this document.

11.  Contributors

   RFC 7322 limits the number of authors listed on the front page to a
   maximum of 5.  The editor wishes to thank and acknowledge the
   following author for contributing text to this document:

   Janos Farkas
   Ericsson
   Email: janos.farkas@ericsson.com

12.  References

12.1.  Normative References

   [IEEE8021Q]
              IEEE 802.1, "IEEE Std 802.1Q-2018: IEEE Standard for
              Local and metropolitan area networks - Bridges and
              Bridged Networks", 2018.

   [RFC2212]  Shenker, S., Partridge, C., and R. Guerin,
              "Specification of Guaranteed Quality of Service",
              RFC 2212, DOI 10.17487/RFC2212, September 1997,
              <https://www.rfc-editor.org/info/rfc2212>.

   [RFC2475]  Blake, S., Black, D., Carlson, M., Davies, E., Wang,
              Z., and W. Weiss, "An Architecture for Differentiated
              Services", RFC 2475, DOI 10.17487/RFC2475, December
              1998, <https://www.rfc-editor.org/info/rfc2475>.

   [RFC6658]  Bryant, S., Ed., Martini, L., Swallow, G., and A.
              Malis, "Packet Pseudowire Encapsulation over an MPLS
              PSN", RFC 6658, DOI 10.17487/RFC6658, July 2012,
              <https://www.rfc-editor.org/info/rfc6658>.

   [RFC7806]  Baker, F. and R. Pan, "On Queuing, Marking, and
              Dropping", RFC 7806, DOI 10.17487/RFC7806, April 2016,
              <https://www.rfc-editor.org/info/rfc7806>.

   [RFC8655]  Finn, N., Thubert, P., Varga, B., and J. Farkas,
              "Deterministic Networking Architecture", RFC 8655,
              DOI 10.17487/RFC8655, October 2019,
              <https://www.rfc-editor.org/info/rfc8655>.

   [RFC8939]  Varga, B., Ed., Farkas, J., Berger, L., Fedyk, D., and
              S. Bryant, "Deterministic Networking (DetNet) Data
              Plane: IP", RFC 8939, DOI 10.17487/RFC8939, November
              2020, <https://www.rfc-editor.org/info/rfc8939>.

   [RFC8964]  Varga, B., Ed., Farkas, J., Berger, L., Malis, A.,
              Bryant, S., and J. Korhonen, "Deterministic Networking
              (DetNet) Data Plane: MPLS", RFC 8964,
              DOI 10.17487/RFC8964, January 2021,
              <https://www.rfc-editor.org/info/rfc8964>.

   [RFC9016]  Varga, B., Farkas, J., Cummings, R., Jiang, Y., and D.
              Fedyk, "Flow and Service Information Model for
              Deterministic Networking (DetNet)", RFC 9016,
              DOI 10.17487/RFC9016, March 2021,
              <https://www.rfc-editor.org/info/rfc9016>.

12.2.  Informative References

   [BennettDelay]
              J.C.R. Bennett, K. Benson, A. Charny, W.F. Courtney,
              and J.-Y. Le Boudec, "Delay Jitter Bounds and Packet
              Scale Rate Guarantee for Expedited Forwarding".

   [CharnyDelay]
              A. Charny and J.-Y. Le Boudec, "Delay Bounds in a
              Network with Aggregate Scheduling".

   [DelayAttack]
              S. Barreto, A. Suresh, and J.-Y. Le Boudec, "Cyber-
              attack on packet-based time synchronization protocols:
              The undetectable Delay Box".

   [I-D.ietf-detnet-controller-plane-framework]
              A. Malis, X. Geng, M. Chen, F. Qin, and B. Varga,
              "Deterministic Networking (DetNet) Controller Plane
              Framework", Work in Progress, Internet-Draft,
              draft-ietf-detnet-controller-plane-framework-01.

   [IEEE1588] IEEE Std 1588-2008, "IEEE Standard for a Precision
              Clock Synchronization Protocol for Networked
              Measurement and Control Systems", 2008.

   [IEEE8021Qcr]
              IEEE 802.1, "IEEE P802.1Qcr: Bridges and Bridged
              Networks - Amendment: Asynchronous Traffic Shaping",
              2017.

   [IEEE8021TSN]
              IEEE 802.1, "IEEE 802.1 Time-Sensitive Networking (TSN)
              Task Group".

   [IEEE8023] IEEE 802.3, "IEEE Std 802.3-2018: IEEE Standard for
              Ethernet", 2018.

   [LeBoudecTheory]
              J.-Y. Le Boudec, "A Theory of Traffic Regulators for
              Deterministic Networks with Application to Interleaved
              Regulators".

   [NetCalBook]
              J.-Y. Le Boudec and P. Thiran, "Network calculus: a
              theory of deterministic queuing systems for the
              internet", 2001.

   [PacketReorderingBounds]
              E. Mohammadpour and J.-Y. Le Boudec, "On Packet
              Reordering in Time-Sensitive Networks".

   [RFC2697]  Heinanen, J. and R. Guerin, "A Single Rate Three Color
              Marker", RFC 2697, DOI 10.17487/RFC2697, September
              1999, <https://www.rfc-editor.org/info/rfc2697>.

   [RFC3552]  Rescorla, E. and B. Korver, "Guidelines for Writing RFC
              Text on Security Considerations", BCP 72, RFC 3552,
              DOI 10.17487/RFC3552, July 2003,
              <https://www.rfc-editor.org/info/rfc3552>.

   [RFC8578]  Grossman, E., Ed., "Deterministic Networking Use
              Cases", RFC 8578, DOI 10.17487/RFC8578, May 2019,
              <https://www.rfc-editor.org/info/rfc8578>.

   [RFC9023]  Varga, B., Ed., Farkas, J., Malis, A., and S. Bryant,
              "Deterministic Networking (DetNet) Data Plane: IP over
              IEEE 802.1 Time-Sensitive Networking (TSN)", RFC 9023,
              DOI 10.17487/RFC9023, June 2021,
              <https://www.rfc-editor.org/info/rfc9023>.

   [RFC9055]  Grossman, E., Ed., Mizrahi, T., and A. Hacker,
              "Deterministic Networking (DetNet) Security
              Considerations", RFC 9055, DOI 10.17487/RFC9055, June
              2021, <https://www.rfc-editor.org/info/rfc9055>.

   [Sch8021Qbv]
              S. Craciunas, R. Oliver, M. Chmelik, and W. Steiner,
              "Scheduling Real-Time Communication in IEEE 802.1Qbv
              Time Sensitive Networks".

   [SpechtUBS]
              J. Specht and S. Samii, "Urgency-Based Scheduler for
              Time-Sensitive Switched Ethernet Networks".

   [ThomasTime]
              L. Thomas and J.-Y. Le Boudec, "On Time Synchronization
              Issues in Time-Sensitive Networks with Regulators and
              Nonideal Clocks".

   [TSNwithATS]
              E. Mohammadpour, E. Stai, M. Mohiuddin, and J.-Y. Le
              Boudec, "Latency and Backlog Bounds in Time-Sensitive
              Networking with Credit Based Shapers and Asynchronous
              Traffic Shaping".
Authors' Addresses

   Norman Finn
   Huawei Technologies Co. Ltd
   3101 Rio Way
   Spring Valley, California 91977
   United States of America

   Phone: +1 925 980 6430
   Email: nfinn@nfinnconsulting.com

   Jean-Yves Le Boudec
   EPFL
   IC Station 14
   CH-1015 Lausanne EPFL
   Switzerland

   Email: jean-yves.leboudec@epfl.ch

   Ehsan Mohammadpour
   EPFL
   IC Station 14
   CH-1015 Lausanne EPFL
   Switzerland

   Email: ehsan.mohammadpour@epfl.ch

   Jiayi Zhang
   Huawei Technologies Co. Ltd
   Q27, No.156 Beiqing Road
   Beijing
   100095
   China

   Email: zhangjiayi11@huawei.com

   Balázs Varga
   Ericsson
   Budapest
   Konyves Kálmán krt. 11/B
   1097
   Hungary

   Email: balazs.a.varga@ericsson.com