DetNet Working Group                                        Y(J) Stein
Internet-Draft                                                      RAD
Intended status: Informational                        February 22, 2021
Expires: August 26, 2021

               Segment Routed Time Sensitive Networking
                         draft-stein-srtsn-00

Abstract

Routers perform two distinct user-plane functionalities, namely
forwarding (where the packet should be sent) and scheduling (when the
packet should be sent).  One forwarding paradigm is segment routing,
in which forwarding instructions are encoded in the packet in a stack
data structure, rather than programmed into the routers.  Time
Sensitive Networking and Deterministic Networking provide several
mechanisms for scheduling under the assumption that routers are time
synchronized.  The most effective mechanisms for delay minimization
involve per-flow resource allocation.

SRTSN is a unified approach to forwarding and scheduling that uses a
single stack data structure.  Each stack entry consists of a
forwarding portion (e.g., IP addresses or suffixes) and a scheduling
portion (deadline by which the packet must exit the router).  SRTSN
thus fully implements network programming for time sensitive flows,
by prescribing to each router both to-where and by-when each packet
should be sent.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on August 26, 2021.

Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the
document authors.  All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

1.  Introduction

Packet Switched Networks (PSNs) use statistical multiplexing to fully
exploit link data rates.  On the other hand, statistical multiplexing
in general leads to end-to-end propagation latencies significantly
higher than the minimum physically possible, due to packets needing
to reside in queues waiting for their turn to be transmitted.

Recently, Time Sensitive Networking (TSN) and Deterministic
Networking (DetNet) technologies have been developed to reduce this
queueing latency for time sensitive packets [RFC8557].  Novel TSN
mechanisms are predicated on the time synchronization of all
forwarding elements (Ethernet switches, MPLS Label Switched Routers,
or IP routers, to be called here simply routers).  Once routers agree
on time to high accuracy, it is theoretically possible to arrange for
time sensitive packets to experience "green waves", that is, never to
wait in queues.  For example, scheduling timeslots for particular
flows eliminates packet interference, but forfeits the statistical
multiplexing advantage of PSNs.  In addition, the scheduling
calculation, and the programming of the network to follow this
calculation, do not scale well to large networks.

Segment Routing (SR) technologies provide a scalable method of
network programming, but until now have not been applied to
scheduling.  The SR instructions are contained within a packet in the
form of a last-in first-out stack dictating the forwarding decisions
of successive routers.  Segment routing may be used to choose a path
short enough to be capable of providing sufficiently low end-to-end
latency, but it does not influence the queueing of individual packets
in each router along that path.

2.  Forwarding and Scheduling

Routers (recall that by routers we mean any packet forwarding device)
perform two distinct functions on incoming packets, namely forwarding
and scheduling.  By forwarding we mean obtaining the incoming packet,
inspecting the packet's headers, deciding on an output port (and, for
QoS routing, a specific output queue belonging to this output port)
based on the header information and a forwarding information base,
optionally editing the packet (e.g., decrementing the TTL field or
performing a stack operation on an MPLS label), and placing the
packet into the selected output queue.

Scheduling consists of selecting which output queue, and which packet
from that output queue, will be the next packet to be physically
transmitted over the output port.  In simple terms one can think of
forwarding and scheduling as "which output port" and "which packet"
decisions, respectively; that is, forwarding decides to which output
port to send each packet, and scheduling decides which packet to send
next.
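As a non-normative illustration of this division of labor, the
following Python sketch models a router as a forwarding step ("into
which output queue") and a scheduling step ("which packet next").
The class and field names are illustrative assumptions only and are
not taken from any standard or implementation.

<CODE BEGINS>
from collections import deque
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Packet:
    headers: dict            # e.g., {"dst": "R4", "prio": 1}
    payload: bytes = b""

@dataclass
class Router:
    # forwarding information base: destination -> (output port, queue index)
    fib: dict
    # output port -> list of queues (index 0 = highest priority)
    ports: dict = field(default_factory=dict)

    def forward(self, pkt: Packet) -> None:
        """Forwarding: decide to-where (port and queue) and enqueue."""
        port, q = self.fib[pkt.headers["dst"]]
        queues = self.ports.setdefault(port, [deque(), deque()])
        queues[q].append(pkt)

    def schedule(self, port) -> Optional[Packet]:
        """Scheduling: decide which packet is transmitted next
        (here, simple strict priority over the port's queues)."""
        for queue in self.ports.get(port, []):
            if queue:
                return queue.popleft()
        return None
<CODE ENDS>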
Segment routing (as well as connection-oriented mechanisms) slightly
simplifies the meaning of forwarding to deciding "where" to send the
incoming packet, while TSN slightly simplifies the meaning of
scheduling to deciding "when" to send the outgoing packet.

Routers optionally perform a third user plane operation, namely per
output port and/or per flow traffic conditioning.  By conditioning we
mean policing (discarding packets based on a token bucket algorithm),
shaping (delaying packets), (W)RED, etc.  Since we will only be
interested in per-packet per-router behavior, we will neglect
conditioning, which is either per router (not distinguishing between
packets) or per flow (the same for all routers along the path).

As aforementioned, forwarding decisions always select an output port,
but when there are QoS criteria they additionally decide on an output
queue belonging to that port.  The use of multiple queues per output
port is to aid the scheduling, which then becomes a matter of
selecting an output queue and always taking the packet at the head of
the queue (the packet that has waited the longest).  For example, the
simplest nontrivial scheduling algorithm is "strict priority".  In
strict priority, packets are assigned to queues according to their
priority (as indicated by the Priority Code Point or DiffServ Code
Point field).  The strict priority scheduler always first checks the
queue with the highest priority; if there is a packet waiting there
it is selected for transmission, if not the next highest priority
queue is examined, and so on.  Undesirably, strict priority may never
reach packets in low priority queues (Best Effort packets), so
alternative algorithms, e.g., Weighted Fair Queueing, are used to
select from priority queues more fairly.

TSN is required for networks transporting time sensitive traffic,
that is, packets that are required to be delivered to their final
destination by a given time.  In the following we will call the time
a packet is sent by the end user application (or the time it enters a
specific network) the "birth time", the required delivery time to the
end-user application (or the time it exits a specific network) the
"final deadline", and the difference between these two times (i.e.,
the maximum allowed end-to-end propagation time through the network)
the "delay budget".

Unlike strict priority or WFQ algorithms, TSN scheduling algorithms
may directly utilize the current time of day.  For example, in the
TSN scheduling algorithm known as time-aware scheduling (gating),
each output queue is controlled by a timed gate.  At every time only
certain output queues have their gates "open" and can have their
packets scheduled, while packets are not scheduled from queues with
"closed" gates.  By appropriately timing the opening and closing of
gates of all routers throughout the network, packets in time
sensitive flows may be able to traverse their end-to-end path without
ever needlessly waiting in output queues.  In fact, time-aware gating
may be able to provide a guaranteed upper bound for end-to-end delay.

However, time-aware scheduling suffers from two major disadvantages.
First, opening the gates of only certain queues for a given time
duration results in this time duration being reserved even if there
are very few or even no packets in the corresponding queues.
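The following Python sketch, offered purely as a non-normative
illustration, captures the flavor of such gating.  The gate control
list below is an illustrative assumption (it is not the IEEE 802.1Qbv
state machine); it shows how gate-open time is dedicated to
particular queues whether or not packets are actually waiting in
them.

<CODE BEGINS>
# A cyclic gate control list: (duration in microseconds, open queues).
GATE_CONTROL_LIST = [
    (50, {7}),        # only the time sensitive queue may transmit
    (450, {0, 1}),    # best effort queues may transmit
]
CYCLE_TIME = sum(duration for duration, _ in GATE_CONTROL_LIST)

def open_queues(now_us: int) -> set:
    """Return the set of queue indices whose gates are open at now_us."""
    offset = now_us % CYCLE_TIME
    for duration, queues in GATE_CONTROL_LIST:
        if offset < duration:
            return queues
        offset -= duration
    return set()

def next_packet(queues: dict, now_us: int):
    """Transmit only from queues whose gates are open; queues maps a
    queue index to a collections.deque.  The gate-open time is consumed
    even when the corresponding queues are empty."""
    for q in sorted(open_queues(now_us), reverse=True):
        if queues.get(q):
            return queues[q].popleft()
    return None
<CODE ENDS>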
Such reservation of transmission opportunities, regardless of the
actual traffic, is precisely the undesirable characteristic of Time
Division Multiplexing networks that led to their replacement by
Packet Switched Networks.  Minimizing the reserved time durations
increases efficiency, but at the cost of obliging a time sensitive
packet that just missed its gate to wait until the next gate opening,
endangering its conformance to the delay budget.

In order to avoid such problems, one needs to know a priori the birth
times of all time sensitive packets, the lengths of all links between
routers, and the loading of all routers.  Based on this input one can
calculate optimal gating schedules for all routers in the network and
distribute this information to all the routers.  This calculation is
computationally expensive, and updating all the routers imposes a
significant communication burden.  Moreover, admitting a new time-
sensitive flow requires recalculation of all the gating schedules and
updating all the routers.  This recalculation and communication load
is practical only for small networks and a relatively small number of
flows.

3.  Stack-based Methods for Latency Control

One can envision mechanisms for reducing end-to-end propagation
latency in a network with time-synchronized routers that do not
suffer from the disadvantages of time sensitive scheduling.  One such
mechanism would be to insert the packet's birth time (the time it was
created by the end-user application or the time it entered the
network) into the packet's headers.  Each router along the way could
use this birth time by prioritizing packets with earlier birth times,
a policy known as Longest in System (LIS).  These times are directly
comparable, due to our assuming the synchronization of all routers in
the network.  This mechanism may indeed lower the propagation delay,
but at each router the decision is sub-optimal, since a packet that
has been in the network longer but that has a longer application
delay budget will be sent before a later packet with a tighter delay
budget.

An improved mechanism would insert into the packet headers the
desired final deadline, i.e., the birth time plus the delay budget.
Each router along the way could use this final deadline by
prioritizing packets with earlier deadlines, a policy known as
Earliest Deadline First (EDF).  This mechanism may indeed lower the
propagation delay, but at each router the decision is sub-optimal,
since a packet that has been in the network longer but is close to
its destination will be transmitted before a later packet which still
has a long way to travel.

A better solution to the problem involves precalculating individual
"local" deadlines for each router, with each router prioritizing
packets according to its own local deadline.  As an example, a packet
sent at time 10:11:12.000 with a delay budget of 32 milliseconds
(i.e., a final deadline of 10:11:12.032) that needs to traverse three
routers might have in its packet headers three local deadlines,
10:11:12.010, 10:11:12.020, and 10:11:12.030.  The first router
employs EDF using the first local deadline, the second router
similarly uses the second local deadline, and the last router uses
the final local deadline.

The most efficient data structure for inserting local deadlines into
the headers is a "stack", similar to that used in Segment Routing to
carry forwarding instructions.
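Before turning to the stack encoding, the three selection policies
just described can be summarized, purely as a non-normative sketch,
by the key on which each one sorts the backlog of waiting time
sensitive packets; the field names are illustrative assumptions.

<CODE BEGINS>
from dataclasses import dataclass
from typing import List

@dataclass
class TSPacket:
    birth_time: float       # time the packet entered the network
    final_deadline: float   # birth time plus delay budget
    local_deadline: float   # precomputed deadline for this router

def pick_lis(backlog: List[TSPacket]) -> TSPacket:
    """Longest in System: the oldest packet is sent first."""
    return min(backlog, key=lambda p: p.birth_time)

def pick_edf(backlog: List[TSPacket]) -> TSPacket:
    """Earliest Deadline First on the end-to-end (final) deadline."""
    return min(backlog, key=lambda p: p.final_deadline)

def pick_local_edf(backlog: List[TSPacket]) -> TSPacket:
    """EDF on the per-router local deadline carried in the stack."""
    return min(backlog, key=lambda p: p.local_deadline)
<CODE ENDS>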
The number of deadline values in such a stack equals the number of
routers the packet needs to traverse in the network, and each
deadline value corresponds to a specific router.  The Top-of-Stack
(ToS) corresponds to the first router's deadline, while the
Bottom-of-Stack (BoS) refers to the last router's.  All local
deadlines in the stack are later than or equal to the current time
(upon which all routers agree), and times closer to the ToS are
always earlier than or equal to times closer to the BoS.

The stack may be dynamic (as is the forwarding instruction stack in
SR-MPLS) or static with an index (as is the forwarding instruction
stack in SRv6).

For private networks it is possible for the stack to be inserted by
the user equipment that is the source of the packet, in which case
the top of stack local deadline corresponds to the first router to be
encountered by the packet.  However, in such a case the user
equipment must also be time synchronized for its time values to be
directly comparable.  In an improved strategy the stack is inserted
into the packet by the ingress router, and thus its deadlines are
consistent with time in the network.  In such a case the first
deadline will not explicitly appear in the stack, and the initial ToS
corresponds to the second router in the network to be traversed by
the packet.  In either case each router in turn pops from the stack
the ToS local deadline and uses that local deadline in its scheduling
(e.g., employing EDF).

Since the ingress router inserts the deadline stack into the packet
headers, no other router needs to be aware of the requirements of the
time sensitive flows.  Hence admitting a new flow only requires
updating the information base of the ingress router.  In an efficient
implementation the ingress router's information base holds a deadline
offset vector for each time sensitive flow.  Upon receipt of a packet
from user equipment, the ingress router first determines whether the
packet belongs to a time sensitive flow.  If so, it adds the current
time to the deadline offset vector belonging to the flow and inserts
the resulting deadlines as a stack into the packet headers.

An explicit example is depicted in Figure 1.  Here packets of a
specific time sensitive flow are required to be received by the
remote user equipment within 200 microseconds of being transmitted by
the source user equipment.  The packets traverse a wireless link with
a delay of 2 microseconds to reach router R1 (the ingress router).
They then travel to router R2 over an optical fiber, experiencing a
propagation delay of 18 microseconds, from there to router R3,
experiencing an additional 38 microseconds of fiber delay, and from
there to router R4 (the egress router), experiencing 16 microseconds
of fiber delay.  Finally, they travel over a final wireless link,
again taking 2 microseconds.

    +----+  2  +----+  18   +----+  38   +----+  16   +----+  2  +----+
    | UE |-----| R1 |-------| R2 |-------| R3 |-------| R4 |-----| UE |
    +----+     +----+       +----+       +----+       +----+     +----+

              Figure 1: Example with propagation latencies

We conclude that the total constant physical propagation time is
2+18+38+16+2=76 microseconds.
Moreover, assume that we know that in each router there is an
additional constant time of 1 microsecond to receive the packet at
the line rate and 5 microseconds to process the packet, that is, 6
microseconds per router or 24 microseconds for all four routers.  We
have thus reached the conclusion that the minimal time to traverse
the network is 76+24=100 microseconds.

Since our delay budget is 200 microseconds, we have spare time of
200-100=100 microseconds for the packets to wait in output queues.
If we have no further information, we can divide this spare 100
microseconds equally among the 4 routers, i.e., 25 microseconds per
router.  Thus, the packet arrives at the first router after 2
microseconds, is received and processed after 2+6=8 microseconds, and
is assigned a local deadline to exit the first router of 8+25=33
microseconds.  The worst case times of arrival and transmission at
each point along the path are depicted in Figure 2.  Note that in
general it may be optimal to divide the spare time in an unequal
fashion.

    +----+  2  +----+  18   +----+  38   +----+  16   +----+  2  +----+
    | UE |-----| R1 |-------| R2 |-------| R3 |-------| R4 |-----| UE |
    +----+     +----+       +----+       +----+       +----+     +----+
         |     |    |       |    |       |    |       |    |     |
         |     |    |       |    |       |    |       |    |     |
         0     2    33      51   82      120  151     167  198   200

                 Figure 2: Example with worst case times

Assuming that the packet left router R1 the full 33 microseconds
after its transmission, it will arrive at router R2 after an
additional 18 microseconds, that is, after 51 microseconds.  After
the mandatory 6 microseconds of reception and processing and the 25
microseconds allocated for queueing, we reach the local deadline to
exit router R2 of 82 microseconds.  Similarly, the local deadline to
exit router R3 is 151 microseconds, and the deadline to exit router
R4 is 198 microseconds.  After the final 2 microseconds consumed by
the wireless link, the packet will arrive at its destination after
200 microseconds, as required.

Based on these worst case times the ingress router can now build the
deadline offset vector (33, 82, 151, 198) referenced to the time the
packet left the source user equipment, or (31, 80, 149, 196)
referenced to the time the packet arrives at the ingress router.

Now assume that a packet was transmitted at time T and hence arrives
at the ingress router at time T + 2 microseconds.  The ingress router
R1, observing the deadline offset vector referenced to this time,
knows that the packet must be released no more than 31 microseconds
later, i.e., by T + 33 microseconds.  It furthermore inserts a local
deadline stack [T+82, T+151, T+198] into the packet headers.

The second router R2 receives the packet with the local deadline
stack and pops the ToS, revealing that it must ensure that the packet
exits by T + 82 microseconds.  It properly prioritizes and sends the
packet with the new stack [T+151, T+198].  Router R3 pops deadline
T+151, and sends the packet with a local deadline stack containing a
single entry [T+198].  The final router pops this final local
deadline and ensures that the packet is transmitted before that time.
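As a non-normative illustration, the following Python sketch
reproduces this computation under the assumptions stated above (a
constant per-router reception and processing time and an even
division of the spare time); the function and variable names are
illustrative only.

<CODE BEGINS>
LINK_DELAYS = [2, 18, 38, 16, 2]   # us: UE-R1, R1-R2, R2-R3, R3-R4, R4-UE
PER_ROUTER  = 6                    # us: 1 reception + 5 processing
N_ROUTERS   = 4
BUDGET      = 200                  # us, end to end

def deadline_offsets():
    """Worst case exit time of each router, referenced to the birth time."""
    constant = sum(LINK_DELAYS) + N_ROUTERS * PER_ROUTER   # 76 + 24 = 100
    spare = (BUDGET - constant) / N_ROUTERS                # 100 / 4 = 25
    offsets, t = [], 0.0
    for i in range(N_ROUTERS):
        t += LINK_DELAYS[i] + PER_ROUTER + spare           # arrive, process, wait
        offsets.append(t)
    return offsets                                         # [33, 82, 151, 198]

offsets = deadline_offsets()
# The same vector referenced to the packet's arrival at the ingress router:
ingress_offsets = [o - LINK_DELAYS[0] for o in offsets]    # [31, 80, 149, 196]

def build_stack(arrival_time):
    """Deadline stack pushed by ingress router R1.  R1's own deadline
    (the first offset) is used locally and is not pushed."""
    return [arrival_time + o for o in ingress_offsets[1:]]

T = 0.0                                   # packet transmitted by the UE at time T
stack = build_stack(T + LINK_DELAYS[0])   # [T+82, T+151, T+198]
<CODE ENDS>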
The local deadline stacks are depicted in Figure 3.

    +----+  2  +----+  18   +----+  38   +----+  16   +----+  2  +----+
    | UE |-----| R1 |-------| R2 |-------| R3 |-------| R4 |-----| UE |
    +----+     +----+   |   +----+   |   +----+   |   +----+     +----+
         |     |    |   |   |    |   |   |    |   |   |    |     |
         |     |    |   |   |    |   |   |    |   |   |    |     |
         0     2    33  |   51   82  |   120  151 |   167  198   200
                        |            |            |
                        V            V            V
                      +---+        +---+        +---+
                      | 82|        |151|        |198|
                      |---|        |---|        +---+
                      |151|        |198|
                      |---|        +---+
                      |198|
                      +---+

              Figure 3: Example with local deadline stacks

The precise mechanism just described is by no means the only way to
compute local deadlines.  Furthermore, combining time-aware
scheduling at the ingress router only with EDF at all the other
routers can provide "green waves" with provable upper bounds to
delay.  However, optimizing such a scheme at scale is a challenge.  A
randomized algorithm for computation of the deadline offset vector is
described in [AndrewsZhang].

4.  The Time Sensitive Router

While a stack is the ideal data structure to hold the local deadlines
in the packet, different data structures are used to hold the time
sensitive packets (or their descriptors) in the routers.  The
standard data structure used in routers is the queue which, being a
first-in first-out memory, is suitable for a policy of
first-to-arrive first-to-exit, and not for EDF or other stack-based
time sensitive mechanisms.  More suitable data structures are sorted
lists, search trees, and priority heaps.  While such data structures
are novel in this context, efficient hardware implementations exist.

If all the time sensitive flows are of the same priority, then a
single such data structure may be used for all time sensitive flows.
If there are time sensitive flows of differing priorities, then a
separate such data structure is required for each level of priority
corresponding to a time sensitive flow, while the conventional queue
data structure may be used for priority levels corresponding to flows
that are not time sensitive.

For example, assume two different priorities of time sensitive flows
and a lower priority for Best Effort traffic that is not time
sensitive.  If applying strict priority, the scheduler would first
check whether the data structure for the highest priority contains
any packets.  If yes, it transmits the packet with the earliest local
deadline.  If not, it checks the data structure for the second
priority.  If it contains any packets it transmits the packet with
the earliest deadline.  If not, it checks the Best Effort queue.  If
this queue is nonempty it transmits the next packet in the queue,
i.e., the packet that has waited in this queue the longest.

Separate prioritization and EDF is not necessarily the optimal
strategy.  An alternative (which we call Liberal EDF, or LEDF) would
be for the scheduler to define a worst case (i.e., maximal) packet
transmission time MAXTT (for example, the time taken for a 1500 byte
packet to be transmitted at the output port's line rate).  Instead of
checking whether the data structure for the highest priority contains
any packets at all, LEDF checks whether its earliest packet's local
deadline is earlier than MAXTT from the current time.  If it is, it
is transmitted; if it is not, the next priority is checked, knowing
that even were a maximal size packet to be transmitted, the scheduler
would still be able to return to the higher priority packet before
its local deadline.
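As a non-normative illustration, the following Python sketch shows
one possible realization of the LEDF policy, using a priority heap
per time sensitive priority level and a FIFO queue for Best Effort
traffic.  The class layout, the MAXTT handling, and the
work-conserving fallback are illustrative assumptions, not part of
any specification.

<CODE BEGINS>
import heapq
import itertools
from collections import deque

class LEDFScheduler:
    def __init__(self, n_ts_priorities: int, maxtt_us: float):
        # heaps[0] holds the highest time sensitive priority; entries are
        # (local deadline, sequence number, packet), the sequence number
        # guaranteeing that ties never compare packets directly
        self.heaps = [[] for _ in range(n_ts_priorities)]
        self.best_effort = deque()
        self.maxtt = maxtt_us     # e.g., 1500 bytes at the port's line rate
        self._seq = itertools.count()

    def enqueue_ts(self, priority: int, local_deadline: float, packet) -> None:
        heapq.heappush(self.heaps[priority],
                       (local_deadline, next(self._seq), packet))

    def enqueue_be(self, packet) -> None:
        self.best_effort.append(packet)

    def next_packet(self, now_us: float):
        """LEDF cascade: a time sensitive level is served only if its
        earliest local deadline could be missed while a maximum size
        packet of a lower level is on the wire."""
        for heap in self.heaps:                      # highest priority first
            if heap and heap[0][0] <= now_us + self.maxtt:
                return heapq.heappop(heap)[2]
        if self.best_effort:                         # nothing urgent
            return self.best_effort.popleft()
        for heap in self.heaps:                      # work-conserving fallback
            if heap:                                 # (an implementation choice)
                return heapq.heappop(heap)[2]
        return None
<CODE ENDS>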
5.  Segment Routed Time Sensitive Networking

Since Segment Routing and the TSN mechanism just described both
utilize stack data structures, it is advantageous to combine their
information into a single unified SRTSN stack.  Each entry in this
stack contains two subentries, a forwarding instruction (e.g., the
address of the next router or the label specifying the next link) and
a scheduling instruction (the local deadline).

Each SRTSN stack entry fully prescribes the forwarding and scheduling
behavior of the corresponding router, both to-where and by-when the
packet should be sent.  The insertion of a stack into packets thus
fully implements network programming for time sensitive flows.

For example, Figure 4 depicts the previous example but with the
unified SRTSN stacks.  Ingress router R1 inserts an SRTSN stack with
three entries into the packet received.  In this example the
forwarding sub-entry contains the identifier or address of the next
router, except for the Bottom of Stack entry, which contains a
special BoS code (e.g., identifier zero).  The ToS entry thus
contains the address of router R3 and the time by which the packet
must exit router R2, namely T + 82 microseconds.  Router R2 pops this
ToS, leaving an SRTSN stack with 2 entries.  Router R3 pops the new
ToS, instructing it to forward the packet to router R4 by time T +
151 microseconds, leaving a stack with a single entry.  Router R4
pops the ToS and sees that it has reached bottom of stack.  It then
forwards the packet according to the usual rules of the network (for
example, according to the IP address in the IP header) by local
deadline T + 198 microseconds.

    +----+  2  +----+  18   +----+  38   +----+  16   +----+  2  +----+
    | UE |-----| R1 |-------| R2 |-------| R3 |-------| R4 |-----| UE |
    +----+     +----+   |   +----+   |   +----+   |   +----+     +----+
         |     |    |   |   |    |   |   |    |   |   |    |     |
         |     |    |   |   |    |   |   |    |   |   |    |     |
         0     2    33  |   51   82  |   120  151 |   167  198   200
                        |            |            |
                        V            V            V
                +-------+    +-------+    +-------+
                |R3; 82 |    |R4; 151|    |BoS;198|
                |-------|    |-------|    +-------+
                |R4; 151|    |BoS;198|
                |-------|    +-------+
                |BoS;198|
                +-------+

             Figure 4: Example with combined SRTSN stacks

6.  Stack Entry Format

A number of different time formats are in common use in networking
applications and can be used to encode the local deadlines.  The
longest commonly utilized format is the 80-bit PTP-80 timestamp
defined in the IEEE 1588v2 Precision Time Protocol [IEEE1588].  There
are two common 64-bit time representations: the NTP-64 timestamp
defined in [RFC5905] (32 bits for whole seconds and 32 bits for
fractional seconds); and the PTP-64 timestamp (32 bits for whole
seconds and 32 bits for nanoseconds).  Finally, there is the NTP-32
timestamp (16 bits of whole seconds and 16 bits of fractional
seconds), which is often insufficient due to its low resolution (15
microseconds).

However, we needn't be constrained by these common formats, since our
wraparound requirements are minimal.  As long as we have no ambiguity
in times during the flight of a packet, which is usually much less
than a second, the timestamp is acceptable.  Thus, we can readily use
a nonstandard 32-bit timestamp format with, say, 12 bits of whole
seconds (wraparound over 1 hour) and 20 bits for microseconds, or,
say, 8 bits for whole seconds (wraparound over 4 minutes) and 24 bits
of tenths of microseconds.
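As a non-normative illustration, the following Python sketch encodes
and decodes the first nonstandard format suggested above (12 bits of
whole seconds and 20 bits of microseconds); the way the wraparound is
resolved against the receiver's clock is an illustrative assumption.

<CODE BEGINS>
SECONDS_BITS = 12
MICRO_BITS   = 20
SECONDS_MOD  = 1 << SECONDS_BITS          # 4096 s, roughly one hour

def encode_deadline(t_seconds: float) -> int:
    """Pack an absolute time (in seconds) into the 32-bit format."""
    total_us = int(round(t_seconds * 1_000_000))
    whole = (total_us // 1_000_000) % SECONDS_MOD
    micro = total_us % 1_000_000          # always fits in 20 bits
    return (whole << MICRO_BITS) | micro

def decode_deadline(word: int, now_seconds: float) -> float:
    """Unpack a 32-bit deadline, resolving the wraparound against the
    receiver's current time; valid while the flight time of the packet
    is well under half the wraparound period."""
    whole = (word >> MICRO_BITS) & (SECONDS_MOD - 1)
    micro = word & ((1 << MICRO_BITS) - 1)
    epoch = int(now_seconds) - (int(now_seconds) % SECONDS_MOD)
    t = epoch + whole + micro / 1_000_000
    if t < now_seconds - SECONDS_MOD / 2:
        t += SECONDS_MOD
    elif t > now_seconds + SECONDS_MOD / 2:
        t -= SECONDS_MOD
    return t
<CODE ENDS>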
For the forwarding sub-entry we could adopt, as in SR-MPLS, standard
32-bit MPLS label stack entries (which contain a 20-bit label and a
BoS bit), and thus SRTSN stack entries could be 64 bits in size,
comprising a 32-bit MPLS label and the aforementioned nonstandard
32-bit timestamp.  Alternatively, an SRTSN stack entry could be 96
bits in length, comprising a 32-bit MPLS label and either of the
standardized 64-bit timestamps.

For IPv4 networks one could employ a 32-bit IPv4 address in place of
the MPLS label.  Thus, using the nonstandard 32-bit timestamp, the
entire stack entry could be 64 bits.  For dynamic stack
implementations a BoS bit would have to be included.

SRv6 uses 128-bit IPv6 addresses (in addition to a 64-bit header and
possibly options), and so 160-bit or 192-bit unified entries are
directly derivable.  However, when the routers involved are in the
same network, address suffixes suffice to uniquely determine the next
router.

7.  Control Plane

In the above discussion we assumed that the ingress router knows the
deadline offset vector for each time sensitive flow.  This vector may
be calculated by a centralized management system and sent to the
ingress router, or may be calculated by the ingress router itself.

In the former case there is a central SRTSN orchestrator, which may
be based on a Network Management System, or on an SDN controller, or
on a Path Computation Element server.  The SRTSN orchestrator needs
to know the propagation delays of all the links in the network, which
may be determined using time domain reflectometry, or via one-way
delay measurement OAM, or retrieved from a network planning system.
The orchestrator may additionally know basic parameters of the
routers, including minimal residence time, data rate of the ports,
etc.  When a time sensitive path needs to be set up, the SRTSN
orchestrator is given the source and destination and the delay
budget.  It first determines feasibility by finding the end-to-end
delay of the shortest path (shortest being defined in terms of
latency, not hop count).  It then selects a path (usually, but not
necessarily, the shortest one) and calculates the deadline offset
vector.  The forwarding instructions and offset vector (as well as
any other required flow-based information, such as data rate or drop
precedence) are then sent to the ingress router.  As in segment
routing, no other router in the network needs to be informed.

In the latter case the ingress router is given the destination and
the delay budget.  It sends a setup message to the destination as in
RSVP-TE; however, in this case arrival and departure timestamps are
recorded for every router along the way.  The egress router returns
the router addresses and timestamps.  This process may be repeated
several times and the minimum taken in order to approximate the link
propagation times.  Assuming that the path's delay does not exceed
the delay budget, the path and deadline offset vector may then be
determined.

The method of [AndrewsZhang] uses randomization in order to avoid the
need for centralized coordination of flows entering the network at
different ingress routers.  However, this advantage comes at the
expense of much higher achievable delay bounds.
8.  Security Considerations

SRTSN concentrates the entire network programming semantics into a
single stack, and thus tampering with this stack would have
devastating consequences.  Since each stack entry must be readable by
the corresponding router, encrypting the stack would necessitate key
distribution between the ingress router and every router along the
path.

A simpler mechanism would be for the ingress router to sign the stack
with its private key, the corresponding public key being known to all
routers in the network, and to append this signature to the stack.
If the signature is not present or is incorrect, the packet should be
discarded.

9.  IANA Considerations

This document requires no IANA actions.

10.  Informative References

[AndrewsZhang]
           Andrews, M. and L. Zhang, "Minimizing end-to-end delay in
           high-speed networks with a simple coordinated schedule",
           Journal of Algorithms, Vol. 52, pp. 57-81, 2003.

[IEEE1588] IEEE, "Standard for a Precision Clock Synchronization
           Protocol for Networked Measurement and Control Systems",
           IEEE 1588-2008, DOI 10.1109/IEEESTD.2008.4579760, 2008.

[RFC5905]  Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch,
           "Network Time Protocol Version 4: Protocol and Algorithms
           Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010.

[RFC8557]  Finn, N. and P. Thubert, "Deterministic Networking Problem
           Statement", RFC 8557, DOI 10.17487/RFC8557, May 2019.

Author's Address

Yaakov (J) Stein
RAD

Email: yaakov_s@rad.com