Network Working Group                                       L. Qiang, Ed.
Internet-Draft                                                     B. Liu
Intended status: Informational                             T. Eckert, Ed.
Expires: March 31, 2019                                            Huawei
                                                                  L. Geng
                                                                       L.
                                                                     Wang
                                                             China Mobile
                                                       September 27, 2018


                   Large-Scale Deterministic Network
                draft-qiang-detnet-large-scale-detnet-02

Abstract

   This document presents the framework and key methods for Large-scale
   Deterministic Networks (LDN).  LDN achieves scalability in the
   number of supportable deterministic traffic flows via Scalable
   Deterministic Forwarding (SDF), which requires neither per-flow
   state in transit nodes nor precise time synchronization among nodes.
   It achieves Scalable Resource Reservation (SRR) by allowing
   reservation state to be decoupled from the forwarding plane nodes
   and by aggregating resource reservation status in time slots.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on March 31, 2019.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
     1.2.  Terminology & Abbreviations
   2.  Overview
     2.1.  Summary
     2.2.  Background
       2.2.1.  Deterministic End-to-End Latency
       2.2.2.  Hop-by-Hop Delay
       2.2.3.  Cyclic Forwarding
       2.2.4.  Co-Existence with Non-Deterministic Traffic
     2.3.  System Components
   3.  Scalable Deterministic Forwarding
     3.1.  Three Queues
     3.2.  Cycle Mapping
       3.2.1.  Cycle Identifier Carrying
   4.  Scalable Resource Reservation
   5.  Performance Analysis
     5.1.  Queueing Delay
     5.2.  Jitter
   6.  IANA Considerations
   7.  Security Considerations
   8.  Acknowledgements
   9.  Normative References
   Authors' Addresses

1.
 Introduction

   Deploying deterministic services over a large-scale network faces
   several technical challenges, such as:

   o  a massive number of deterministic flows versus per-flow operation
      and management;

   o  long link propagation delays that may introduce significant
      jitter;

   o  precise time synchronization that is hard to achieve among
      numerous devices.

   Motivated by these challenges, this document presents a Large-scale
   Deterministic Network (LDN) system, which consists of Scalable
   Deterministic Forwarding (SDF) in the forwarding plane and Scalable
   Resource Reservation (SRR) in the control plane.  SDF and SRR can be
   used independently.

   As [draft-ietf-detnet-problem-statement] indicates, deterministic
   forwarding can only apply to flows with well-defined traffic
   characteristics.  The traffic characteristics of DetNet flows are
   discussed in [draft-ietf-detnet-architecture]; they can be achieved
   through shaping at the Ingress node or by up-front commitment by the
   application.  This document assumes that DetNet flows follow such
   well-defined traffic patterns.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

1.2.  Terminology & Abbreviations

   This document uses the terminology defined in
   [draft-ietf-detnet-architecture].

   TSN:  Time-Sensitive Networking

   CQF:  Cyclic Queuing and Forwarding

   LDN:  Large-scale Deterministic Network

   SDF:  Scalable Deterministic Forwarding

   SRR:  Scalable Resource Reservation

   DSCP:  Differentiated Services Code Point

   EXP:  Experimental

   TC:  Traffic Class

   T:  the length of a cycle

   H:  the number of hops

   K:  the size of the aggregated resource reservation window

2.  Overview

2.1.
 Summary

   The Large-Scale Deterministic Network (LDN) solution consists of two
   parts: Scalable Deterministic Forwarding (SDF) as its forwarding
   plane and Scalable Resource Reservation (SRR) as its control plane.
   In SDF, nodes in the network are frequency synchronized, and each
   node forwards packets in a slotted fashion based on a cycle
   identifier carried in packets.  Ingress nodes or senders have a
   function called a gate to shape/condition traffic flows.  Except for
   this gate function, SDF has no awareness of individual flows.  SRR
   maintains resource reservation state for deterministic flows:
   Ingress nodes maintain per-flow state, and core nodes aggregate
   per-flow state in time slots.

2.2.  Background

   This section motivates the design choices taken by the proposed
   solution and gives the necessary background for forwarding plane
   designs based on deterministic delay.

2.2.1.  Deterministic End-to-End Latency

   Bounded delay is delay that has a deterministic upper and lower
   bound.

   The delay for packets that need to be forwarded with deterministic
   delay needs to be deterministic on every hop.  If any hop in the
   network introduces non-deterministic delay, then the network as a
   whole can no longer deliver a deterministic delay service.

2.2.2.  Hop-by-Hop Delay

   Consider a simple example (without a picture), where node N has 10
   receiving interfaces and one outgoing interface I, all of the same
   speed.  There are 10 deterministic traffic flows, each consuming 5%
   of a link's bandwidth, one from each receiving interface to the
   outgoing interface.

   Node N sends 'only' 50% deterministic traffic to interface I, so
   there is no ongoing congestion, but there is added delay.  If the
   arrival time of packets for these 10 flows into N is uncontrolled,
   then the worst case is for them to all arrive at the same time.
   One packet has to wait in N until the other 9 packets are sent out
   on I, resulting in a worst-case deterministic delay of 9 packet
   serialization times.  On the next-hop node N2 downstream from N,
   this problem can become worse.  Assume N2 has 10 upstream nodes like
   N; the worst-case simultaneous burst is now 100 packets, or a
   99-packet serialization delay as the worst-case upper-bounded delay
   incurred on this hop.

   To avoid the problem of a high upper bound on end-to-end delay,
   traffic needs to be conditioned/interleaved on every hop.  This
   allows the creation of solutions where the per-hop delay is bounded
   purely by the physics of the forwarding plane across the node,
   rather than by the accumulated characteristics of prior-hop traffic
   profiles.

2.2.3.  Cyclic Forwarding

   The common approach to solve this problem is a cyclic hop-by-hop
   forwarding mechanism.  Assume packets are forwarded from N1 via N2
   to N3 as shown in Figure 1.  When N1 sends a packet P to interface
   I1 within a cycle X, the forwarding mechanism must guarantee that N2
   will forward P via I2 to N3 in a cycle Y.

   The cycle of a packet can either be deduced by a receiving node from
   the exact time it was received, as is done in SDN/TDMA systems,
   and/or it can be indicated in the packet.  The solution in this
   document relies on such markings because they reduce the need for
   synchronous hop-by-hop transmission timing of packets.

   In a packet-marking based slotted forwarding model, node N1 needs to
   send packets for cycle X before the latest possible time that will
   allow N2 to further forward them in cycle Y to N3.  Because of the
   marking, N1 could even transmit packets for cycle X before all
   packets for the previous cycle (X-1) have been sent, reducing the
   synchronization requirements across nodes.
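   As a quick check of the fan-in example in Section 2.2.2, the sketch
   below computes the worst-case serialization wait.  The link rate and
   packet size are illustrative assumptions, not values taken from this
   document:

```python
# Worst-case serialization wait for uncontrolled fan-in (Section 2.2.2).
# Link rate and packet size below are illustrative assumptions.
LINK_BPS = 1_000_000_000        # assumed 1 Gbit/s interface
PKT_BITS = 1500 * 8             # assumed 1500-byte packets

def worst_case_wait(simultaneous_packets: int) -> float:
    """Seconds one packet may wait while the others are serialized."""
    return (simultaneous_packets - 1) * PKT_BITS / LINK_BPS

# 10 flows arriving at N at once: a packet may wait 9 serialization times.
delay_at_n = worst_case_wait(10)
# N2 fed by 10 nodes like N: up to 100 simultaneous packets, 99 waits.
delay_at_n2 = worst_case_wait(10 * 10)
```

   The point of the calculation is that the bound grows with the fan-in
   of prior hops, which is exactly what per-hop conditioning avoids.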
        P sent in          P sent in           P sent in
      cycle(N1,I1,X)     cycle(N2,I2,Y)      cycle(N3,I3,Z)

      +--------+         +--------+          +--------+
      | Node N1|-------->| Node N2|--------->| Node N3|------>
      +--------+I1       +--------+I2        +--------+I3

                     Figure 1: Cyclic Forwarding

2.2.4.  Co-Existence with Non-Deterministic Traffic

   Traffic with deterministic delay requirements can co-exist with
   traffic that only requires non-deterministic delay by using packet
   scheduling in which the delay incurred by non-deterministic packets
   is deterministic (and low) for the deterministic traffic.  If LDN
   SDF is deployed together with such non-deterministic delay traffic,
   then such a scheme must be supported by the forwarding plane.  A
   simple approach to bound the delay incurred on the sending interface
   of a deterministic node due to non-deterministic traffic is to serve
   deterministic traffic via a strict, highest-priority queue and to
   include the worst-case delay of a currently serialized
   non-deterministic packet in the deterministic delay budget of the
   node.  Similar considerations apply to the internal processing
   delays in a node.

2.3.  System Components

   Figure 2 shows an overview of the components considered in this
   document's system and how they interact.

   A network topology of Ingress, Core, and Egress nodes supports a
   method of cyclic forwarding to enable Scalable Deterministic
   Forwarding (SDF).  This forwarding requires no per-flow state on the
   nodes.

   Ingress edge nodes may support the (G)ate function to shape traffic
   from sources into the desired traffic characteristics, unless the
   source itself has such a function.  Per-flow state is required on
   the ingress edge node.

   Scalable Resource Reservation (SRR) serves as the control plane.  It
   records the resources reserved for deterministic flows.  Per-flow
   state is maintained on the ingress edge node, and aggregated state
   is maintained on core nodes.
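   The priority-queue approach of Section 2.2.4 amounts to a one-line
   budget calculation.  The sketch below illustrates it; the link rate
   and the maximum non-deterministic packet size are assumptions for
   the example, not values from this document:

```python
# Per-hop delay budget with co-existing non-deterministic traffic
# (Section 2.2.4).  Link rate and MTU below are illustrative
# assumptions.
LINK_BPS = 1_000_000_000        # assumed 1 Gbit/s interface
NONDET_MTU_BITS = 9000 * 8      # assumed largest non-det packet (jumbo)

def det_delay_budget(base_budget_s: float) -> float:
    """Add the worst case of one non-deterministic packet already
    being serialized when a deterministic packet reaches the head of
    the strict highest-priority queue."""
    return base_budget_s + NONDET_MTU_BITS / LINK_BPS
```

   Because the added term is a constant of the link, the budget stays
   deterministic even though the background traffic is not.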
   Control           per-flow          time-based aggregated
   Plane: SRR        status            status

    /--\    +--+        +--+      +--+        +--+    /--\
   | (G)+---+GS+--------+ S+------+ S+--------+ S+---+    |
    \--/    +--+        +--+      +--+        +--+    \--/

   Sender  Ingress      Core      Core      Egress   Receiver
          Edge Node     Node      Node     Edge Node

   Forwarding        high link delay propagation tolerant
   Plane: SDF        cycle-based forwarding

                     Figure 2: System Overview

3.  Scalable Deterministic Forwarding

   DetNet aims at providing deterministic service over large-scale
   networks.  In such large-scale networks, it is difficult to achieve
   precise time synchronization among numerous devices.  To reduce the
   requirements, the forwarding mechanism described in this document
   assumes only frequency synchronization, not time synchronization,
   across nodes: nodes maintain the same clock frequency 1/T, but need
   not share the same time, as shown in Figure 3.

           <-----T----->                      <-----T----->
          |           |           |          |           |           |
   Node A +-----------+-----------+   Node A +-----------+-----------+
          T0                                 T0

          |           |           |              |           |       |
   Node B +-----------+-----------+   Node B ----+-----------+-------+
          T0                                     T0

     (i) time synchronization       (ii) frequency synchronization

   T: length of a cycle
   T0: timestamp

       Figure 3: Time Synchronization & Frequency Synchronization

   IEEE 802.1 CQF is an efficient forwarding mechanism in TSN that
   guarantees bounded end-to-end latency.  However, CQF is designed for
   limited-scale networks: time synchronization is required, and the
   link propagation delay must be smaller than one cycle length T.  To
   support large-scale network deployments, the proposed Scalable
   Deterministic Forwarding (SDF) requires only frequency
   synchronization and allows the link propagation delay to exceed T.
   Apart from these two points, CQF and the asynchronous forwarding of
   SDF are very similar.

   Figure 4 compares CQF and SDF through an example.  Suppose Node A is
   the upstream node of Node B.
   In CQF, packets sent from Node A at cycle x will be received by Node
   B within the same cycle and then be sent further to the downstream
   node by Node B at cycle x+1.  In SDF, due to long link propagation
   delay and mere frequency synchronization, Node B will receive
   packets from Node A at a different cycle, denoted by y, and Node B
   replaces the cycle identifier carried in those packets with y+1,
   then sends them out at cycle y+1.  This cycle mapping (e.g., x -->
   y+1) can be realized as an adjustment value, and it exists between
   any pair of neighboring nodes.  With this mapping, the receiving
   node can easily figure out when the received packets should be sent
   out; the only requirement is to carry the cycle identifier of the
   sending node in the packets.

   In the right part of Figure 4, Node A sends a packet with cycle
   identifier x in the cycle x indicated by that identifier.  After the
   packet is received by Node B, the cycle identifier x in the packet
   is modified by the adjustment value to obtain a new cycle identifier
   y+1, which replaces the original identifier x.  Finally, the packet
   with cycle identifier y+1 is sent by Node B in the cycle y+1
   indicated by the new identifier.

          | cycle x  | cycle x+1 |          | cycle x  | cycle x+1 |
   Node A +----------+-----------+   Node A +----------+-----------+
           \                                 \
            \packet                           \packet
             \receiving                        \receiving
              \                                 \
          |    V     | cycle x+1 |          |    V     | cycle y+1 |
   Node B +----------+-----------+   Node B +----------+-----------+
            cycle x   \packet                 cycle y   \packet
                       \sending                          \sending
                        \                                 \
                         \                                 \
                          V                                 V

               (i) CQF                          (ii) SDF

                         Figure 4: CQF & SDF

3.1.  Three Queues

   In CQF, each port needs to maintain 2 (or 3) queues: one is used to
   buffer newly received packets, another is used to store the packets
   that are about to be sent out, and one more queue may be needed to
   avoid output starvation [scheduled-queues].
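   The cycle-identifier rewrite described above (x --> y+1, realized as
   a per-link adjustment value) can be sketched as follows.  This is a
   minimal illustration, not a specified implementation; the function
   and variable names are hypothetical, and the 2-bit identifier width
   is taken from Section 3.2.1:

```python
# Sketch of SDF cycle-identifier rewriting (Figure 4, Section 3.2).
# Names are illustrative; the draft does not fix an encoding.
NUM_CYCLE_IDS = 4  # a 2-bit cycle identifier distinguishes 4 cycles

def outgoing_cycle_id(rx_cycle_id: int, adjustment: int) -> int:
    """Apply the configured per-link adjustment (e.g., x --> y+1).

    The adjustment folds together the receive-cycle offset (y - x)
    and the +1 forwarding step, so the node needs no per-flow state
    and no common notion of absolute time with its neighbor."""
    return (rx_cycle_id + adjustment) % NUM_CYCLE_IDS
```

   For example, if the control plane has configured an adjustment of 2
   for a link, packets marked with cycle identifier 1 are re-marked
   with identifier 3 and released in that local sending cycle; the
   identifier wraps around modulo the identifier space.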
   In SDF, at least 3 cyclic queues are maintained for each port on a
   node; each cyclic queue corresponds to one cycle.

   As Figure 5 illustrates, a node may receive packets sent in two
   different cycles from a single upstream node due to the absence of
   time synchronization.  Following the cycle mapping (i.e., x -->
   y+1), packets that carry cycle identifier x should be sent out by
   Node B at cycle y+1, and packets that carry cycle identifier x+1
   should be sent out by Node B at cycle y+2.  Therefore, two queues
   are needed to store the newly received packets, plus one queue to
   store the packets being sent.  In order to absorb more link delay
   variation (such as on a radio interface), more queues may be
   necessary.

          | cycle x  | cycle x+1 |
   Node A +----------+-----------+
           \           \
            \           \packet
             \           \receiving
          |   V       |   V      |
   Node B +----------+-----------+
            cycle y     cycle y+1

                  Figure 5: Three Queues in SDF

3.2.  Cycle Mapping

   When a packet is received by Node B, several methods are possible
   for how the forwarding plane could operate.  In one method, Node B
   has a mapping determined by the control plane: packets from (the
   link from) Node A indicating cycle x are mapped into cycle y+1.
   This mapping is necessary because all the packets from one cycle of
   the sending node need to get into one cycle of the receiving node.
   This is called "configured cycle mapping".

   Instead of configuring an explicit cycle mapping such as cycle x ->
   cycle y+1, the receiving Node B could also have the intelligence in
   the forwarding plane to recognize the first packet from (the link
   from) Node A that carries a new cycle number x, and map this cycle x
   to a cycle y after the current cycle.  We call this option "self-
   synchronized cycle mapping".

3.2.1.
 Cycle Identifier Carrying

   In self-synchronized cycle mapping, a cycle identifier needs to be
   carried in the SDF packets so that an appropriate queue can be
   selected accordingly.  In the three-queue model of SDF, 2 bits are
   needed to identify the different cycles between a pair of
   neighboring nodes.  There are several ways to carry this 2-bit cycle
   identifier.  This document does not yet aim to propose one, but
   gives an (incomplete) list of ideas:

   o  DSCP of the IPv4 header

   o  Traffic Class of the IPv6 header

   o  TC of the MPLS header (formerly EXP)

   o  EtherType of the Ethernet header

   o  IPv6 Extension Header

   o  TLV of SRv6

   o  TC of the MPLS-SR header (formerly EXP)

   o  Three labels/adjacency SIDs for MPLS-SR

4.  Scalable Resource Reservation

   SDF must work with some resource reservation mechanism that can
   fulfill the role of Scalable Resource Reservation (SRR).  This
   resource reservation guarantees the necessary network resources
   (e.g., bandwidth) when deterministic flows are scheduled, including
   the slots through which the traffic travels hop-by-hop.  Network
   nodes have to record how many network resources are reserved for a
   specific flow from when it starts to when it ends.  Maintaining
   per-flow resource reservation state may be acceptable for edge
   nodes, but it is unacceptable for core nodes.
   [draft-ietf-detnet-architecture] points out that aggregation must be
   supported for scalability.

   SRR aggregates per-flow resource reservation states in each time
   slot following these steps:

   1.  Divide time into time slots.
       Then the per-flow resource reservation message can be expressed
       as <flow_identifier, reserved_resource, start_time_slot,
       num_time_slot>, where flow_identifier is the identifier of a
       deterministic flow, reserved_resource indicates how much
       resource is reserved, start_time_slot is the number of the time
       slot from which the resource reservation starts (e.g., the time
       slot in which a new resource reservation request is generated),
       and num_time_slot indicates for how many time slots the resource
       will be reserved.  Note that the time slot here is unrelated to
       the cycle in SDF.

   2.  Edge nodes still maintain per-flow resource reservation states,
       while a core node calculates and maintains the sum of
       reserved_resources (or the remaining resources) for each time
       slot.  That is, a core node just needs to maintain one variable
       per time slot.  A core node can maintain the resource
       reservation states of K time slots, i.e., the aggregated
       resource reservation window of a core node is K.

   3.  A new resource reservation request succeeds only if there are
       sufficient resources along the path, i.e., every related core
       node's remaining resource is no less than the amount of newly
       requested resource.  Otherwise, the resource reservation request
       fails.  Resources are reserved in units of time slots, and for
       at most K time slots.  If a flow wants to keep reserving
       resources after the original resource reservation request
       expires, the edge node/host can send a renewal request.  Similar
       to a new resource reservation request, a renewal request also
       needs to carry the flow identifier (the same identifier as
       carried by the original resource reservation request), the
       amount of reserved resource (no more than in the previous
       request), and the number of time slots for which the resource
       will be reserved.  The edge node/host can also actively tear
       down the resource reservation along the path.

   4.
  After receiving the per-flow resource reservation message, core
       nodes refresh their aggregated resource reservation windows
       accordingly.  As item 2 specifies, a core node may record either
       the sum of reserved_resource or the remaining resource
       (remaining resource = capacity - sum of reserved_resource).  If
       the sum of reserved resources is recorded, the core node adds
       the newly requested resource to the maintained value in each
       related time slot.  Otherwise, if the remaining resource is
       recorded, the core node subtracts the newly requested resource
       from the maintained value in each related time slot.

5.  Performance Analysis

5.1.  Queueing Delay

   We consider forwarding from an LDN node A via an LDN node B to an
   LDN node C, and call the single-hop LDN delay the time between a
   packet being sent by A and the time it is re-sent by B.  This
   single-hop delay is composed of the A->B propagation delay and the
   single-hop queueing delay A->B.

          |cycle x |
   Node A +-------\+
                   \
                    \
                     \
          |\  cycle y |cycle y+1 |
   Node B +-V---------+---------\+
          :                      \
          :   Queueing Delay     :\
          :...= 2*T .............: V

               Figure 6: Single-Hop Queueing Delay

   As Figure 6 shows, cycle x of Node A will be mapped into cycle y+1
   of Node B as long as the last packet sent from A to B is received
   within cycle y.  If the last packet is re-sent by B at the end of
   cycle y+1, then the largest single-hop queueing delay is 2*T.
   Therefore, the upper bound of the end-to-end queueing delay is
   2*T*H, where H is the number of hops.

   If A did not forward the LDN packet from a prior LDN forwarder but
   is the actual traffic source, then the packet may have been delayed
   by a gate function before it was sent to B.  The delay of this
   function is outside the scope of the LDN delay considerations.
   If B is not forwarding the LDN packet but is the final receiver,
   then the packet may not need to be queued and released to the
   receiver in the same fashion as it would be queued/released to a
   downstream LDN node.  Therefore, if a path has one source followed
   by N LDN forwarders followed by one receiver, it should be
   considered a path with N-1 LDN hops for the purpose of latency and
   jitter calculations.

5.2.  Jitter

   Consider first the simplest scenario, one-hop forwarding: suppose
   Node A is the upstream node of Node B, and the packet sent from Node
   A at cycle x is received by Node B at cycle y, as Figure 7 shows.

   -  The best case is that Node A sends the packet at the end of cycle
      x and Node B receives it at the beginning of cycle y; the delay
      is then denoted by w.

   -  The worst case is that Node A sends the packet at the beginning
      of cycle x and Node B receives it at the end of cycle y; the
      delay is then w + length of cycle x + length of cycle y = w+2*T.

   -  Hence the jitter's upper bound in this simplest scenario is worst
      case - best case = 2*T.

          |cycle x |                     |cycle x |
   Node A +-------\+              Node A +\-------+
           :\                             \       :
           : \                    -------------\  :
           :  \                   :             \ :
           :w |\        |         :w |           \|
   Node B  : +-V--------+  Node B  : +-----------V+
              cycle y                  cycle y

       (a) best situation          (b) worst situation

        Figure 7: Jitter Analysis for One-Hop Forwarding

   Next, consider two-hop forwarding, as Figure 8 shows.

   -  The best case is that Node A sends the packet at the end of cycle
      x and Node C receives it at the beginning of cycle z; the delay
      is then denoted by w'.

   -  The worst case is that Node A sends the packet at the beginning
      of cycle x and Node C receives it at the end of cycle z; the
      delay is then w' + length of cycle x + length of cycle z =
      w'+2*T.

   -  Hence the jitter's upper bound is worst case - best case = 2*T.
          |cycle x |
   Node A +-------\+
                   \
           :\       | cycle y  |
   Node B  : \------+----------+
           :  \
           :   \--------\
           :             \ |
   Node C  ......w'......+V--------+
                           cycle z

            (a) best situation

          |cycle x |
   Node A +\-------+
            \      :
             \     :  | cycle y  |
   Node B     \    :  +----------+
               \   :
             ---:--------------------\
                :  |                  \ |
   Node C       :......w'.....+--------V+
                                 cycle z

            (b) worst situation

        Figure 8: Jitter Analysis for Two-Hop Forwarding

   And so on: for multi-hop forwarding, the end-to-end delay increases
   as the number of hops increases, while the delay variation (jitter)
   still does not exceed 2*T.

6.  IANA Considerations

   This document makes no request of IANA.

7.  Security Considerations

   Security issues are considered in [draft-ietf-detnet-security].
   More discussion is TBD.

8.  Acknowledgements

   TBD.

9.  Normative References

   [draft-ietf-detnet-architecture]
              "DetNet Architecture", draft-ietf-detnet-architecture
              (work in progress).

   [draft-ietf-detnet-dp-sol]
              "DetNet Data Plane Encapsulation",
              draft-ietf-detnet-dp-sol (work in progress).

   [draft-ietf-detnet-problem-statement]
              "DetNet Problem Statement",
              draft-ietf-detnet-problem-statement (work in progress).

   [draft-ietf-detnet-security]
              "DetNet Security Considerations",
              draft-ietf-detnet-security (work in progress).

   [draft-ietf-detnet-use-cases]
              "DetNet Use Cases", draft-ietf-detnet-use-cases (work in
              progress).

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [scheduled-queues]
              "Scheduled queues, UBS, CQF, and Input Gates".

Authors' Addresses

   Li Qiang (editor)
   Huawei
   Beijing
   China

   Email: qiangli3@huawei.com

   Bingyang Liu
   Huawei
   Beijing
   China

   Email: liubingyang@huawei.com

   Toerless Eckert (editor)
   Huawei USA - Futurewei Technologies Inc.
   2330 Central Expy
   Santa Clara  95050
   USA

   Email: tte+ietf@cs.fau.de

   Liang Geng
   China Mobile
   Beijing
   China

   Email: gengliang@chinamobile.com

   Lei Wang
   China Mobile
   Beijing
   China

   Email: wangleiyjy@chinamobile.com