<?xml version="1.0" encoding="US-ASCII"?>

<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY RFC2119 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml">
<!ENTITY RFC1195 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.1195.xml">
<!ENTITY RFC2212 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2212.xml">
<!ENTITY RFC2629 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2629.xml">
<!ENTITY RFC6658 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6658.xml">
<!ENTITY RFC7806 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.7806.xml">
<!ENTITY I-D.ietf-detnet-use-cases SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-detnet-use-cases.xml">
<!ENTITY I-D.ietf-detnet-architecture SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.draft-ietf-detnet-architecture-08.xml">
<!ENTITY I-D.ietf-detnet-dp-sol-ip SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.draft-ietf-detnet-dp-sol-ip-00.xml">
<!ENTITY I-D.ietf-detnet-dp-sol-mpls SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.draft-ietf-detnet-dp-sol-mpls-00.xml">
]>

<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<?rfc strict="yes" ?>
<?rfc toc="yes"?>
<?rfc tocdepth="4"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes" ?>
<?rfc compact="yes" ?>
<?rfc subcompact="no" ?>

<rfc category="std" docName="draft-finn-detnet-bounded-latency-02" ipr="trust200902">

<!-- ***** FRONT MATTER ***** -->

<front>

<title abbrev="DetNet Bounded Latency">DetNet Bounded Latency</title>

<author initials="N" surname="Finn" fullname="Norman Finn">
    <organization>
        Huawei Technologies Co. Ltd
    </organization>
    <address>
        <postal>
            <street>3101 Rio Way</street>
            <city>Spring Valley</city>
            <region>California</region>
            <code>91977</code>
            <country>US</country>
        </postal>
        <phone>+1 925 980 6430</phone>
        <email>norman.finn@mail01.huawei.com</email>
    </address>
</author>

<author initials="J-Y" surname="Le Boudec" fullname="Jean-Yves Le Boudec">
    <organization>
        EPFL
    </organization>
    <address>
        <postal>
            <street>IC Station 14</street>
            <city>Lausanne EPFL</city>
            <code>1015</code>
            <country>Switzerland</country>
        </postal>
        <email>jean-yves.leboudec@epfl.ch</email>
    </address>
</author>

<author initials="E" surname="Mohammadpour" fullname="Ehsan Mohammadpour">
    <organization>
        EPFL
    </organization>
    <address>
        <postal>
            <street>IC Station 14</street>
            <city>Lausanne EPFL</city>
            <code>1015</code>
            <country>Switzerland</country>
        </postal>
        <email>ehsan.mohammadpour@epfl.ch</email>
    </address>
</author>

<author initials="J" surname="Zhang" fullname="Jiayi Zhang">
    <organization>
        Huawei Technologies Co. Ltd
    </organization>
    <address>
        <postal>
            <street>Q22, No.156 Beiqing Road</street>
            <city>Beijing</city>
            <code>100095</code>
            <country>China</country>
        </postal>
        <email>zhangjiayi11@huawei.com</email>
    </address>
</author>

<author fullname="Bal&aacute;zs Varga" initials="B." surname="Varga">
   <organization>Ericsson</organization>
   <address>
      <postal>
         <street>Konyves K&aacute;lm&aacute;n krt. 11/B</street>
         <city>Budapest</city>
         <country>Hungary</country>
         <code>1097</code>
      </postal>
      <email>balazs.a.varga@ericsson.com</email>
   </address>
</author>

<author fullname="J&aacute;nos Farkas" initials="J." surname="Farkas">
   <organization>Ericsson</organization>
   <address>
      <postal>
         <street>Konyves K&aacute;lm&aacute;n krt. 11/B</street>
         <city>Budapest</city>
         <country>Hungary</country>
         <code>1097</code>
      </postal>
      <email>janos.farkas@ericsson.com</email>
   </address>
</author>

<date month="October" day="22" year="2018" />

<area>Routing</area>

<workgroup>DetNet</workgroup>

<keyword>DetNet, bounded latency, zero congestion loss</keyword>

<abstract>
    <t>This document presents a parameterized timing model for Deterministic Networking
        (DetNet), so that existing and future standards can achieve the DetNet quality of
        service features of bounded latency and zero congestion loss.  It defines requirements
        for resource reservation protocols or servers.  It calls out queuing mechanisms,
        defined in other documents, that can provide the DetNet quality of service.
    </t>
</abstract>
</front>


<!-- ***** MIDDLE MATTER ***** -->

<middle>
    
<section title="Introduction">
	
    <t>The ability of IETF Deterministic Networking (DetNet) or IEEE 802.1 Time-Sensitive
        Networking (TSN, <xref target="IEEE8021TSN"/>) to provide the DetNet services of bounded latency and zero congestion
        loss depends upon A) configuring and allocating network resources for the exclusive
        use of DetNet/TSN flows; B) identifying, in the data plane, the
        resources to be utilized by any given packet; and C) specifying the detailed behavior
        of those resources, especially transmission queue selection, so that
        latency bounds can be reliably assured.  Thus, DetNet is an example of an
        IntServ Guaranteed Quality of Service <xref target="RFC2212"/>.</t>
    
    <t>As explained in <xref target="I-D.ietf-detnet-architecture"/>, DetNet
        flows are characterized by 1) a maximum bandwidth, guaranteed either by the
        transmitter or by strict input metering; and 2) a requirement for a
        guaranteed worst-case end-to-end latency.  That latency guarantee,
        in turn, provides the opportunity for the network to supply enough buffer
        space to guarantee zero congestion
        loss.  To be of use to the applications identified in
        <xref target="I-D.ietf-detnet-use-cases"/>, it must be possible to calculate,
        before the transmission of a DetNet flow commences, both the worst-case
        end-to-end network latency, and the amount of buffer space required at each hop to
        ensure against congestion loss.
    </t><t>
        This document references specific queuing mechanisms, defined in other documents, that can be
        used to control packet transmission at each output port and achieve the DetNet
        qualities of service.
        This document presents a timing model for sources, destinations, and the
        network nodes that relay packets that is applicable to all of those referenced
        queuing mechanisms.  The parameters specified in this model:
    </t><t>
        <list style="symbols">
            <t>
                Characterize a DetNet flow in a way that provides externally measurable
                verification that the sender is conforming to its promised maximum,
                can be implemented reasonably easily by a sending device, and does not
                require excessive over-allocation of resources by the network.
            </t><t>
                Enable reasonably accurate computation of worst-case end-to-end latency,
                in a way that requires as little detailed knowledge as possible of the
                behavior of the Quality of Service (QoS) algorithms implemented in each
                device, including queuing, shaping, metering, policing, and transmission
                selection techniques.
            </t>
        </list>
    </t><t>
        Using the model presented in this document, it should be possible for an
        implementor, user, or standards development organization to select
        a particular set of queuing mechanisms for each device in a DetNet network,
        and to select a resource reservation algorithm for that network, so that
        those elements can work together to provide the DetNet service.
    </t><t>
        This document does not specify any resource reservation protocol or server.
        It does not describe all of the requirements for that protocol or server.
        It does describe requirements for such resource reservation methods,
        and for queuing mechanisms that, if met, will enable them to work together.
    </t>
    <t>
        NOTE:  This draft is not yet complete, but it is sufficiently so to share
        with the Working Group and to obtain opinions and direction.  The present
        intent is for this draft to become a normative RFC, defining
        how one SHALL/SHOULD provide the DetNet quality of service.  There are
        still a few authors' notes to each other present in this draft.
    </t>
</section>

<section title="Conventions Used in This Document">

   <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in <xref target="RFC2119"/>.</t>

   <t>The lowercase forms with an initial capital "Must", "Must Not", "Shall", "Shall Not", "Should", "Should Not", "May", and "Optional" in this document are to be interpreted in the sense defined in <xref target="RFC2119"/>, but are used where the normative behavior is defined in documents published by SDOs other than the IETF.</t>

</section>

<section title="Terminology and Definitions">
    <t>
	This document uses the terms defined in <xref target="I-D.ietf-detnet-architecture"/>.</t>
	
</section>

<section title="DetNet bounded latency model">
    <section title="Flow creation">
        <t>
            The bounded latency model assumes the use of the following
            paradigm for provisioning a particular DetNet flow:
        </t><t>
            <list style="numbers">
                <t>
                    Perform any configuration required by the relay systems in the network for the
                    classes of service to be offered, including one or more classes of
                    DetNet service.  This configuration is not tied to
                    any particular flow.
                </t><t>
                    Characterize the DetNet flow in terms of limitations on the sender
                    [<xref target="sender_parms"/>] and flow requirements [<xref target="relay_parms"/>].
                </t><t>
                    Establish the path that the DetNet flow will take through the network
                    from the source to the destination(s).  This can be a point-to-point
                    or a point-to-multipoint path.
                </t><t>
                    Select one of the DetNet classes of service for the DetNet flow.
                </t><t>
                    Compute the worst-case end-to-end latency for the DetNet flow.  In the process,
                    determine whether sufficient resources are available for that flow to
                    guarantee the required latency and to provide zero congestion loss.
                </t><t>
                    Assuming that the resources are available, commit those resources to the
                    flow.  This may or may not require adjusting the parameters that control
                    the queuing mechanisms at each hop along the flow's path.
                </t>
            </list>
        </t><t>
            This paradigm can be static and/or dynamic, and can be implemented using peer-to-peer
            protocols or using a central server model.  In some situations, backtracking and
            recursing through this list may be necessary.
        </t><t>
            Issues such as un-provisioning a
            DetNet flow in favor of another when resources are scarce are not considered.
            How the path to be taken by a DetNet flow is chosen is not considered in this document.
        </t>
    </section>
    <section anchor="relay_model" title="Relay system model">
            <t>In <xref target="fig_timing_model"/> we see a breakdown of the per-hop latency experienced by a packet passing through a relay system, in
                terms that are suitable for computing both hop-by-hop latency and per-hop buffer requirements.</t>
            <figure title="Timing model for DetNet or TSN" anchor="fig_timing_model">
                <artwork align="center"><![CDATA[
      DetNet relay node A            DetNet relay node B
   +-------------------------+       +------------------------+
   |              Queuing    |       |              Queuing   |
   |   Regulator subsystem   |       |   Regulator subsystem  |
   |   +-+-+-+-+ +-+-+-+-+   |       |   +-+-+-+-+ +-+-+-+-+  |
-->+   | | | | | | | | | +   +------>+   | | | | | | | | | +  +--->
   |   +-+-+-+-+ +-+-+-+-+   |       |   +-+-+-+-+ +-+-+-+-+  |
   |                         |       |                        |
   +-------------------------+       +------------------------+
   |<->|<------>|<------->|<->|<---->|<->|<------>|<------>|<->|<--
2,3  4      5        6      1    2,3   4      5        6     1   2,3
                1: Output delay       4: Processing delay
                2: Link delay         5: Regulation delay
                3: Preemption delay   6: Queuing delay.
                ]]></artwork>
            </figure>
            <t>In <xref target="fig_timing_model"/>, we see two DetNet relay nodes (typically, bridges
                or routers), with a wired link between them.  In this model, the only queues we deal
                with explicitly are attached to the output port; other queues are modeled as variations
                in the other delay times.  (E.g., an input queue could be modeled as either a variation
                in the link delay [2] or the processing delay [4].)  There are six delays that a packet
                can experience from hop to hop.</t>
            <t><list style="hanging">
                <t hangText="1. Output delay"><vspace blankLines="0"/>
                    The time taken from the selection of a packet for output from a queue to the
                    transmission of the first bit of the packet on the physical link.  If the
                    queue is directly attached to the physical port, output delay can be a constant.
                    But, in many implementations, the queuing mechanism in a forwarding ASIC is
                    separated from a multi-port MAC/PHY, in a second ASIC, by a multiplexed connection.
                    This causes variations in the output delay that are hard for the forwarding node
                    to predict or control.
                </t>
                <t hangText="2. Link delay"><vspace blankLines="0"/>
                    The time taken from the transmission of the first bit of the packet to the
                    reception of the last bit, assuming that the transmission is not suspended by
                    a preemption event.  This delay has two components, the
                    first-bit-out to first-bit-in delay and the first-bit-in to last-bit-in delay
                    that varies with packet size.  The former is typically measured by the Precision Time
                    Protocol and is constant (see <xref target="I-D.ietf-detnet-architecture"/>).  However,
                    a virtual "link" could exhibit a variable link delay.</t>
                <t hangText="3. Preemption delay"><vspace blankLines="0"/>
                    If the packet is interrupted (e.g. <xref target="IEEE8023br"/> and <xref target="IEEE8021Qbu"/> preemption) in
                    order to transmit another packet or packets, an arbitrary delay can result.</t>
                <t hangText="4. Processing delay"><vspace blankLines="0"/>
                    This delay covers the time from the reception of the last bit of the packet to the
                    time the packet is enqueued in the regulator (or in the queuing subsystem, if there
                    is no regulator).  This delay can be variable, and depends on the details of the
                    operation of the forwarding node.</t>
	            <t hangText="5. Regulator delay"><vspace blankLines="0"/>
	                This is the time spent from the insertion of the last bit of a packet into a
	                regulation queue until the time the packet is declared eligible according to its
	                regulation constraints.  We assume that this time can be calculated based on the
	                details of the regulation policy.  If there is no regulation, this time is zero.</t>
                <t hangText="6. Queuing subsystem delay"><vspace blankLines="0"/>
                    This is the time a packet spends from being declared eligible until it is
                    selected for output on the next link.  We assume that this time is
                    calculable based on the details of the queuing mechanism.  If there is no
                    regulation, this time runs from the insertion of the packet into a queue until
                    it is selected for output on the next link.</t>
            </list></t>
            <t>Not shown in <xref target="fig_timing_model"/> are the other output queues that we
                presume are also attached to that same output port as the queue shown, and against
                which this shown queue competes for transmission opportunities.</t>
            <t>The initial and final measurement point in this analysis (that is, the definition
                of a "hop") is the point at which a packet is selected for output.  In general,
                any queue selection method that is suitable for use in a DetNet network includes
                a detailed specification as to exactly when packets are selected for transmission.
                Any variations in any of the delay times 1-4 result in a need for additional
                buffers in the queue.  If all delays 1-4 are constant, then any variation in the
                time at which packets are inserted into a queue depends entirely on the timing
                of packet selection in the previous node.  If the delays 1-4 are not constant,
                then additional buffers are required in the queue to absorb these variations.
                Thus:
                <list style="symbols">
                    <t>Variations in output delay (1) require buffers to absorb that variation
                        in the next hop, so the output delay variations of the previous hop (on each
                        input port) must be known in order to calculate the buffer space required
                        on this hop.</t>
                    <t>Variations in processing delay (4) require additional output buffers
                        in the queues of that same DetNet relay node.  Depending on the details
                        of the queuing subsystem delay (6) calculations, these variations need not be
                        visible outside the DetNet relay node.
                    </t>
                </list></t>
        </section>
</section>
<section anchor="e2eLatency" title="Computing End-to-end Latency Bounds">
	<section title="Non-queuing delay bound" anchor="nonqueuing">
    <t>End-to-end latency bounds can be computed using the delay model in <xref target="relay_model"/>. Here it is important
        to be aware that for several queuing mechanisms, the worst-case end-to-end delay is less than the sum of the
        per-hop worst-case delays.
        An end-to-end latency bound for one DetNet flow
        can be computed as
    </t>
    <t>
        <list style="hanging">
            <t> end_to_end_latency_bound = non_queuing_latency + queuing_latency
            </t>
        </list>
    </t>
    <t>The two terms in the above formula are computed as follows. First,
        at the h-th hop along the path of this DetNet flow, obtain an upper
        bound
        per-hop_non_queuing_latency[h] on the sum of delays
        1,2,3,4
        of  <xref target="fig_timing_model"/>. These upper-bounds are expected to
        depend on the specific technology of the node at the h-th hop but not on
        the T-SPEC of this DetNet flow. Then set non_queuing_latency = the sum
        of per-hop_non_queuing_latency[h] over all hops h.
    </t>
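    <t>
        For illustration only, the computation of non_queuing_latency can be
        sketched in Python; the function name and the example per-hop bounds
        are hypothetical, not taken from any standard:
    </t>
    <figure>
        <artwork align="left"><![CDATA[
```python
# Illustrative sketch: non_queuing_latency is the sum, over all hops h, of
# an upper bound on delays 1-4 (output, link, preemption, processing) at
# that hop.  All names and numeric values here are hypothetical.

def non_queuing_latency(per_hop_non_queuing_latency):
    """Each entry is an upper bound, in seconds, on the sum of delays
    1,2,3,4 at one hop; it depends on the technology of the node at that
    hop, not on the T-SPEC of the DetNet flow."""
    return sum(per_hop_non_queuing_latency)

# Example: a 3-hop path with technology-dependent per-hop bounds, in seconds
bound = non_queuing_latency([3e-6, 5e-6, 4e-6])
```
]]></artwork>
    </figure>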
	</section>
	<section title="Queuing delay bound" anchor="queuing">
    <t>Second, compute queuing_latency as an upper bound to the sum of the
        queuing delays along the path. The value of queuing_latency depends
        on the T-SPEC of this flow and possibly
        of other flows in the network, as well as the specifics of the queuing
        mechanisms deployed along the path of this flow. </t>
    <t>
        For several queuing mechanisms, 
        queuing_latency is less than the
        sum of upper bounds on the queuing delays (5,6)
        at every
        hop.  This occurs with (1) per-flow queuing, and (2) per-class queuing with regulators, as explained in <xref target="perflow"/>, <xref target="perclass"/>, and <xref target="queue_model"/>.
        </t>
    
    <t>For other queuing mechanisms the only available value of queuing_latency
        is the sum of the per-hop queuing delay bounds.
        In such cases, the computation of per-hop queuing delay bounds must account for the fact that the T-SPEC of a DetNet flow is no longer satisfied at
        the ingress of a hop, since burstiness increases as the flow traverses successive DetNet nodes.
    </t>
		<section title="Per-flow queuing mechanisms" anchor="perflow">
			<t>
				With such mechanisms, each flow uses a separate queue inside every node. The service for each queue is abstracted with a guaranteed rate and a delay. For every flow, the per-node delay bound as well as the end-to-end delay bound can be computed from the traffic specification of this flow at its source and from the values of rates and latencies at all nodes along its path. Details of the calculation for IntServ are described in <xref target="intserv"/>.
			</t>
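			<t>
				As a non-normative illustration of such a computation, consider the standard
				network-calculus result: a flow constrained at its source by a token bucket
				with burst b and rate r, crossing per-flow queues each abstracted as a
				rate-latency server with rate R[h] and latency T[h], has the end-to-end delay
				bound b/min(R) + sum(T), provided r does not exceed min(R).  The Python names
				and values below are illustrative:
			</t>
			<figure>
				<artwork align="left"><![CDATA[
```python
# Network-calculus sketch (illustrative, not normative): delay bound for a
# token-bucket-constrained flow through per-flow rate-latency servers.

def per_flow_delay_bound(b, r, rates, latencies):
    """b: burst (data units); r: sustained rate (data units/s);
    rates[h], latencies[h]: guaranteed rate and latency at hop h."""
    r_min = min(rates)
    if r > r_min:
        raise ValueError("flow rate exceeds the smallest guaranteed rate")
    return b / r_min + sum(latencies)

# Example: burst 1500 bytes, rate 1e6 bytes/s, two hops
d = per_flow_delay_bound(1500, 1e6, [2e6, 3e6], [1e-4, 2e-4])
```
]]></artwork>
			</figure>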
		</section>

		<section title="Per-class queuing mechanisms" anchor="perclass">
		    <t>
				With such mechanisms, the flows that have the same class share the same queue. A practical example is the queuing mechanism in Time Sensitive Networking. One key issue in this context is how to deal with the burstiness cascade: individual flows that share a resource dedicated to a class may see their burstiness increase, which may in turn increase the burstiness of other flows downstream of this resource. Computing latency upper bounds for such cases is difficult, and in some conditions impossible <xref target="charny2000delay"/><xref target="bennett2002delay"/>. Moreover, when bounds are obtained, they depend on the complete configuration, and must be recomputed when a flow is added.
			</t>
			<t>
				A solution to deal with this issue is to reshape the flows at every hop. This can be done with per-flow regulators (e.g. leaky bucket shapers), but this requires per-flow queuing and defeats the purpose of per-class queuing. An alternative is the interleaved regulator, which reshapes individual flows without per-flow queuing (<xref target="Specht2016UBS"/>, <xref target="IEEE8021Qcr"/>).  With an interleaved regulator, the packet at the head of the queue is regulated based on
its (flow) regulation constraints; it is released at the earliest time at which this is possible without violating the constraints. One key feature of per-flow and interleaved regulators is that they do not increase worst-case latency bounds <xref target="le_boudec_theory_2018"/>. Specifically, when an interleaved regulator is appended to a FIFO subsystem, it does not increase the worst-case delay of the latter.
			</t>
			<t>
				<xref target="fig_detnet_e2e_example"/> shows an example of a network with 5 nodes, per-class queuing mechanism and interleaved regulators as in <xref target="fig_timing_model"/>. 
				An end-to-end delay bound for flow f, traversing nodes 1 to 5, is calculated as follows:
			</t>
			<t>
		    	<list style="hanging">
		        	<t> end_to_end_latency_bound_of_flow_f = C12 + C23 + C34 + S4
		        	</t>
		    	</list>
			</t>
			<t>
				In the above formula, Cij is a bound on the aggregate response time of the queuing subsystem in node i and the interleaved regulator of node j,
				and S4 is a bound on the response time of the queuing subsystem in node 4 for flow f. In fact, using the delay definitions in
				<xref target="relay_model"/>, Cij is a bound on the sum of the delays 1,2,3,6 of node i and 4,5 of node j. Similarly, S4 is a bound on the
				sum of the delays 1,2,3,6 of node 4. A practical example of a queuing model and delay calculation is presented in <xref target="TSNwithATSmodel"/>.
			</t>
<figure title="End-to-end latency computation example" anchor="fig_detnet_e2e_example">
<artwork align="center"><![CDATA[
                    f
          ----------------------------->
        +---+   +---+   +---+   +---+   +---+
        | 1 |---| 2 |---| 3 |---| 4 |---| 5 |
        +---+   +---+   +---+   +---+   +---+
          \__C12_/\__C23_/\__C34_/\_S4_/


]]></artwork>
		    </figure>
			<t>
				REMARK: The end-to-end delay bound calculation provided here gives a much tighter upper bound than the one obtained by
				adding the delay bounds of each node in the path of a flow <xref target="TSNwithATS"/>.
			</t>
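			<t>
				The example computation above can be sketched as follows (illustrative
				Python; the numeric values are arbitrary placeholders):
			</t>
			<figure>
				<artwork align="left"><![CDATA[
```python
# Sketch of the example above: c_bounds[k] are the Cij bounds for the
# consecutive (queuing subsystem of node i, interleaved regulator of
# node j) pairs along the path, and s_last is the bound on the queuing
# subsystem of the last relay node (S4).  Values are hypothetical, in seconds.

def e2e_bound_with_interleaved_regulators(c_bounds, s_last):
    return sum(c_bounds) + s_last

# For flow f over nodes 1..5: C12, C23, C34, then S4
bound_f = e2e_bound_with_interleaved_regulators([2e-4, 3e-4, 2.5e-4], 1e-4)
```
]]></artwork>
			</figure>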
		</section>
	</section>
 

    
</section>
<section anchor="achieving" title="Achieving zero congestion loss">
    <t>
        When the input rate to an output queue exceeds the output rate for a sufficient
        length of time, the queue must overflow.  This is congestion loss, and this is
        what deterministic networking seeks to avoid.
    </t>
    <section title ="A General Formula" anchor="generalBacklog">
    
    <t>
        To avoid congestion losses, an upper bound on the backlog present in the regulator and queuing subsystem of <xref target="fig_timing_model"/>
    must be computed during resource reservation. This bound depends on the set of flows that use these queues,
    the details of the specific queuing mechanism and an 
    upper bound on the processing delay (4). The queue must contain the packet in transmission plus all other packets that
    are waiting to be selected for output.
    </t>
    <t>
    A conservative backlog bound, which applies to all systems, can be derived as follows.
    </t>
    
    <t>
    The backlog bound is counted in data units (bytes, or words of multiple bytes) that are relevant for buffer allocation.     
    For every class we need one buffer space for the packet in transmission, plus space for the packets that are waiting to be selected for output.
    Excluding transmission and preemption times, the packets are waiting in the queue since reception of the last bit, for a duration
    equal to the processing delay (4) plus the queuing delays (5,6). 
    </t>
    <t>Let 
    <list style="symbols">
    <t>nb_classes be the number of classes of traffic that may use this output port</t>
    <t> total_in_rate be the sum of the line rates of all input ports that send traffic of any class to this output port. The value of total_in_rate
    is in data units (e.g. bytes) per second. 
    </t>
    <t>nb_input_ports be the number of input ports that send traffic of any class to this output port</t>
    <t>max_packet_length be the maximum packet size for packets of any class that may be sent to this output port. This is counted in data units.
    </t>
    <t>max_delay45 be an upper bound, in seconds, on the sum of the processing delay (4) and the queuing delays (5,6) for a packet
    of any class at this output port.
    </t>
    
    </list>      
    </t>
    
    
    <t>Then a bound on the backlog of traffic of all classes 
    in the queue at this output port is</t>
    
       <t>
         <list style="hanging">
           <t> backlog_bound = ( nb_classes + nb_input_ports ) * max_packet_length + total_in_rate * max_delay45
           </t>
         </list>
       </t>
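    <t>
        The formula above can be sketched directly (illustrative Python; the
        example parameter values are hypothetical):
    </t>
    <figure>
        <artwork align="left"><![CDATA[
```python
# Conservative backlog bound from the text:
#   (nb_classes + nb_input_ports) * max_packet_length
#     + total_in_rate * max_delay45
# Packet lengths and rates are in data units (e.g. bytes); delays in seconds.

def backlog_bound(nb_classes, nb_input_ports, max_packet_length,
                  total_in_rate, max_delay45):
    return ((nb_classes + nb_input_ports) * max_packet_length
            + total_in_rate * max_delay45)

# Example: 2 classes, 4 input ports at 125e6 bytes/s each,
# 1522-byte maximum packets, 100 microsecond bound on delays 4, 5, 6
b = backlog_bound(2, 4, 1522, 4 * 125e6, 100e-6)
```
]]></artwork>
    </figure>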

    </section>

 </section>
 <section anchor="queue_model" title="Queuing model">
 
    <section anchor="data_model" title="Queuing data model">

	<t>Sophisticated queuing mechanisms are available in Layer 3 (L3, see, e.g., <xref target="RFC7806"/> for an overview).
        In general, we assume that "Layer 3" queues, shapers, meters, etc., are precisely the "regulators"
        shown in <xref target="fig_timing_model"/>. The "queuing subsystems" in this figure are not the province solely of bridges;
        they are an essential part of any DetNet relay node.  As illustrated by numerous implementation examples, some of the
        "Layer 3" mechanisms described in documents such as <xref target="RFC7806"/> are often integrated,
        in an implementation, with the "Layer 2" mechanisms also implemented in the same system.  An integrated model
        is needed in order to successfully predict the interactions among the different queuing mechanisms
        needed in a network carrying both DetNet flows and non-DetNet flows.
    </t>
    <t><xref target="fig_8021Q_data_model"/> shows the general model for the flow of packets through
    the queues of a DetNet relay node.  Packets are assigned to a class of service.  The classes of
    service are mapped to some number of regulator queues.  Only DetNet/TSN packets pass through
    regulators.  Queues compete for the selection of packets
    to be passed to queues in the queuing subsystem.  Packets are then selected for output from the
    queuing subsystem.
    </t>
    <figure title="IEEE 802.1Q Queuing Model: Data flow" anchor="fig_8021Q_data_model">
        <artwork align="center"><![CDATA[
                                 |
+--------------------------------V----------------------------------+
|                    Class of Service Assignment                    |
+--+------+----------+---------+-----------+-----+-------+-------+--+
   |      |          |         |           |     |       |       |
+--V-+ +--V-+     +--V--+   +--V--+     +--V--+  |       |       |
|Flow| |Flow|     |Flow |   |Flow |     |Flow |  |       |       |
|  0 | |  1 | ... |  i  |   | i+1 | ... |  n  |  |       |       |
| reg| | reg|     | reg |   | reg |     | reg |  |       |       |
+--+-+ +--+-+     +--+--+   +--+--+     +--+--+  |       |       |
   |      |          |         |           |     |       |       |
+--V------V----------V--+   +--V-----------V--+  |       |       |
|  Trans.  selection    |   | Trans. select.  |  |       |       |
+----------+------------+   +-----+-----------+  |       |       |
           |                      |              |       |       |
        +--V--+                +--V--+        +--V--+ +--V--+ +--V--+
        | out |                | out |        | out | | out | | out |
        |queue|                |queue|        |queue| |queue| |queue|
        |  1  |                |  2  |        |  3  | |  4  | |  5  |
        +--+--+                +--+--+        +--+--+ +--+--+ +--+--+
           |                      |              |       |       |
+----------V----------------------V--------------V-------V-------V--+
|                      Transmission selection                       |
+----------+----------------------+--------------+-------+-------+--+
           |                      |              |       |       |
           V                      V              V       V       V
     DetNet/TSN queue       DetNet/TSN queue    non-DetNet/TSN queues
]]></artwork>
    </figure>
    <t>Some relevant mechanisms are hidden in this figure, and are performed in the
        queue boxes:
        <list style="symbols">
        <t>
            Discarding packets because a queue is full.
        </t><t>
            Discarding packets marked "yellow" by a metering function, in preference
            to discarding "green" packets.
        </t>
        </list>
    </t><t>
        Ideally, neither of these actions is performed on DetNet packets.  Full queues
        for DetNet packets should occur only when a flow is misbehaving, and the DetNet
        QoS does not include "yellow" service for packets in excess of committed rate.
    </t><t>
        The Class of Service Assignment function can be quite complex, even in a
        bridge <xref target="IEEE8021Q"/>, since the
        introduction of <xref target="IEEE802.1Qci"/>.  In addition to the Layer 2 priority
        expressed in the 802.1Q VLAN tag, a DetNet relay node can utilize any of the following
        information to assign a packet to a particular class of service (queue):
        <list style="symbols">
            <t>
                Input port.
            </t><t>
                Selector based on a rotating schedule that starts at regular, time-synchronized
                intervals and has nanosecond precision.
            </t><t>
                MAC addresses, VLAN ID, IP addresses, Layer 4 port numbers, DSCP.
                (<xref target="I-D.ietf-detnet-dp-sol-ip"/>, <xref target="I-D.ietf-detnet-dp-sol-mpls"/>)
                (Work items are expected to add MPC and other indicators.)
            </t><t>
                The Class of Service Assignment function can contain metering and policing
                functions.
            </t><t>
                MPLS and/or pseudowire (<xref target="RFC6658"/>) labels.
            </t>
        </list>
    </t><t>
        The "Transmission selection" function decides which queue is to transfer its
        oldest packet to the output port when a transmission opportunity arises.
    </t>
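    <t>
        The behavior of the "Transmission selection" boxes above can be sketched in a few
        lines.  The following is a non-normative illustration assuming a strict priority
        algorithm (one of several selection algorithms permitted by 802.1Q); the function
        name and queue contents are invented for the example:
    </t>
    <figure><artwork><![CDATA[
```python
from collections import deque

def select_next_packet(queues):
    """queues: list of FIFO deques, index 0 = highest priority.
    Returns (queue_index, packet), or None if all queues are empty."""
    for i, q in enumerate(queues):
        if q:                      # highest-priority non-empty queue wins
            return i, q.popleft()  # transfer its oldest packet
    return None

# Queue 0 empty, queue 1 holds DetNet packets, queue 2 best-effort.
queues = [deque(), deque(["detnet-1", "detnet-2"]), deque(["be-1"])]
print(select_next_packet(queues))  # (1, 'detnet-1')
```
]]></artwork></figure>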

    </section>
    <section anchor="preempt_intro" title="Preemption">
        <t>
    In IEEE Std 802.1Q, preemption is modeled as consisting of two MAC/PHY stacks, one for packets that
    can be interrupted, and one for packets that can interrupt the interruptible packets.
    The Class of Service (queue) determines which packets are which.
    Only one layer of preemption is supported.  DetNet flows pass through the interrupting
    MAC.  Only best-effort queues pass through the interruptible MAC, and can thus be preempted.
    </t>
    </section>

    <section anchor="time_schedule_intro" title="Time-scheduled queuing">
        <t>
        In <xref target="IEEE8021Qbv"/>, the notion of time-scheduled queue gates was
        introduced.  Below every output queue (the lower row of queues in
        <xref target="fig_8021Q_data_model"/>) is a gate that permits or denies the
        queue to present data for transmission selection.  The gates are controlled by a
        rotating schedule that can be locked to a clock that is synchronized with other
        relay nodes.  The DetNet class of service can be supplied by queuing mechanisms based
        on time, rather than the regulator model in <xref target="fig_8021Q_data_model"/>.
        These queuing mechanisms are discussed in <xref target="time_based_models"/>, below.
        </t>
    </section>
	<section anchor="TSNwithATSmodel" title="Time-Sensitive Networking with Asynchronous Traffic Shaping">
    	<t>
			Consider a network with a set of nodes (switches and hosts) along with a set of flows between hosts.
			Hosts are sources or destinations of flows. There are four types of flows, namely, control-data traffic (CDT),
			class A, class B, and best effort (BE) in decreasing order of priority. Flows of classes A and B are together
			referred to as AVB flows.  A subset of TSN functions is assumed, as described next.
    	</t>
		<t>
			It is also assumed that contention occurs only at the output port of a TSN node.  Each node output port performs per-class
			scheduling with eight classes: one for CDT, one for class A traffic, one for class B traffic, and five for BE traffic,
			denoted BE0-BE4 (following the TSN standards).  In addition, each node output port performs per-flow regulation for
			AVB flows using an interleaved regulator (IR), called the Asynchronous Traffic Shaper (ATS) in TSN.  Thus, at each output port of a node, there is one interleaved regulator per input
			port and per class.  The scheduling and regulation architecture at a node output port is shown in <xref target="fig_TSN_node"/>.
			The packets received at a node input port for a given class are enqueued in the respective interleaved regulator at the output port.
			Then, the packets from all the flows, including CDT and BE flows, are enqueued in a class-based FIFO system (CBFS) <xref target="TSNwithATS"/>.
		</t>
		<figure title="Architecture of a TSN node output port with interleaved regulators (IRs)" anchor="fig_TSN_node">
		<artwork><![CDATA[

      +--+   +--+ +--+   +--+
      |  |   |  | |  |   |  |
      |IR|   |IR| |IR|   |IR|
      |  |   |  | |  |   |  |
      +-++XXX++-+ +-++XXX++-+
        |     |     |     |
        |     |     |     |
+---+ +-v-XXX-v-+ +-v-XXX-v-+ +-----+ +-----+ +-----+ +-----+ +-----+
|   | |         | |         | |Class| |Class| |Class| |Class| |Class|
|CDT| | Class A | | Class B | | BE4 | | BE3 | | BE2 | | BE1 | | BE0 |
|   | |         | |         | |     | |     | |     | |     | |     |
+-+-+ +----+----+ +----+----+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+
  |        |           |         |       |       |       |       |
  |      +-v-+       +-v-+       |       |       |       |       |
  |      |CBS|       |CBS|       |       |       |       |       |
  |      +-+-+       +-+-+       |       |       |       |       |
  |        |           |         |       |       |       |       |
+-v--------v-----------v---------v-------V-------v-------v-------v--+
|                     Strict Priority selection                     |
+--------------------------------+----------------------------------+
                                 |
                                 V
        ]]></artwork>
		</figure>
		
		<t>
			The CBFS includes two CBS subsystems, one each for class A and class B.  The CBS serves a packet from a class according to the available credit
			for that class.  The credit for each class A or B increases based on the idle slope and decreases based on the send slope, both of which
			are parameters of the CBS.  The CDT and BE0-BE4 flows in the CBFS are served by separate FIFO subsystems.  Then, packets from all flows are
			served by a transmission selection subsystem that serves packets from each class based on its priority.  All subsystems are non-preemptive.
			Guarantees for AVB traffic can be provided only if CDT traffic is bounded; it is assumed that the CDT traffic has an affine arrival curve
			r t + b in each node, i.e. the number of bits entering a node within any time interval t is bounded by r t + b.
		</t>
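		<t>
			The credit evolution described above can be sketched as follows.  This is an
			illustrative fragment, not the full 802.1Q CBS state machine (which, for example,
			also resets positive credit when the queue empties); the function name, slopes,
			and time steps are example assumptions, with credit in bits and slopes in bits
			per second:
		</t>
		<figure><artwork><![CDATA[
```python
def cbs_credit(credit, dt, transmitting, idle_slope, send_slope):
    """Credit after dt seconds: grows at idle_slope while waiting or
    blocked, drains at send_slope (a negative value) while transmitting.
    The queue may start a transmission only when credit >= 0."""
    return credit + dt * (send_slope if transmitting else idle_slope)

# Example: class A at idleSlope 10 Mb/s on a 100 Mb/s link,
# so sendSlope = idleSlope - link rate = -90 Mb/s.
c = 0.0
c = cbs_credit(c, dt=1e-4, transmitting=False,
               idle_slope=10e6, send_slope=-90e6)  # gains ~1000 bits
c = cbs_credit(c, dt=1e-5, transmitting=True,
               idle_slope=10e6, send_slope=-90e6)  # drains ~900 bits
print(round(c, 6))  # 100.0
```
]]></artwork></figure>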
		<t>[[ EM: THE FOLLOWING PARAGRAPH SHOULD BE ALIGNED WITH <xref target="relay_parms"/>. ]]</t>
		<t>
			Additionally, it is assumed that flows are regulated at their source, according to either leaky bucket (LB) or length rate quotient (LRQ) regulation.  The LB-type regulation
			forces flow f to conform to an arrival curve r_f t + b_f.  The LRQ-type regulation with rate r_f ensures that the time separation between
			two consecutive packets of sizes l_n and l_n+1 is at least l_n/r_f.  Note that if flow f is LRQ-regulated, it satisfies the arrival curve
			constraint r_f t + L_f, where L_f is its maximum packet size (but the converse may not hold).  For an LRQ-regulated flow, b_f = L_f.
			At the source hosts, the traffic satisfies its regulation constraint, i.e. the delay due to the interleaved regulator at the hosts is ignored.
		</t>
		<t>
			At each switch implementing an interleaved regulator, packets of multiple flows are processed in one FIFO queue; the packet at the head 
			of the queue is regulated based on its regulation constraints; it is released at the earliest time at which this is possible without violating 
			the constraint. The regulation type and parameters for a flow are the same at its source and at all switches along its path.
		</t>
		<t>
			Details of the end-to-end delay bound calculation in such a system are given in <xref target="TSNwithATS"/>.
		</t>
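		<t>
			The head-of-queue eligibility computation of an interleaved regulator can be
			sketched as below.  This is an illustrative model rather than the normative
			P802.1Qcr procedure; the Flow class and function names are invented, with
			rates in bytes per second and lengths in bytes:
		</t>
		<figure><artwork><![CDATA[
```python
class Flow:
    """Per-flow regulation state kept by an interleaved regulator."""
    def __init__(self, r, b=None):
        self.r = r                # committed rate (bytes/s)
        self.b = b                # bucket depth (bytes), LB only
        self.last_release = 0.0   # time of previous release (LRQ)
        self.last_len = 0         # length of previous packet (LRQ)
        self.tokens = b           # current bucket level (LB)
        self.last_update = 0.0    # time of last bucket update (LB)

def lrq_eligible_time(f, now):
    # LRQ: gap between consecutive packets of flow f >= l_prev / r_f
    return max(now, f.last_release + f.last_len / f.r)

def lb_eligible_time(f, now, pkt_len):
    # LB: wait until the bucket (depth b, fill rate r) holds pkt_len tokens.
    # Pure computation; state updates on release are omitted for brevity.
    tokens = min(f.b, f.tokens + (now - f.last_update) * f.r)
    if tokens >= pkt_len:
        return now
    return now + (pkt_len - tokens) / f.r

f = Flow(r=1000.0)                    # LRQ flow at 1000 bytes/s
f.last_release, f.last_len = 2.0, 500
print(lrq_eligible_time(f, now=2.1))  # 2.5: must wait 500/1000 s
```
]]></artwork></figure>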
	</section>

    <section title="IntServ" anchor="intserv">
	        <t>
	            In this section, a method for calculating worst-case queuing latency is provided.  In a deterministic network, the traffic of a flow is constrained by an arrival curve, and the queuing mechanisms in a DetNet node can be characterized and constrained by a service curve.  Using arrival curves and service curves with Network Calculus theory <xref target="NetCalBook"/>, a tight worst-case queuing latency can be calculated.
	        </t>
	        <t>
	           Consider a DetNet flow at an output port, and let R(s) be the cumulative arrival data up to time s.  For any time interval t, the incremental arrival data is constrained by an arrival curve a(t)
	        </t>
	        <t>
	           <list style="hanging">
	            <t> R(s+t) - R(s) &lt;= a(t), for all s>=0, t>=0
	            </t>
	        </list> 
	        </t>
	        <t>
	            The scheduling that a relay node applies to a DetNet flow can be abstracted as a service curve, which describes the minimal service the node can offer.  A node offers the service curve b(t) if its cumulative input data R and output data R_out satisfy
	        </t>
	        <t>
	           <list style="hanging">
	            <t> R_out(t) >= inf_s( R(s) + b(t-s) ), 0 &lt;= s &lt;= t
	            </t>
	        </list> 
	        </t>
	        <t>
	            where the operator "inf_s" denotes the greatest lower bound over all s with 0 &lt;= s &lt;= t.
	        </t>
	        <t>
	            By calculating the maximum vertical deviation between arrival curve a(t) and service curve b(t), one can obtain the backlog bound in data unit
	        </t>
	        <t>
	           <list style="hanging">
	            <t> Backlog_bound = sup_t(a(t) - b(t) )
	            </t>
	        </list> 
	        </t>
	        <t>   
	            where the operator "sup_t" denotes the least upper bound with respect to t.  The buffer space at a node should be no less than the backlog bound to achieve zero congestion loss.
	        </t>
	        <t>
                NOTE: <xref target="generalBacklog"/> gives a general formula for computing the buffer requirements.
                This is an alternative calculation based on the arrival curve and service curve.
<!--	            [[Jiayi: Actually, this section 5.1 is to describe the delay bound. However, based on the necessary formula of curves which are used to calculate delay bound, it is directly to give not only the delay bound, but also backlog bound. So I put the backlog bound here. Also, we see the backlog bound should be given in Section 6. Section 6.1 gives a backlog bound in a different form. To my understanding, the formula in section 6.1 is general formula for backlog bound, while the bound above could be more precise for a perticular secheduling since it is derived from curves. I am looking forward for your opinion in how to arrange these contents.]] -->
	        </t>
	        <t>
	           By calculating the maximum horizontal deviation between arrival curve a(t) and service curve b(t), one can obtain the delay bound as below 
	       </t>
	       <t>
	           <list style="hanging">
	            <t> Delay_bound = sup_s( inf{ t >= 0 | a(s) &lt;= b(s+t) } )
	            </t>
	        </list> 
	        </t>
	        <t>
	            where the operator "inf" denotes the greatest lower bound with respect to t, and the operator "sup_s" denotes the least upper bound with respect to s.  <xref target="fig_curve"/> shows an example of an arrival curve, a service curve, the delay bound h, and the backlog bound v.
	        </t>
	         <figure title="Computation of backlog bound and delay bound.  Note that arrival and service curves need not be linear." anchor="fig_curve">
	                <artwork align="center"><![CDATA[
    + bit              .        *
    |                 .     *
    |                .  *
    |               *
    |           *  .
    |       *     .
    |   *   |    .        ..  Service curve
    *-----h-|---.         **  Arrival curve
    |       v  .           h  Delay_bound
    |       | .            v  Backlog_bound
    |       |.
    +-------.--------------------+ time
               ]]></artwork>
	            </figure>
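	        <t>
	            The two bounds can be checked numerically by discretizing the curves and
	            evaluating the sup/inf expressions directly.  The sketch below is illustrative;
	            the token-bucket and rate-latency curves, the step size, and the horizon are
	            example choices:
	        </t>
	        <figure><artwork><![CDATA[
```python
def backlog_bound(a, b, horizon, step=0.01):
    # sup_t( a(t) - b(t) ): maximum vertical deviation
    ts = [i * step for i in range(int(horizon / step) + 1)]
    return max(a(t) - b(t) for t in ts)

def delay_bound(a, b, horizon, step=0.01):
    # sup_s( inf{ t >= 0 | a(s) <= b(s+t) } ): max horizontal deviation
    worst = 0.0
    for i in range(int(horizon / step) + 1):
        s = i * step
        t = 0.0
        while b(s + t) < a(s):   # smallest t shifting a(s) under b
            t += step
        worst = max(worst, t)
    return worst

burst, r, R, T = 1000.0, 100.0, 200.0, 0.01
arrival = lambda t: burst + r * t           # token bucket: b + r t
service = lambda t: max(0.0, R * (t - T))   # rate-latency
print(backlog_bound(arrival, service, 1.0))  # ~ b + r T = 1001
print(delay_bound(arrival, service, 1.0))    # ~ T + b/R = 5.01
```
]]></artwork></figure>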
<!--        <t>
	            [[ I concern that this formula causes confusion.  But it is accurate. Maybe we can simply describe it in words, as 'maximum horizontal derivation between two curves'.  We may also substitute the operator 'sup' with 'max', and 'inf' with 'min', although it loses some accuracy.]]
	        </t>
 -->
	        <t>
	            Note that in the formula for Delay_bound, the service curve b(t) can describe either the per-hop scheduling that a DetNet node offers to a flow, or the concatenation of multiple nodes, representing the end-to-end scheduling that a DetNet path offers to a flow.  In the latter case, the obtained delay bound is the end-to-end worst-case delay.  To calculate it, we must first derive the concatenated service curve.
	        </t>
	        <t>
	            Consider a flow that traverses two DetNet nodes offering service curves b1(t) and b2(t) in sequence.  The concatenation of the two nodes then offers the service curve b_concatenated given below
	        </t>
	       <t>
	           <list style="hanging">
	            <t> b_concatenated(t) = inf_s( b1(s) + b2(t-s) ), 0 &lt;= s &lt;= t
	            </t>
	        </list> 
	        </t>
	        <t>
	            The concatenation of service curves generalizes directly to more than two nodes.
	        </t>
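	        <t>
	            The concatenation (a min-plus convolution) can likewise be evaluated
	            numerically.  For two rate-latency curves the result is again a rate-latency
	            curve, with rate min(R1, R2) and latency T1 + T2, as the illustrative sketch
	            below confirms (parameter values are examples):
	        </t>
	        <figure><artwork><![CDATA[
```python
def concatenate(b1, b2, t, step=0.001):
    # b_concatenated(t) = inf over 0 <= s <= t of b1(s) + b2(t-s)
    n = int(round(t / step))
    return min(b1(i * step) + b2(t - i * step) for i in range(n + 1))

def rate_latency(R, T):
    return lambda u: max(0.0, R * (u - T))

b1 = rate_latency(200.0, 0.01)   # node 1: R1 = 200, T1 = 0.01
b2 = rate_latency(150.0, 0.02)   # node 2: R2 = 150, T2 = 0.02

# min(R1, R2) * (t - (T1 + T2)) = 150 * (0.1 - 0.03) = 10.5
print(round(concatenate(b1, b2, 0.1), 9))  # 10.5
```
]]></artwork></figure>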
<!--        <t>
	            [[ If we need to describe a output bound, we can use this paragraph. ]] Assume a flow, constrained by arrival curve a(t), traverses a DetNet node that offers a service curve b(t).  The output flow is constrained by the arrival curve a_out
	        </t>
 -->
	       <t>
	            Assume a flow, constrained by an arrival curve a(t), traverses a DetNet node that offers a service curve b(t).  The output flow is then constrained by the arrival curve a_out
	       </t>
	       <t>
	           <list style="hanging">
	            <t> a_out(t) = sup_u( a(t+u) - b(u) ), u>=0
	            </t>
	        </list> 
	        </t>
	        <t>
	            In DetNet, the arrival curve and service curve can be characterized by a group of parameters, which are defined in <xref target="parameters"/>.
	        </t>
		
        <t>
            Integrated service (IntServ) is an architecture that specifies the elements needed to guarantee quality of service (QoS) on networks.  To receive guaranteed service, a flow must conform to a traffic specification (T-spec), and reservations are made along the path only if the routers are able to guarantee the required bandwidth and buffer space.
        </t>
        <t>
            Consider a traffic model that conforms to a token bucket regulator (r, b), with
        </t>
        <t>
        <list style="symbols">
            <t> 
                Token bucket depth (b).
            </t>
            <t>
                Token bucket rate (r).
            </t>
        </list>
        </t>
        <t>
            The traffic specification can be described as an arrival curve alpha(t)
        </t><t>
        <list style="hanging">
            <t> 
                alpha(t) = b + rt
            </t>
        </list>
        </t>
        <t>   
            This token bucket regulator requires that, during any time window of width t, the number of bits of the flow is limited by alpha(t) = b + rt.
        </t>
        <t>
            If resource reservation is applied on a path, the IntServ model of a router can be described by a rate-latency service curve beta(t).
        </t>
        <t>
        <list style="hanging">
            <t> 
                beta(t) = max(0, R(t-T))
            </t>
        </list>
        </t>
        <t>   
            It describes that bits might have to wait up to T before being served at a rate greater than or equal to R. 
        </t>
        <t>
            It should be noted that the guaranteed service rate R is a share of the link's bandwidth.  The choice of R is related to the specification of the flows that will transit this node.  For example, under a strict priority policy, for a flow with priority j, its share of bandwidth may be R = c - sum(r_i), i &lt; j, where c is the link bandwidth and r_i is the token bucket rate of a flow with priority higher than j.  The choice of T is also related to the specification of all the flows traversing this node.  For example, in a generalized processor sharing (GPS) node, T = L/R + L_max/c, where L is the maximum packet size for the flow and L_max is the maximum packet size in the node across all flows.  Other choices of R and T are possible, depending on the specific scheduling of the node and the flows traversing it.
        </t>
        <t>
            As mentioned previously in this section, the delay bound and backlog bound can be obtained by comparing the arrival curve and the service curve.  The backlog bound, or buffer bound, is the maximum vertical deviation between the curves alpha(t) and beta(t), which is x = b + rT.  The delay bound is the maximum horizontal deviation between the curves alpha(t) and beta(t), which is d = T + b/R.  A graphical illustration of the IntServ model is shown in <xref target="fig_curve"/>.
        </t>
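        <t>
            The closed forms above follow directly from the curve parameters and can be
            packaged in a trivial helper (the function name and parameter values are
            illustrative):
        </t>
        <figure><artwork><![CDATA[
```python
def intserv_bounds(b, r, R, T):
    """Backlog and delay bounds for alpha(t) = b + r t against
    beta(t) = max(0, R (t - T)), assuming r <= R for stability."""
    assert r <= R, "stability requires arrival rate <= service rate"
    backlog = b + r * T   # max vertical deviation, reached at t = T
    delay = T + b / R     # max horizontal deviation, reached at s = 0
    return backlog, delay

print(intserv_bounds(b=1000.0, r=100.0, R=200.0, T=0.01))  # (1001.0, 5.01)
```
]]></artwork></figure>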
        <t>
            The output bound, or next-hop arrival curve, is alpha_out(t) = b + rT + rt; compared with the original arrival curve, the burstiness of the flow is increased by rT.
        </t>
<!--    <t>
           [[ It would be more clear if we could draw the two curves.  They are both linear, so we can use slash to draw. ]] 
       </t>
 -->
       <t>
            We can calculate the end-to-end delay bound for a path of N nodes, where the i-th node offers the service curve beta_i(t),
        </t>
        <t>
        <list style="hanging">
            <t> 
                beta_i(t) = max(0, R_i(t-T_i)), i=1,...,N
            </t>
        </list>
        </t>
        <t>   
            By concatenating those IntServ nodes, as described earlier in this section, an end-to-end service curve can be computed as
        </t>
        <t>
        <list style="hanging">
            <t> 
                beta_e2e (t) = max(0, R_e2e(t-T_e2e) )
            </t>
        </list>
        </t>
        <t> 
            where
        </t>
        <t>
        <list style="hanging">
            <t> 
                R_e2e = min(R_1,..., R_N)
            </t>
            <t>
                T_e2e = T_1 + ... + T_N
            </t>
        </list>
        </t>
        <t>   
            Similarly, the delay bound, backlog bound, and output bound can be computed using the original
            arrival curve alpha(t) and the concatenated service curve beta_e2e(t).
        </t>
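        <t>
            The end-to-end composition can be sketched as below (the helper name and
            per-node parameters are illustrative assumptions):
        </t>
        <figure><artwork><![CDATA[
```python
def e2e_bounds(nodes, b, r):
    """nodes: list of (R_i, T_i) rate-latency parameters of each hop.
    Returns (R_e2e, T_e2e, end-to-end delay bound) for a (b, r)
    token-bucket flow."""
    R_e2e = min(R for R, _ in nodes)    # bottleneck rate
    T_e2e = sum(T for _, T in nodes)    # latencies add up
    assert r <= R_e2e, "stability requires r <= R_e2e"
    return R_e2e, T_e2e, T_e2e + b / R_e2e

nodes = [(200.0, 0.01), (150.0, 0.02), (300.0, 0.005)]
R_e2e, T_e2e, delay = e2e_bounds(nodes, b=1000.0, r=100.0)
print(R_e2e, round(T_e2e, 6), round(delay, 6))  # 150.0 0.035 6.701667
```
]]></artwork></figure>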
    </section>
 </section>
 <section anchor="time_based_models" title="Time-based DetNet QoS">
     <section anchor="cqf" title="Cyclic Queuing and Forwarding">
     <t>
         <xref target="IEEE802.1Qci"/> and <xref target="IEEE802.1Qch"/> describe Cyclic Queuing
         and Forwarding (CQF), which provides bounded latency and zero congestion loss using
         the time-scheduled gates of <xref target="IEEE8021Qbv"/>.  For each different DetNet
         class of service, a set of two or three buffers is provided at the output queue layer of
         <xref target="fig_8021Q_data_model"/>.  A cycle time is configured for each class, and
         all of the buffer sets in a class swap buffers simultaneously throughout the DetNet domain
         at that cycle rate.  The choice of two or three buffers depends on the link
         lengths and forwarding delay times; two buffers can be used if the delay from hop to
         hop is nearly an integral number of cycle times, and three are required if not.  Flows
         are assigned to a class of service only as long as the amount of data to be transmitted in
         one cycle does not exceed the cycle time on any interface.  Every packet dwells either two
         or three cycles at each hop, so the calculation of worst-case latency and latency variation
         is trivial.
     </t>
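     <t>
         Given the per-hop dwell of two or three cycles stated above, the worst-case
         latency calculation can be sketched as follows.  The function name, hop count,
         and cycle time are illustrative assumptions:
     </t>
     <figure><artwork><![CDATA[
```python
def cqf_worst_case_latency(hops, cycle_time, buffers_per_set=2):
    """Worst-case end-to-end latency (seconds): each hop holds a packet
    for as many cycles as there are buffers in the set (2 or 3),
    independent of the other traffic in the class."""
    assert buffers_per_set in (2, 3)
    return hops * buffers_per_set * cycle_time

# 5 hops with a 250 microsecond cycle:
print(cqf_worst_case_latency(hops=5, cycle_time=250e-6))     # 0.0025
print(cqf_worst_case_latency(5, 250e-6, buffers_per_set=3))  # 0.00375
```
]]></artwork></figure>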
     </section>
     <section title="Time Scheduled Queuing">
 <t>
 	<xref target="IEEE8021Qbv"/> specifies a time-aware queue-draining procedure for transmission selection at the egress port of a relay node, supporting up to eight traffic classes.  Each traffic class has a separate queue, and frame transmission from each queue is allowed or prevented by a time gate.  This time-gate-controlled scheduling allows time-sensitive traffic classes to transmit in dedicated time slots.  Within these time slots, the transmitting flows can be granted exclusive use of the transmission medium.  In essence, this time-aware scheduling is a Layer 2 time-division multiplexing (TDM) technique.
 </t>
 <t>
 	Consider the static configuration of a deterministic network.  To provide an end-to-end latency-guaranteed service, network nodes can support time-based behavior, which is determined by a gate control list (GCL).  The GCL defines the gate operation, in the open or closed state, with associated timing for each traffic class queue.  A time slice with gate state "open" is called a transmission window.  The time-based traffic scheduling must be coordinated among the relay nodes along the path from sender to receiver,
        to control the transmission of time-sensitive traffic.
 </t>
 <t>
 Ideally, all network devices are time-synchronized, and the static GCL configurations on all devices along the routed path are coordinated to ensure that the length of each transmission window fits the assigned frames and that no two time windows for DetNet traffic on the same port overlap.  (DetNet flows' windows can overlap with best-effort windows, so that unused DetNet bandwidth is available to best-effort traffic.)  The processing delay, link delay, and output delay are taken into account in the GCL computation.  The transmission window for a given flow may require that a time offset on consecutive hops be selected to reduce queuing delay as much as possible.

 	In this case, TSN/DetNet frames are transmitted in the assigned transmission window at every node along the routed path, with zero congestion loss and bounded end-to-end latency.
 	The worst-case end-to-end latency of a flow can then be derived from the GCL configuration.  For a TSN or DetNet frame, let gate_close_time_last_hop denote the time at which the transmission window on the last hop closes.  Assuming that the talker supports scheduled traffic behavior and starts transmission at gate_open_time_on_talker, the worst-case end-to-end delay of this flow is bounded by gate_close_time_last_hop - gate_open_time_on_talker + link_delay_last_hop. 
 </t>
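 <t>
 	The bound stated above can be evaluated directly.  The sketch below uses integer
 	nanoseconds from the synchronized network's shared epoch; the function name and
 	all values are illustrative:
 </t>
 <figure><artwork><![CDATA[
```python
def gcl_worst_case_delay(gate_open_time_on_talker,
                         gate_close_time_last_hop,
                         link_delay_last_hop):
    """All times in nanoseconds from the synchronized network's epoch."""
    return (gate_close_time_last_hop - gate_open_time_on_talker
            + link_delay_last_hop)

# Talker window opens at 100 us, last-hop window closes at 850 us,
# last-hop link delay 2 us:
print(gcl_worst_case_delay(100_000, 850_000, 2_000))  # 752000 ns
```
]]></artwork></figure>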
 <t>
 	It should be noted that the scheduled traffic service relies on a synchronized network and coordinated GCL configuration.  Synthesizing the GCLs on multiple nodes is a scheduling problem that considers all TSN/DetNet flows traversing the network, and is NP-hard.  Also, at this writing, the scheduled traffic service supports no more than eight traffic classes, typically using up to seven priority classes and at least one best-effort class.
 </t>
 </section>

 </section>
<section anchor="parameters" title="Parameters for the bounded latency model">
    <section anchor="sender_parms" title="Sender parameters">
    </section>
    <section anchor="relay_parms" title="Relay system parameters">
        <t>
            [[NWF This section talks about the parameters that must be used hop-by-hop
            by a resource reservation protocol.]]
        </t>
    </section>
</section>


</middle>

<!--  *****BACK MATTER ***** -->

<back>

<references title="Normative References">
	<!--?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml"?-->
	&RFC2119;
    &RFC2212;
	&RFC6658;
	&RFC7806;
	&I-D.ietf-detnet-use-cases;
	&I-D.ietf-detnet-architecture;
	&I-D.ietf-detnet-dp-sol-ip;
    &I-D.ietf-detnet-dp-sol-mpls;


	
</references>
	
<references title="Informative References">
	
	<reference anchor="IEEE8021Q" target="http://standards.ieee.org/getieee802/download/802-1Q-2014.pdf">
        <front>
          <title>IEEE Std 802.1Q-2014: IEEE Standard for Local and metropolitan area networks - Bridges and Bridged Networks</title>
          <author>
            <organization>IEEE 802.1</organization>
          </author>
          <date year="2014" />
        </front>
	</reference>
    
    <reference anchor="IEEE8021Qbv" target="http://standards.ieee.org/getieee802/download/802.1Qbv-2015.zip">
        <front>
            <title>IEEE Std 802.1Qbv-2015: IEEE Standard for Local and metropolitan area networks - Bridges and Bridged Networks - Amendment 25: Enhancements for Scheduled Traffic</title>
            <author>
                <organization>IEEE 802.1</organization>
            </author>
            <date year="2015" />
        </front>
    </reference>

    <reference anchor="IEEE802.1Qch"
        target="http://www.ieee802.org/1/files/private/ch-drafts/">
        <front>
            <title>IEEE Std 802.1Qch-2017 IEEE Standard for Local and metropolitan area networks - Bridges and Bridged Networks Amendment 29: Cyclic Queuing and Forwarding (amendment to 802.1Q-2014)</title>
            <author>
                <organization>IEEE</organization>
            </author>
            <date year="2017" />
        </front>
    </reference>
    
    <reference anchor="IEEE802.1Qci"
        target="http://www.ieee802.org/1/files/private/ci-drafts/">
        <front>
            <title>IEEE Std 802.1Qci-2017 IEEE Standard for Local and metropolitan area networks - Bridges and Bridged Networks - Amendment 30: Per-Stream Filtering and Policing</title>
            <author>
                <organization>IEEE</organization>
            </author>
            <date year="2017" />
        </front>
    </reference>

    <reference anchor="IEEE8021Qcr"
        target="http://www.ieee802.org/1/files/private/cr-drafts/">
        <front>
            <title>IEEE P802.1Qcr: IEEE Draft Standard for Local and metropolitan area networks - Bridges and Bridged Networks - Amendment: Asynchronous Traffic Shaping</title>
          <author>
            <organization>IEEE 802.1</organization>
          </author>
          <date year="2017" />
        </front>
	</reference>

    <reference anchor="IEEE8021Qbu"
        target="http://standards.ieee.org/getieee802/download/802.1Qbu-2016.zip">
        <front>
            <title>IEEE Std 802.1Qbu-2016 IEEE Standard for Local and metropolitan area networks - Bridges and Bridged Networks - Amendment 26: Frame Preemption</title>
            <author>
                <organization>IEEE</organization>
            </author>
            <date year="2016" />
        </front>
    </reference>

	<reference anchor="IEEE8023br" target="http://standards.ieee.org/getieee802/download/802.3br-2016.pdf">
        <front>
          <title>IEEE Std 802.3br-2016: IEEE Standard for Local and metropolitan area networks - Ethernet - Amendment 5: Specification and Management Parameters for Interspersing Express Traffic</title>
          <author>
            <organization>IEEE 802.3</organization>
          </author>
          <date year="2016" />
        </front>
	</reference>

	<reference anchor="IEEE8021TSN" target="http://www.ieee802.org/1/">
        <front>
          <title>IEEE 802.1 Time-Sensitive Networking (TSN) Task Group</title>
          <author>
            <organization>IEEE 802.1</organization>
          </author>
          <date />
        </front>
	</reference>
		
	<reference anchor="TSNwithATS" target="https://arxiv.org/abs/1804.10608/">
        <front>
          <title>End-to-end Latency and Backlog Bounds in Time-Sensitive Networking with Credit Based Shapers and Asynchronous Traffic Shaping</title>
          <author>
            <organization>E. Mohammadpour, E. Stai, M. Mohiuddin, and J.-Y. Le Boudec</organization>
          </author>
          <date />
        </front>
	</reference>
    
    <reference anchor="NetCalBook">
        <front>
          <title>Network calculus: a theory of deterministic queuing systems for the internet</title>
          <author>
            <organization>Le Boudec, Jean-Yves, and Patrick Thiran</organization>
          </author>
          <date year="2001"/>
        </front>
	</reference>
    
	<reference anchor="le_boudec_theory_2018" target="http://arxiv.org/abs/1801.08477/">
        <front>
          <title>A Theory of Traffic Regulators for Deterministic Networks with Application to Interleaved Regulators</title>
          <author>
            <organization>J.-Y. Le Boudec</organization>
          </author>
          <date />
        </front>
	</reference>
	
	<reference anchor="charny2000delay" target="https://link.springer.com/chapter/10.1007/3-540-39939-9_1">
        <front>
          <title>Delay Bounds in a Network with Aggregate Scheduling</title>
          <author>
            <organization>A. Charny and J.-Y. Le Boudec</organization>
          </author>
          <date />
        </front>
	</reference>
	
	<reference anchor="bennett2002delay" target="https://dl.acm.org/citation.cfm?id=581870">
        <front>
          <title>Delay Jitter Bounds and Packet Scale Rate Guarantee for Expedited Forwarding</title>
          <author>
            <organization>J.C.R. Bennett, K. Benson, A. Charny, W.F. Courtney, and J.-Y. Le Boudec</organization>
          </author>
          <date />
        </front>
	</reference>
	
	<reference anchor="Specht2016UBS" target="https://ieeexplore.ieee.org/abstract/document/7557870">
        <front>
          <title>Urgency-Based Scheduler for Time-Sensitive Switched Ethernet Networks</title>
          <author>
            <organization>J. Specht and S. Samii</organization>
          </author>
          <date />
        </front>
	</reference>
	
</references>

</back>
</rfc>
