Internet Engineering Task Force                                   TSV WG
INTERNET-DRAFT                                        Mark Handley/ACIRI
draft-ietf-tsvwg-tfrc-01.txt                       Jitendra Padhye/ACIRI
                                                        Sally Floyd/ACIRI
                                              Joerg Widmer/Univ. Mannheim
                                                             2 March 2001
                                                  Expires: September 2001

                    TCP Friendly Rate Control (TFRC):
                         Protocol Specification

Status of this Document

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.
Note that other groups may also distribute working documents as
Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

This document is a product of the IETF TSV WG.  Comments should be
addressed to the authors.

Abstract

This document specifies TCP-Friendly Rate Control (TFRC).  TFRC is a
congestion control mechanism for unicast flows operating in a
best-effort Internet environment.  It is reasonably fair when competing
for bandwidth with TCP flows, but has a much lower variation of
throughput over time compared with TCP, making it more suitable for
applications such as telephony or streaming media where a relatively
smooth sending rate is of importance.

Table of Contents

   1. Introduction
   2. Terminology
   3. Protocol Mechanism
      3.1. TCP Throughput Equation
      3.2. Packet Contents
         3.2.1. Data Packets
         3.2.2. Feedback Packets
   4. Data Sender Protocol
      4.1. Measuring the Packet Size
      4.2. Sender behavior when a feedback packet is received
      4.3. Expiration of nofeedback timer
      4.4. Sender Initialization
      4.5. Preventing Oscillations
      4.6. Scheduling of Packet Transmissions
   5. Calculation of the loss event rate (p)
      5.1. Detection of Lost or Marked Packets
      5.2. Translation from Loss History to Loss Events
      5.3. Inter-loss Event Interval
      5.4. Average Loss Interval
      5.5. History Discounting
   6. Data Receiver Protocol
      6.1. Receiver behavior when a data packet is received
      6.2. Expiration of feedback timer
      6.3. Receiver initialization
   7. Security Considerations
   8. Authors' Addresses
   9. Acknowledgments
   10. References

1. Introduction

This document specifies TCP-Friendly Rate Control (TFRC).  TFRC is a
congestion control mechanism designed for unicast flows operating in an
Internet environment and competing with TCP traffic.
Instead of specifying a complete protocol, this document simply
specifies a congestion control mechanism that could be used in a
transport protocol such as RTP [5], in an application incorporating
end-to-end congestion control at the application level, or in the
context of endpoint congestion management.  This document does not
discuss packet formats, reliability, or implementation-related issues.

TFRC is designed to be reasonably fair when competing for bandwidth
with TCP flows [1].  However, it has a much lower variation of
throughput over time compared with TCP, which makes it more suitable
for applications such as telephony or streaming media where a
relatively smooth sending rate is of importance.

The penalty of having smoother throughput than TCP whilst competing
fairly for bandwidth is that TFRC responds more slowly than TCP to
changes in available bandwidth.  Thus TFRC should only be used when the
application has a requirement for smooth throughput, in particular,
avoiding TCP's halving of the sending rate in response to a single
packet drop.  For applications that simply need to transfer as much
data as possible in as short a time as possible we recommend using TCP,
or, if reliability is not required, using an Additive-Increase,
Multiplicative-Decrease (AIMD) congestion control scheme with similar
parameters to those used by TCP.

TFRC is designed for applications that use a fixed packet size, and
vary their sending rate in packets per second in response to
congestion.  Some audio applications require a fixed interval of time
between packets and vary their packet size instead of their packet rate
in response to congestion.  The congestion control mechanism in this
document cannot be used by those applications; variants of TFRC for
applications that have a fixed sending rate but vary their packet size
in response to congestion will be addressed in a separate document.

TFRC is a receiver-based mechanism, with the calculation of the
congestion control information (i.e., the loss event rate) in the data
receiver rather than in the data sender.  This is well-suited to an
application where the sender is a large web server handling many
concurrent connections, and the receiver has more memory and CPU cycles
available for computation.  In addition, a receiver-based mechanism is
more suitable as a building block for multicast congestion control.

2. Terminology

In this document, the key words "MUST", "MUST NOT", "REQUIRED",
"SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and
"OPTIONAL" are to be interpreted as described in RFC 2119 and indicate
requirement levels for compliant TFRC implementations.

3. Protocol Mechanism

For its congestion control mechanism, TFRC directly uses a throughput
equation for the allowed sending rate as a function of the loss event
rate and round-trip time.  In order to compete fairly with TCP, TFRC
uses the TCP throughput equation, which roughly describes TCP's sending
rate as a function of the loss event rate, round-trip time, and packet
size.  We define a loss event as one or more lost or marked packets
from a window of data, where a marked packet refers to a mark from
Explicit Congestion Notification (ECN) [6].

Generally speaking, TFRC's congestion control mechanism works as
follows:

o  The receiver measures the loss event rate and feeds this information
   back to the sender.

o  The sender also uses these feedback messages to measure the
   round-trip time (RTT).

o  The loss event rate and RTT are then fed into TFRC's throughput
   equation, giving the acceptable transmit rate.

o  The sender then adjusts its transmit rate to match the calculated
   rate.

The dynamics of TFRC are sensitive to how the measurements are
performed and applied.  We recommend specific mechanisms below to
perform and apply these measurements.  Other mechanisms are possible,
but it is important to understand how the interactions between
mechanisms affect the dynamics of TFRC.

3.1. TCP Throughput Equation

Any realistic equation of TCP throughput as a function of loss event
rate and RTT should be suitable for use in TFRC.  However, we note that
the TCP throughput equation used must reflect TCP's retransmit timeout
behavior, as this dominates TCP throughput at higher loss rates.  We
also note that the assumptions implicit in the throughput equation
about the loss event rate parameter have to be a reasonable match to
how the loss rate or loss event rate is actually measured.  Whilst this
match is not perfect for the throughput equation and loss rate
measurement mechanisms given below, in practice the assumptions turn
out to be close enough.

The throughput equation we currently recommend for TFRC is a slightly
simplified version of the throughput equation for Reno TCP from [3].
Ideally we'd prefer a throughput equation based on SACK TCP, but no one
has yet derived the throughput equation for SACK TCP, and from both
simulations and experiments, the differences between the two equations
are relatively minor.

The throughput equation is:

                                     s
   X = --------------------------------------------------------------
       R*sqrt(2*b*p/3) + (t_RTO * (3*sqrt(3*b*p/8) * p * (1+32*p^2)))

Where:

   X is the transmit rate in bytes/second.

   s is the packet size in bytes.

   R is the round trip time in seconds.

   p is the loss event rate, between 0 and 1.0, of the number of loss
   events as a fraction of the number of packets transmitted.

   t_RTO is the TCP retransmission timeout value in seconds.

   b is the number of packets acknowledged by a single TCP
   acknowledgement.

We further simplify this by setting t_RTO = 4*R.  A more accurate
calculation of t_RTO is possible, but experiments with the current
setting have resulted in reasonable fairness with existing TCP
implementations [4].  Another possibility would be to set
t_RTO = max(4*R, one second), to match the recommended minimum of one
second on the RTO [7].

Many current TCP connections use delayed acknowledgements, sending an
acknowledgement for every two data packets received, and thus have a
sending rate modeled by b = 2.  However, TCP is also allowed to send an
acknowledgement for every data packet, and this would be modeled by
b = 1.  Because many TCP implementations do not use delayed
acknowledgements, we recommend b = 1.

In future, different TCP equations may be substituted for this
equation.  The requirement is that the throughput equation be a
reasonable approximation of the sending rate of TCP for conformant TCP
congestion control.
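
For illustration only, the following C sketch evaluates the equation
above.  The function name and the example values in main() are ours,
not part of this specification; the simplifications t_RTO = 4*R and
b = 1 recommended above are applied by the caller.

   #include <math.h>
   #include <stdio.h>

   /* s: packet size (bytes), R: RTT (seconds), p: loss event rate,
      b: packets per acknowledgement, t_RTO: retransmission timeout. */
   static double tfrc_x_calc(double s, double R, double p,
                             double b, double t_RTO)
   {
       double root1 = sqrt(2.0 * b * p / 3.0);
       double root2 = sqrt(3.0 * b * p / 8.0);

       return s / (R * root1 +
                   t_RTO * (3.0 * root2 * p * (1.0 + 32.0 * p * p)));
   }

   int main(void)
   {
       double s = 1000.0, R = 0.1, p = 0.01;      /* example values  */
       double X = tfrc_x_calc(s, R, p, 1.0, 4.0 * R);

       printf("allowed rate: %.0f bytes/s (%.1f packets/s)\n", X, X / s);
       return 0;
   }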

The parameters s (packet size), p (loss event rate) and R (RTT) need to
be measured or calculated by a TFRC implementation.  The measurement of
s is specified in Section 4.1, measurement of R is specified in Section
4.2, and measurement of p is specified in Section 5.  In the rest of
this document, all data rates are measured in bytes/second.

3.2. Packet Contents

Before specifying the sender and receiver functionality, we describe
the contents of the data packets sent by the sender and feedback
packets sent by the receiver.  As TFRC will be used along with a
transport protocol, we do not specify packet formats, as these depend
on the details of the transport protocol used.

3.2.1. Data Packets

Each data packet sent by the data sender contains the following
information:

o  A sequence number.  This number is incremented by one for each data
   packet transmitted.  The field must be sufficiently large that it
   does not wrap, causing two different packets with the same sequence
   number to be in the receiver's recent packet history at the same
   time.

o  A timestamp indicating when the packet is sent.  We denote by ts_i
   the timestamp of the packet with sequence number i.  The resolution
   of the timestamp should typically be measured in milliseconds.

o  The sender's current estimate of the round trip time.  The estimate
   reported in packet i is denoted by R_i.

o  The sender's current transmit rate.  The estimate reported in packet
   i is denoted by X_i.

3.2.2. Feedback Packets

Each feedback packet sent by the data receiver contains the following
information:

o  The timestamp of the last data packet received.  We denote this by
   t_recvdata.  If the last packet received at the receiver has
   sequence number i, then t_recvdata = ts_i.

o  The amount of time elapsed between the receipt of the last data
   packet at the receiver, and the generation of this feedback report.
   We denote this by t_delay.

o  The rate at which the receiver estimates that data was received
   since the last feedback report was sent.  We denote this by X_recv.

o  The receiver's current estimate of the loss event rate, p.
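
For illustration only, one possible in-memory representation of these
fields is sketched below in C.  The structure and field names are ours;
the actual encoding on the wire is left to the transport protocol that
carries TFRC.

   #include <stdint.h>

   struct tfrc_data_info {
       uint32_t seqno;       /* sequence number, incremented per packet  */
       double   ts;          /* send timestamp ts_i (ms resolution)      */
       double   rtt;         /* sender's current RTT estimate R_i (secs) */
       double   x;           /* sender's current transmit rate X_i (B/s) */
   };

   struct tfrc_feedback_info {
       double   t_recvdata;  /* timestamp of last data packet received   */
       double   t_delay;     /* delay between that receipt and this report */
       double   x_recv;      /* estimated receive rate since last report */
       double   p;           /* receiver's loss event rate estimate      */
   };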

4. Data Sender Protocol

The data sender sends a stream of data packets to the data receiver at
a controlled rate.  When a feedback packet is received from the data
receiver, the data sender changes its sending rate, based on the
information contained in the feedback report.  If the sender does not
receive a feedback report for two round trip times, it cuts its sending
rate in half.  This is achieved by means of a timer called the
nofeedback timer.

We specify the sender-side protocol in the following steps:

o  Measurement of the mean packet size being sent.

o  The sender behavior when a feedback packet is received.

o  The sender behavior when the nofeedback timer expires.

o  Oscillation prevention (optional).

o  Scheduling of transmission on non-realtime operating systems.

4.1. Measuring the Packet Size

The parameter s (packet size) is normally known to an application.
There are two cases where this may not be true:

o  The packet size naturally varies depending on the data.  In this
   case, although the packet size varies, that variation is not coupled
   to the transmit rate.  It should normally be safe to use an estimate
   of the mean packet size for s.

o  The application needs to change the packet size rather than the
   number of packets per second to perform congestion control.  This
   would normally be the case with packet audio applications where a
   fixed interval of time needs to be represented by each packet.  Such
   applications need to have a completely different way of measuring
   parameters.

The second class of applications is discussed in a separate document.
For the remainder of this section we assume the sender can estimate the
packet size, and that congestion control is performed by adjusting the
number of packets sent per second.

4.2. Sender behavior when a feedback packet is received

The sender knows its current sending rate, X, and maintains an estimate
of the current round trip time, R, and an estimate of the timeout
interval, t_RTO.

When a feedback packet is received by the sender at time t_now, the
following actions should be performed:

1) Calculate a new round trip time sample:

      R_sample = (t_now - t_recvdata) - t_delay;

2) Update the round trip time estimate:

      If no feedback has been received before
         R = R_sample;
      Else
         R = q*R + (1-q)*R_sample;

   TFRC is not sensitive to the precise value for the filter constant
   q, but we recommend a default value of 0.9.

3) Update the timeout interval:

      t_RTO = 4*R.

4) Update the sending rate:

   First, calculate the allowed transmit rate, X_calc, using the TCP
   equation from Section 3.1.

   Then:

      If (p > 0)
         X = max(min(X_calc, 2*X_recv), s/64);
      Else
         X = max(min(2*X, 2*X_recv), s/R);

   Note that if p == 0, then the sender is in slow-start phase, where
   it approximately doubles the sending rate each round-trip time until
   a loss occurs.  The s/R term gives a minimum sending rate during
   slow-start of one packet per RTT.  When p > 0, the s/64 term ensures
   that the sender sends at least one packet every 64 seconds.

5) Reset the nofeedback timer to expire after max(2*R, 2*s/X) seconds.

4.3. Expiration of nofeedback timer

If the nofeedback timer expires, the sender should perform the
following actions:

1) Cut the sending rate in half.  This is done by modifying the
   sender's cached copy of X_recv (the receive rate).  Because the
   sending rate is limited to at most twice X_recv, modifying X_recv
   limits the current sending rate, but allows the sender to
   slow-start, doubling its sending rate each RTT, if feedback messages
   resume reporting no losses.

      If (X_calc > 2*X_recv)
         X_recv = max(X_recv/2, s/128);
      Else
         X_recv = X_calc/4;

   The s/128 term limits the backoff to one packet every 64 seconds in
   the case of persistent absence of feedback.

2) The value of X must then be recalculated as described under point
   (4) above.

   If the nofeedback timer expires when the sender does not yet have an
   RTT sample, and has not yet received any feedback from the receiver,
   then step (1) can be skipped, and the sending rate cut in half
   directly:

      X = X/2;

3) Restart the nofeedback timer to expire after max(4*R, 2*s/X)
   seconds.

Note that when the sender stops sending, the receiver will stop sending
feedback.  This will cause the nofeedback timer to expire repeatedly
and decrease X_recv.  If the sender subsequently starts to send again,
X_recv will limit the transmit rate, and a normal slow-start phase will
occur until the transmit rate reaches X_calc.
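
The following non-normative C sketch summarizes the update rules of
Sections 4.2 and 4.3.  The state structure, the timer hook
set_nofeedback_timer(), and the passing of X_calc and p as arguments
are our assumptions about the surrounding implementation; tfrc_x_calc()
is the throughput equation of Section 3.1.

   #define Q      0.9     /* RTT filter constant (Section 4.2)          */
   #define T_MBI  64.0    /* minimum rate of one packet per 64 seconds  */

   struct tfrc_sender {
       double s;          /* packet size (bytes)                        */
       double X;          /* current allowed sending rate (bytes/s)     */
       double X_recv;     /* cached receive rate from last feedback     */
       double R;          /* smoothed RTT estimate (seconds), 0 = none  */
       double t_RTO;      /* timeout interval (seconds)                 */
   };

   /* Assumed hooks, not defined here. */
   double tfrc_x_calc(double s, double R, double p, double b, double t_RTO);
   void   set_nofeedback_timer(double seconds);

   static double max2(double a, double b) { return a > b ? a : b; }
   static double min2(double a, double b) { return a < b ? a : b; }

   /* Section 4.2: a feedback packet arrives at time t_now. */
   void on_feedback(struct tfrc_sender *snd, double t_now,
                    double t_recvdata, double t_delay,
                    double X_recv, double p)
   {
       double R_sample = (t_now - t_recvdata) - t_delay;      /* step 1 */

       if (snd->R == 0)                                       /* step 2 */
           snd->R = R_sample;
       else
           snd->R = Q * snd->R + (1 - Q) * R_sample;

       snd->t_RTO = 4 * snd->R;                               /* step 3 */
       snd->X_recv = X_recv;

       if (p > 0) {                                           /* step 4 */
           double X_calc = tfrc_x_calc(snd->s, snd->R, p, 1.0, snd->t_RTO);
           snd->X = max2(min2(X_calc, 2 * X_recv), snd->s / T_MBI);
       } else {
           /* slow-start: roughly double per RTT, at least 1 packet/RTT */
           snd->X = max2(min2(2 * snd->X, 2 * X_recv), snd->s / snd->R);
       }

       set_nofeedback_timer(max2(2 * snd->R, 2 * snd->s / snd->X));
   }

   /* Section 4.3: the nofeedback timer expires.  X_calc and p are the
      most recently computed/reported values. */
   void on_nofeedback_timer(struct tfrc_sender *snd, double X_calc, double p)
   {
       if (snd->R == 0) {
           snd->X = snd->X / 2;        /* no feedback yet: halve X      */
       } else {
           if (X_calc > 2 * snd->X_recv)                      /* step 1 */
               snd->X_recv = max2(snd->X_recv / 2, snd->s / (2 * T_MBI));
           else
               snd->X_recv = X_calc / 4;

           /* step 2: recompute X as in step (4) of on_feedback()       */
           if (p > 0)
               snd->X = max2(min2(X_calc, 2 * snd->X_recv), snd->s / T_MBI);
           else
               snd->X = max2(min2(2 * snd->X, 2 * snd->X_recv),
                             snd->s / snd->R);
       }
       set_nofeedback_timer(max2(4 * snd->R, 2 * snd->s / snd->X));
   }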

4.4. Sender Initialization

To initialize the sender, the value of X is set to 1 packet/second and
the nofeedback timer is set to expire after 2 seconds.  The initial
values for R (RTT) and t_RTO are undefined until they are set as
described above.

4.5. Preventing Oscillations

To prevent oscillatory behavior in environments with a low degree of
statistical multiplexing, it is useful to modify the sender's transmit
rate to provide congestion avoidance behavior by reducing the transmit
rate as the queuing delay (and hence RTT) increases.  To do this the
sender maintains an estimate of the long-term RTT and modifies its
sending rate depending on how the most recent sample of the RTT differs
from this value.  The long-term sample is R_sqmean, and is set as
follows:

   If no feedback has been received before
      R_sqmean = sqrt(R_sample);
   Else
      R_sqmean = q2*R_sqmean + (1-q2)*sqrt(R_sample);

Thus R_sqmean gives the exponentially weighted moving average of the
square root of the RTT samples.  The constant q2 should be set
similarly to q, and we recommend a value of 0.9 as the default.

The sender obtains the base transmit rate, X, from the throughput
function.  It then calculates a modified instantaneous transmit rate
X_inst, as follows:

   X_inst = X * R_sqmean / sqrt(R_sample);

When sqrt(R_sample) is greater than R_sqmean then the queue is
typically increasing and so the transmit rate needs to be decreased for
stable operation.

Note: This modification is not always strictly required, especially if
the degree of statistical multiplexing in the network is high.
However, we recommend that it be done because it makes TFRC behave
better in environments with a low level of statistical multiplexing.
If it is not done, we recommend using a very low value of q, such that
q is close to or exactly zero.

4.6. Scheduling of Packet Transmissions

As TFRC is rate-based, and as operating systems typically cannot
schedule events precisely, it is necessary to be opportunistic about
sending data packets so that the correct average rate is maintained
despite the coarse-grained or irregular scheduling of the operating
system.  Thus a typical sending loop will calculate the correct
inter-packet interval, t_ipi, as follows:

   t_ipi = s/X_inst;

When a sender first starts sending at time t_0, it calculates t_ipi,
and calculates a nominal send time t_1 = t_0 + t_ipi for packet 1.
When the application becomes idle, it checks the current time, t_now,
and then requests re-scheduling after (t_ipi - (t_now - t_0)) seconds.
When the application is re-scheduled, it checks the current time,
t_now, again.  If (t_now > t_1 - delta) then packet 1 is sent.

Now a new t_ipi may be calculated, and used to calculate a nominal send
time t_2 for packet 2: t_2 = t_1 + t_ipi.  The process then repeats,
with each successive packet's send time being calculated from the
nominal send time of the previous packet.

In some cases, when the nominal send time, t_i, of the next packet is
calculated, it may already be the case that t_now > t_i - delta.  In
such a case the packet should be sent immediately.  Thus if the
operating system has coarse timer granularity and the transmit rate is
high, then TFRC may send short bursts of several packets separated by
intervals of the OS timer granularity.

The parameter delta allows a degree of flexibility in the send time of
a packet.  If the operating system has a scheduling timer granularity
of t_gran seconds, then delta would typically be set to:

   delta = min(t_ipi/2, t_gran/2);

t_gran is 10ms on many Unix systems.  If t_gran is not known, a value
of 10ms can be safely assumed.
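
The following non-normative C sketch combines the rate adjustment of
Section 4.5 with the sending loop of this section.  The now(),
send_packet() and sleep_until() routines stand in for host facilities
and, like the variable names, are our assumptions rather than part of
this specification; in a real implementation X and the RTT sample are
updated by the feedback handler while this loop runs.

   #include <math.h>

   #define Q2 0.9                 /* filter constant for R_sqmean       */

   /* Assumed host facilities. */
   double now(void);              /* current time in seconds            */
   void   send_packet(void);      /* transmit one data packet           */
   void   sleep_until(double t);  /* coarse-grained OS reschedule       */

   static double r_sqmean = 0;    /* EWMA of sqrt(RTT) (Section 4.5)    */
   static double r_sample = 0;    /* most recent RTT sample (seconds)   */

   /* Section 4.5: called once per feedback packet with the new sample. */
   void update_rtt_state(double R_sample)
   {
       r_sample = R_sample;
       if (r_sqmean == 0)
           r_sqmean = sqrt(R_sample);
       else
           r_sqmean = Q2 * r_sqmean + (1 - Q2) * sqrt(R_sample);
   }

   static double x_inst(double X)
   {
       /* before any feedback, fall back to the base rate X */
       return (r_sqmean > 0) ? X * r_sqmean / sqrt(r_sample) : X;
   }

   /* Section 4.6: send packets roughly every s/X_inst seconds, sending
      immediately (possibly in a short burst) if we have fallen behind. */
   void send_loop(double s, double X, double t_gran)
   {
       double t_ipi  = s / x_inst(X);
       double delta  = fmin(t_ipi / 2, t_gran / 2);
       double t_next = now() + t_ipi;     /* nominal send time          */

       for (;;) {
           if (now() > t_next - delta) {
               send_packet();
               t_ipi  = s / x_inst(X);    /* rate or RTT may have changed */
               t_next = t_next + t_ipi;   /* nominal, not actual, time  */
           } else {
               sleep_until(t_next - delta);
           }
       }
   }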

5. Calculation of the loss event rate (p)

Obtaining an accurate and stable measurement of the loss event rate is
of primary importance for TFRC.  Loss rate measurement is performed at
the receiver, based on the detection of lost or marked packets from the
sequence numbers of arriving packets.  We describe this process before
describing the rest of the receiver protocol.

5.1. Detection of Lost or Marked Packets

TFRC assumes that all packets contain a sequence number that is
incremented by one for each packet that is sent.  For the purposes of
this specification, we require that if a lost packet is retransmitted,
the retransmission is given a new sequence number that is the latest in
the transmission sequence, and not the same sequence number as the
packet that was lost.  If a transport protocol has the requirement that
it must retransmit with the original sequence number, then the
transport protocol designer must figure out how to distinguish delayed
from retransmitted packets and how to detect lost retransmissions.

The receiver maintains a data structure that keeps track of which
packets have arrived and which are missing.  For the purposes of
specification, we assume that the data structure consists of a list of
packets that have arrived along with the receiver timestamp when each
packet was received.  In practice this data structure will normally be
stored in a more compact representation, but this is
implementation-specific.

The loss of a packet is detected by the arrival of at least three
packets with a higher sequence number than the lost packet.  The
requirement for three subsequent packets is the same as with TCP, and
is to make TFRC more robust in the presence of reordering.  In contrast
to TCP, if a packet arrives late (after 3 subsequent packets arrived)
in TFRC, the late packet can fill the hole in TFRC's reception record,
and the receiver can recalculate the loss event rate.  Future versions
of TFRC might make the requirement for three subsequent packets
adaptive based on experienced packet reordering, but we do not specify
such a mechanism here.

For an ECN-capable connection, a marked packet is detected as a
congestion event as soon as it arrives, without having to wait for the
arrival of subsequent packets.
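
For illustration only, the following C sketch applies this rule to the
simple list-of-arrivals data structure described above.  The
representation is ours, and sequence number wrap-around is ignored for
brevity.

   #include <stdbool.h>
   #include <stddef.h>

   #define NDUPACK 3     /* packets with higher seqno needed to declare loss */

   struct rx_record {
       unsigned long seqno;   /* sequence number of a received packet   */
       double        t_recv;  /* receiver timestamp of its arrival      */
   };

   /* history[0..n-1] holds the packets received so far, in any order.  */
   bool is_lost(unsigned long seq, const struct rx_record *history, size_t n)
   {
       size_t later = 0;

       for (size_t i = 0; i < n; i++) {
           if (history[i].seqno == seq)
               return false;          /* it arrived (possibly late)     */
           if (history[i].seqno > seq)
               later++;
       }
       return later >= NDUPACK;
   }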

5.2. Translation from Loss History to Loss Events

TFRC requires that the measured loss event rate be robust to several
consecutive lost packets that are part of the same loss event.  This is
similar to TCP, which (typically) only performs one halving of the
congestion window during any single RTT.  Thus the receiver needs to
map the packet loss history into a loss event record, where a loss
event is one or more packets lost in an RTT.  To perform this mapping,
the receiver needs to know the RTT to use, and this is supplied
periodically by the sender, typically as control information
piggy-backed onto a data packet.  TFRC is not sensitive to how the RTT
measurement sent to the receiver is made, but we recommend using the
sender's calculated RTT, R (see Section 4.2), for this purpose.

To determine whether a lost or marked packet should start a new loss
event, or be counted as part of an existing loss event, we need to
compare the sequence numbers and timestamps of the packets that arrived
at the receiver.  For a marked packet S_new, its reception time T_new
can be noted directly.  For a lost packet, we can interpolate to infer
the nominal "arrival time".  Assume:

   S_loss is the sequence number of a lost packet.

   S_before is the sequence number of the last packet to arrive with
   sequence number before S_loss.

   S_after is the sequence number of the first packet to arrive with
   sequence number after S_loss.

   T_before is the reception time of S_before.

   T_after is the reception time of S_after.

Note that T_before can either be before or after T_after due to
reordering.

For a lost packet S_loss, we can interpolate its nominal "arrival time"
at the receiver from the arrival times of S_before and S_after.  Thus:

   T_loss = T_before + ( (T_after - T_before)
                         * (S_loss - S_before)/(S_after - S_before) );

Note that if the sequence space wrapped between S_before and S_after,
then the sequence numbers must be modified to take this into account
before performing this calculation.  If the largest possible sequence
number is S_max, and S_before > S_after, then modifying each sequence
number S by S' = (S + (S_max + 1)/2) mod (S_max + 1) would normally be
sufficient.

If the lost packet S_old was determined to have started the previous
loss event, and we have just determined that S_new has been lost, then
we interpolate the nominal arrival times of S_old and S_new, called
T_old and T_new respectively.

If T_old + R >= T_new, then S_new is part of the existing loss event.
Otherwise S_new is the first packet in a new loss event.

5.3. Inter-loss Event Interval

If a loss interval, A, is determined to have started with packet
sequence number S_A and the next loss interval, B, started with packet
sequence number S_B, then the number of packets in loss interval A is
given by (S_B - S_A).

5.4. Average Loss Interval

To calculate the loss event rate p, we first calculate the average loss
interval.  This is done using a filter that weights the n most recent
loss event intervals in such a way that the measured loss event rate
changes smoothly.

Weights w_0 to w_(n-1) are calculated as:

   If (i < n/2)
      w_i = 1;
   Else
      w_i = 1 - (i - (n/2 - 1))/(n/2 + 1);

Thus if n=8, the values of w_0 to w_7 are:

   1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2

The value n for the number of loss intervals used in calculating the
loss event rate determines TFRC's speed in responding to changes in the
level of congestion.  As currently specified, TFRC should not be used
for values of n significantly greater than 8, for traffic that might
compete in the global Internet with TCP.  At the very least, safe
operation with values of n greater than 8 would require a slight change
to TFRC's mechanisms to include a more severe response to two or more
round-trip times with heavy packet loss.
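
For illustration, the following short C program evaluates the weight
definition above for n = 8 and prints the values listed; it is not part
of the specification.

   #include <stdio.h>

   int main(void)
   {
       const int n = 8;

       for (int i = 0; i < n; i++) {
           double w_i = (i < n / 2)
                          ? 1.0
                          : 1.0 - (i - (n / 2.0 - 1)) / (n / 2.0 + 1);
           printf("w_%d = %.1f\n", i, w_i);
       }
       return 0;
   }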

To calculate the average loss interval we need to decide whether to
include the interval since the most recent packet loss event.  We only
do this if it is sufficiently large to increase the average loss
interval.

Thus if the most recent loss intervals are I_0 to I_n, with I_0 being
the interval since the most recent loss event, then we calculate the
average loss interval I_mean as:

   I_tot0 = 0;
   I_tot1 = 0;
   W_tot = 0;
   for (i = 0 to n-1) {
      I_tot0 = I_tot0 + (I_i * w_i);
      W_tot = W_tot + w_i;
   }
   for (i = 1 to n) {
      I_tot1 = I_tot1 + (I_i * w_(i-1));
   }
   I_tot = max(I_tot0, I_tot1);
   I_mean = I_tot / W_tot;

The loss event rate, p, is simply:

   p = 1 / I_mean;

5.5. History Discounting

As described in Section 5.4, the most recent loss interval is only
assigned 1/(0.75*n) of the total weight in calculating the average loss
interval, regardless of the size of the most recent loss interval.
This section describes an optional history discounting mechanism,
discussed further in [2] and [4], that allows the TFRC receiver to
adjust the weights, concentrating more of the relative weight on the
most recent loss interval, when the most recent loss interval is more
than twice as large as the computed average loss interval.

To carry out history discounting, we associate a discount factor DF_i
with each loss interval I_i, for i > 0, where each discount factor is a
floating point number.  The discount array maintains the cumulative
history of discounting for each loss interval.  At the beginning, the
values of DF_i in the discount array are initialized to 1:

   for (i = 1 to n) {
      DF_i = 1;
   }

History discounting also uses a general discount factor DF, also a
floating point number, that is also initialized to 1.  First we show
how the discount factors are used in calculating the average loss
interval, and then we describe how the discount factors are modified
over time.

As described in Section 5.4, the average loss interval is calculated
using the n previous loss intervals I_1, ..., I_n, and the interval I_0
that represents the number of packets received since the last loss
event.  The computation of the average loss interval using the discount
factors is a simple modification of the procedure in Section 5.4, as
follows:

   I_tot0 = 0;
   I_tot1 = 0;
   W_tot = 0;
   for (i = 0 to n-1) {
      I_tot0 = I_tot0 + (I_i * w_i * DF_i * DF);
      W_tot = W_tot + w_i * DF_i * DF;
   }
   for (i = 1 to n) {
      I_tot1 = I_tot1 + (I_i * w_(i-1) * DF_i * DF);
   }
   p = W_tot / max(I_tot0, I_tot1);

The general discount factor, DF, is updated on every packet arrival as
follows.  First, the receiver computes the weighted average I_tot of
the loss intervals I_1, ..., I_n:

   I_tot = 0;
   for (i = 1 to n) {
      I_tot = I_tot + (I_i * w_(i-1) * DF_i);
   }

This weighted average I_tot is compared against I_0, the number of
packets received since the last loss event.  If I_0 is greater than
twice I_tot, then the new loss interval is considerably larger than the
old ones, and the general discount factor DF is updated to decrease the
relative weight on the older intervals, as follows:

   if (I_0 > 2 * I_tot) {
      DF = 2 * I_tot / I_0;
      if (DF < THRESHOLD)
         DF = THRESHOLD;
   } else
      DF = 1;

A nonzero value for THRESHOLD ensures that older loss intervals from an
earlier time of high congestion are not discounted entirely.  We
recommend a THRESHOLD of 0.5.  Note that with each new packet arrival,
I_0 will increase further, and the discount factor DF will be updated.

When a new loss event occurs, the current interval shifts from I_0 to
I_1, loss interval I_i shifts to interval I_(i+1), and the loss
interval I_n is forgotten.  The previous discount factor DF has to be
incorporated into the discount array.  Because DF_i carries the
discount factor associated with loss interval I_i, the DF_i array has
to be shifted as well.  This is done as follows:

   for (i = 1 to n) {
      DF_i = DF * DF_i;
   }
   for (i = n-1 to 0) {
      DF_(i+1) = DF_i;
   }
   I_0 = 1;
   DF_0 = 1;
   DF = 1;

This completes the description of the optional history discounting
mechanism.  We emphasize that this is an optional mechanism whose sole
purpose is to allow TFRC to respond somewhat more quickly to the sudden
absence of congestion, as represented by a long current loss interval.

6. Data Receiver Protocol

The receiver periodically sends feedback messages to the sender.
Feedback packets should normally be sent at least once per RTT, unless
the sender is sending at a rate of less than one packet per RTT, in
which case a feedback packet should be sent for every data packet
received.  A feedback packet should also be sent whenever a new loss
event is detected without waiting for the end of an RTT, and whenever
an out-of-order data packet is received that removes a loss event from
the history.

If the sender is transmitting at a high rate (many packets per RTT)
there may be some advantages to sending periodic feedback messages more
than once per RTT, as this allows faster response to changing RTT
measurements and more resilience to feedback packet loss.  However,
there is little gain from sending a large number of feedback messages
per RTT.

6.1. Receiver behavior when a data packet is received

When a data packet is received, the receiver performs the following
steps:

1) Add the packet to the packet history.

2) Let the previous value of p be p_prev.  Calculate the new value of p
   as described in Section 5.

3) If p > p_prev, cause the feedback timer to expire, and perform the
   actions described in Section 6.2.

   If p <= p_prev, no action need be performed.

   However, an optimization might check to see if the arrival of the
   packet caused a hole in the packet history to be filled, and
   consequently two loss intervals were merged into one.  If this is
   the case, the receiver might also send feedback immediately.  The
   effects of such an optimization are normally expected to be small.

6.2. Expiration of feedback timer

When the feedback timer at the receiver expires, the action to be taken
depends on whether data packets have been received since the last
feedback was sent.

Let the maximum sequence number of a packet at the receiver so far be
S_m, and the value of the RTT measurement included in packet S_m be
R_m.  If data packets have been received since the previous feedback
was sent, the receiver performs the following steps:

1) Calculate the average loss event rate using the algorithm described
   above.

2) Calculate the measured receive rate, X_sample, based on the packets
   received within the previous R_m seconds.

3) Calculate X_recv as follows:

      X_recv = q3*X_sample + (1-q3)*X_recv;

   where q3 is a constant with recommended value 0.5.

4) Prepare and send a feedback packet containing the information
   described in Section 3.2.2.

5) Restart the feedback timer to expire after R_m seconds.

If no data packets have been received since the last feedback was sent,
no feedback packet is sent, and the feedback timer is restarted to
expire after R_m seconds.
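
The following non-normative C sketch outlines one way to implement the
steps above.  The state structure and the send_feedback(),
set_feedback_timer(), now() and compute_loss_event_rate() hooks are our
assumptions about the surrounding implementation.

   #define Q3 0.5                /* EWMA constant for X_recv            */

   struct tfrc_receiver {
       double p;                 /* current loss event rate estimate    */
       double X_recv;            /* smoothed receive rate (bytes/s)     */
       double R_m;               /* RTT reported in highest-seqno packet */
       double ts_last;           /* sender timestamp of last data packet */
       double t_last_arrival;    /* local time that packet arrived      */
       double bytes_in_last_rtt; /* bytes received in the last R_m secs */
       int    data_since_last_feedback;
   };

   /* Assumed hooks, not defined here. */
   double now(void);
   void   send_feedback(double t_recvdata, double t_delay,
                        double X_recv, double p);
   void   set_feedback_timer(double seconds);
   double compute_loss_event_rate(void);      /* Section 5 machinery    */

   void on_feedback_timer(struct tfrc_receiver *rcv)
   {
       if (rcv->data_since_last_feedback) {
           double X_sample = rcv->bytes_in_last_rtt / rcv->R_m; /* step 2 */

           rcv->p      = compute_loss_event_rate();             /* step 1 */
           rcv->X_recv = Q3 * X_sample + (1 - Q3) * rcv->X_recv;/* step 3 */

           /* step 4: t_recvdata and t_delay as defined in Section 3.2.2 */
           send_feedback(rcv->ts_last, now() - rcv->t_last_arrival,
                         rcv->X_recv, rcv->p);

           rcv->bytes_in_last_rtt = 0;
           rcv->data_since_last_feedback = 0;
       }
       set_feedback_timer(rcv->R_m);                            /* step 5 */
   }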

6.3. Receiver initialization

The receiver is initialized by the first packet that arrives at the
receiver.  Let the sequence number of this packet be i.

When the first packet is received:

o  Set p = 0.

o  Set X_recv = X_send, where X_send is the sending rate the sender
   reports in the first data packet.

o  Prepare and send a feedback packet.

o  Set the feedback timer to expire after R_i seconds.

7. Security Considerations

TFRC is not a transport protocol in its own right, but a congestion
control mechanism that is intended to be used in conjunction with a
transport protocol.  Therefore security primarily needs to be
considered in the context of a specific transport protocol and its
authentication mechanisms.

Congestion control mechanisms can potentially be exploited to create
denial of service.  This may occur through spoofed feedback.  Thus any
transport protocol that uses TFRC should take care to ensure that
feedback is only accepted from the receiver of the data.  The precise
mechanism to achieve this will, however, depend on the transport
protocol itself.

In addition, congestion control mechanisms may potentially be
manipulated by a greedy receiver that wishes to receive more than its
fair share of network bandwidth.  A receiver might do this by claiming
to have received packets that in fact were lost due to congestion.
Possible defenses against such a receiver would normally include some
form of nonce that the receiver must feed back to the sender to prove
receipt.  However, the details of such a nonce would depend on the
transport protocol, and in particular on whether the transport protocol
is reliable or unreliable.

8. Authors' Addresses

   Mark Handley, Jitendra Padhye, Sally Floyd
   ACIRI/ICSI
   1947 Center St, Suite 600
   Berkeley, CA 94708
   mjh@aciri.org, padhye@aciri.org, floyd@aciri.org

   Joerg Widmer
   Lehrstuhl Praktische Informatik IV
   Universitaet Mannheim
   L 15, 16 - Room 415
   D-68131 Mannheim
   Germany
   widmer@informatik.uni-mannheim.de

9. Acknowledgments

We would like to acknowledge feedback and discussions on equation-based
congestion control with a wide range of people, including members of
the Reliable Multicast Research Group, the Reliable Multicast Transport
Working Group, and the End-to-End Research Group.  We would like to
thank Eduardo Urzaiz and Shushan Wen for feedback on earlier versions
of this document, and to thank Mark Allman for his extensive feedback
from using the draft to produce a working implementation.

10. References

[1] S. Floyd, M. Handley, J. Padhye, and J. Widmer, "Equation-Based
    Congestion Control for Unicast Applications", Proc. ACM SIGCOMM
    2000, August 2000.

[2] S. Floyd, M. Handley, J. Padhye, and J. Widmer, "Equation-Based
    Congestion Control for Unicast Applications: the Extended Version",
    ICSI tech report TR-00-03, March 2000.

[3] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, "Modeling TCP
    Throughput: A Simple Model and its Empirical Validation", Proc. ACM
    SIGCOMM 1998.

[4] J. Widmer, "Equation-Based Congestion Control", Diploma Thesis,
    University of Mannheim, February 2000.  URL
    "http://www.aciri.org/tfrc/".

[5] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, "RTP: A
    Transport Protocol for Real-Time Applications", RFC 1889, January
    1996.

[6] K. Ramakrishnan and S. Floyd, "A Proposal to add Explicit
    Congestion Notification (ECN) to IP", RFC 2481, January 1999.

[7] V. Paxson and M. Allman, "Computing TCP's Retransmission Timer",
    RFC 2988, November 2000.