idnits 2.17.1
draft-irtf-iccrg-multfrc-01.txt:
Checking boilerplate required by RFC 5378 and the IETF Trust (see
https://trustee.ietf.org/license-info):
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
----------------------------------------------------------------------------
No issues found here.
Checking nits according to https://www.ietf.org/id-info/checklist :
----------------------------------------------------------------------------
** The document seems to lack an IANA Considerations section. (See Section
2.2 of https://www.ietf.org/id-info/checklist for how to handle the case
when there are no actions for IANA.)
** The document seems to lack a both a reference to RFC 2119 and the
recommended RFC 2119 boilerplate, even if it appears to use RFC 2119
keywords.
RFC 2119 keyword, line 198: '...es that the use of b=1 is RECOMMENDED....'
RFC 2119 keyword, line 238: '...etting n to 8 is RECOMMENDED. The sam...'
RFC 2119 keyword, line 399: '... N MUST be set at the beginning of a...'
RFC 2119 keyword, line 445: '...n set for MulTFRC SHOULD NOT exceed 6....'
RFC 2119 keyword, line 478: '...ximum value of N MUST be restricted. ...'
(1 more instance...)
Miscellaneous warnings:
----------------------------------------------------------------------------
== The copyright year in the IETF Trust and authors Copyright Line does not
match the current year
-- The document date (July 31, 2010) is 5017 days in the past. Is this
intentional?
Checking references for intended status: Experimental
----------------------------------------------------------------------------
-- Obsolete informational reference (is this intentional?): RFC 2309
(Obsoleted by RFC 7567)
Summary: 2 errors (**), 0 flaws (~~), 1 warning (==), 2 comments (--).
Run idnits with the --verbose option for more detailed information about
the items above.
--------------------------------------------------------------------------------
2 Internet Engineering Task Force M. Welzl
3 Internet-Draft University of Oslo
4 Intended status: Experimental D. Damjanovic
5 Expires: February 1, 2011 University of Innsbruck
6 S. Gjessing
7 University of Oslo
8 July 31, 2010
10 MulTFRC: TFRC with weighted fairness
11 draft-irtf-iccrg-multfrc-01.txt
13 Abstract
15 This document specifies the MulTFRC congestion control mechanism.
16 MulTFRC is derived from TFRC and can be implemented by carrying out a
17 few small and simple changes to the original mechanism. Its behavior
18 differs from TFRC in that it emulates a number of TFRC flows with
19 more flexibility than what would be practical or even possible using
20 multiple real TFRC flows. Additionally, MulTFRC better preserves the
21 original features of TFRC than multiple real TFRCs do.
23 Status of this Memo
25 This Internet-Draft is submitted in full conformance with the
26 provisions of BCP 78 and BCP 79.
28 Internet-Drafts are working documents of the Internet Engineering
29 Task Force (IETF). Note that other groups may also distribute
30 working documents as Internet-Drafts. The list of current Internet-
31 Drafts is at http://datatracker.ietf.org/drafts/current/.
33 Internet-Drafts are draft documents valid for a maximum of six months
34 and may be updated, replaced, or obsoleted by other documents at any
35 time. It is inappropriate to use Internet-Drafts as reference
36 material or to cite them other than as "work in progress."
38 This Internet-Draft will expire on February 1, 2011.
40 Copyright Notice
42 Copyright (c) 2010 IETF Trust and the persons identified as the
43 document authors. All rights reserved.
45 This document is subject to BCP 78 and the IETF Trust's Legal
46 Provisions Relating to IETF Documents
47 (http://trustee.ietf.org/license-info) in effect on the date of
48 publication of this document. Please review these documents
49 carefully, as they describe your rights and restrictions with respect
50 to this document. Code Components extracted from this document must
51 include Simplified BSD License text as described in Section 4.e of
52 the Trust Legal Provisions and are provided without warranty as
53 described in the Simplified BSD License.
55 Table of Contents
57 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3
58 2. Specification . . . . . . . . . . . . . . . . . . . . . . . . 4
59 2.1. Section 3 of RFC 5348 . . . . . . . . . . . . . . . . . . 4
60 2.2. Section 4 of RFC 5348 . . . . . . . . . . . . . . . . . . 5
61 2.3. Section 5 of RFC 5348 . . . . . . . . . . . . . . . . . . 6
62 2.4. Section 6 of RFC 5348 . . . . . . . . . . . . . . . . . . 7
63 2.5. Section 8 of RFC 5348 . . . . . . . . . . . . . . . . . . 8
64 2.6. Appendix A of RFC 5348 . . . . . . . . . . . . . . . . . . 9
65 3. Usage Considerations . . . . . . . . . . . . . . . . . . . . . 9
66 3.1. Which applications could use MulTFRC? . . . . . . . . . . 9
67 3.2. Setting N . . . . . . . . . . . . . . . . . . . . . . . . 9
68 4. Security Considerations . . . . . . . . . . . . . . . . . . . 11
69 5. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 11
70 6. References . . . . . . . . . . . . . . . . . . . . . . . . . . 12
71 6.1. Normative References . . . . . . . . . . . . . . . . . . . 12
72 6.2. Informative References . . . . . . . . . . . . . . . . . . 12
73 Appendix A. X_Bps implementation considerations . . . . . . . . . 13
74 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 15
76 1. Introduction
78 "TCP-friendliness", the requirement for a flow to behave under
79 congestion like a flow produced by a conformant TCP (introduced by
80 the name of "TCP-compatibility" in [RFC2309]), has been put into
81 question in recent years (cf. [Bri07]). As an illustrative example,
82 consider the fact that not all data transfers are of equal importance
83 to a user. A user may therefore want to assign different priorities
84 to different flows between two hosts, but TCP(-friendly) congestion
85 control would always let these flows use the same sending rate.
86 Users and their applications are now already bypassing TCP-
87 friendliness in practice: since multiple TCP flows can better
88 saturate a bottleneck than a single one, some applications open
89 multiple connections as a simple workaround. The "GridFTP" [All03]
90 protocol explicitly provides this function as a performance
91 improvement.
93 Some research efforts were therefore carried out to develop protocols
94 where a weight can directly be applied to the congestion control
95 mechanism, allowing a flow to be as aggressive as a number of
96 parallel TCP flows at the same time. The first, and best known, such
97 protocol is MulTCP [Cro+98], which emulates N TCPs in a rather simple
98 fashion. Improved versions were later published, e.g. Stochastic
99 TCP [Hac+04] and Probe-Aided (PA-)MulTCP [Kuo+08]. These protocols
100 could be called "N-TCP-friendly", i.e. as TCP-friendly as N TCPs.
102 MulTFRC, defined in this document, does with TFRC [RFC5348] what
103 MulTCP does with TCP. In [Dam+09] [Dam10], it was shown that MulTFRC
104 achieves its goal of emulating N flows better than MulTCP (and
105 improved versions of it) and has a number of other benefits. For
106 instance, MulTFRC with N=2 is more reactive than two real TFRC flows
107 are, and it has a smoother sending rate than two real TFRC flows
108 do. Moreover, since it is only one mechanism, a protocol that uses
109 MulTFRC can send a single data stream with the congestion control
110 behavior of multiple data streams without the need to split the data
111 and spread it over separate connections. Depending on the protocol
112 in use, N real TFRC flows can also be expected to have N times the
113 overhead for, e.g., connection setup and teardown, of a MulTFRC flow
114 with the same value of N.
116 The core idea of TFRC is to achieve TCP-friendliness by explicitly
117 calculating an equation which approximates the steady-state
118 throughput of TCP and sending as much as the calculation says. The
119 core idea of MulTFRC is to replace this equation in TFRC with the
120 algorithm from [Dam+08] [Dam10], which approximates the steady-state
121 throughput of N TCP flows. MulTFRC can be implemented via a few
122 simple changes to the TFRC code. It is therefore defined here by
123 specifying how it differs from the TFRC specification [RFC5348].
125 2. Specification
127 This section lists the changes to [RFC5348] that must be applied to
128 turn TFRC into MulTFRC. The small number of changes ensures that
129 many original properties of a single TFRC flow are preserved, which
130 is often the most appropriate choice (e.g. it would probably not make
131 sense for a MulTFRC flow to detect a data-limited interval
132 differently than a single TFRC flow would). It also makes MulTFRC
133 easy to understand and implement. Experiments have shown that these
134 changes are enough to attain the desired effect.
136 2.1. Section 3 of RFC 5348
138 While the TCP throughput equation requires the loss event rate,
139 round-trip time and segment size as input, the algorithm to be used
140 for MulTFRC additionally needs the number of packets lost in a loss
141 event. The equation, specified in [RFC5348] as
143                                      s
144 X_Bps = ----------------------------------------------------------
145         R*sqrt(2*b*p/3) + (t_RTO * (3*sqrt(3*b*p/8)*p*(1+32*p^2)))
147 is replaced with the following algorithm, which returns X_Bps, the
148 average transmit rate of N TCPs in bytes per second:
150 If (N < 12) {
151     af = N * (1-(1-1/N)^j);
152 }
153 Else {
154     af = j;
155 }
156 af = max(min(af,ceil(N)),1);
157 a = p*b*af*(24*N^2+p*b*af*(N-2*af)^2);
158 x = (af*p*b*(2*af-N)+sqrt(a))/(6*N^2*p);
159 z = t_RTO*(1+32*p^2)/(1-p);
160 q = min(2*j*b*z/(R*(1+3*N/j)*x^2), N*z/(x*R), N);
161 X_Bps = ((1-q/N)/(p*x*R)+q/(z*(1-p)))*s;
163 Where:
165 s is the segment size in bytes (excluding IP and transport
166 protocol headers).
168 R is the round-trip time in seconds.
170 b is the maximum number of packets acknowledged by a single TCP
171 acknowledgement.
173 p is the loss event rate, between 0 and 1.0, of the number of loss
174 events as a fraction of the number of packets transmitted.
176 j is the number of packets lost in a loss event.
178 t_RTO is the TCP retransmission timeout value in seconds.
180 N is the number of TFRC flows that MulTFRC should emulate. N is a
181 positive rational number; a discussion of appropriate values for
182 this parameter, and reasons for choosing them, is provided in
183 Section 3.2.
185 ceil(N) gives the smallest integer greater than or equal to N.
187 x, af, a, z and q are temporary floating point variables.
189 Appendix A contains an argument that shows why the calculations in
190 the algorithm will not overflow, underflow or produce significant
191 rounding errors.
193 Section 3.1 of [RFC5348] contains a recommendation for setting the
194 t_RTO parameter, and a simplification of the equation as a result of
195 setting this parameter in a specific way. This part of the TFRC
196 specification could be applied here too. Section 3.1 of [RFC5348]
197 also contains a discussion of the parameter b for delayed
198 acknowledgements and concludes that the use of b=1 is RECOMMENDED.
199 This is also the case for MulTFRC.
201 Section 3.2.2 of [RFC5348] specifies the contents of feedback
202 packets. In addition to the information listed there, a MulTFRC
203 feedback packet also carries j, the number of packets lost in a loss
204 event.
206 2.2. Section 4 of RFC 5348
208 The procedure for updating the allowed sending rate in section 4.3 of
209 [RFC5348] ("action 4") contains the statement:
211 Calculate X_Bps using the TCP throughput equation.
213 which is replaced with the statement:
215 If (p==1) {
216     X_Bps = s*N/t_mbi;
217 } Else {
218     Calculate X_Bps using the algorithm defined in section 3.
219 }
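The p == 1 branch corresponds to the minimum sending rate of one segment per t_mbi seconds, scaled by the number of emulated flows; [RFC5348] defines t_mbi as 64 seconds. A small illustrative sketch (names chosen here, not part of the specification):

```python
T_MBI = 64  # maximum backoff interval in seconds, per RFC 5348

def min_rate_bps(s, N):
    # One segment of s bytes per t_mbi seconds, scaled by the
    # number of emulated flows N.
    return s * N / T_MBI
```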
221 2.3. Section 5 of RFC 5348
223 Section 5.2 of [RFC5348] explains how a lost packet that starts a new
224 loss event should be distinguished from a lost packet that is a part
225 of the previous loss event interval. Here, additionally the number
226 of packets lost in a loss event is counted, and therefore this
227 section is extended with:
229 If S_new is a part of the current loss interval, LP_0 (the number of
230 lost packets in the current interval) is increased by 1. On the
231 other hand, if S_new starts a new loss event, LP_0 is set to 1.
233 Section 5.4 of [RFC5348] contains the algorithm for calculating the
234 average loss interval that is needed for calculation of the loss
235 event rate, p. MulTFRC also requires the number of lost packets in a
236 loss event, j. In [RFC5348] the calculation of the average loss
237 interval is done using a filter that weights the n most recent loss
238 event intervals, and setting n to 8 is RECOMMENDED. The same
239 algorithm is used here for calculating the average loss interval.
240 For the number of lost packets in a loss event interval, j, the
241 weighted average number of lost packets in the n most recent loss
242 intervals is taken and the same filter is used.
244 For calculating the average number of packets lost in a loss event
245 interval we use the same loss intervals as for the p calculation.
246 Let LP_0 to LP_k be the number of lost packets in the k most recent
247 loss intervals. The algorithm for calculating I_mean in Section 5.4
248 of [RFC5348] (page 23) is extended by adding, after the last line ("p
249 = 1 / I_mean;"):
251 LP_tot = 0;
252 If (I_tot0 > I_tot1) {
253     for (i = 0 to k-1) {
254         LP_tot = LP_tot + (LP_i * w_i);
255     }
256 }
257 Else {
258     for (i = 1 to k) {
259         LP_tot = LP_tot + (LP_i * w_(i-1));
260     }
261 }
262 j = LP_tot/W_tot;
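A possible transcription of the extended Section 5.4 computation, assuming n = 8 intervals and the [RFC5348] weights; the function and variable names are illustrative, and the loop bounds mirror the I_tot0/I_tot1 calculation of [RFC5348]:

```python
# RFC 5348 weights for n = 8 loss intervals (most recent first)
W = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]

def loss_rate_and_j(I, LP):
    # I[0..k]:  loss interval lengths, I[0] being the open, current one
    # LP[0..k]: packets lost in each of those intervals
    # Returns (p, j) as computed by the extended Section 5.4 algorithm.
    k = len(I) - 1
    I_tot0 = sum(I[i] * W[i] for i in range(k))
    I_tot1 = sum(I[i] * W[i-1] for i in range(1, k + 1))
    W_tot = sum(W[:k])
    I_mean = max(I_tot0, I_tot1) / W_tot
    p = 1 / I_mean
    if I_tot0 > I_tot1:
        LP_tot = sum(LP[i] * W[i] for i in range(k))
    else:
        LP_tot = sum(LP[i] * W[i-1] for i in range(1, k + 1))
    j = LP_tot / W_tot
    return p, j
```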
264 In section 5.5 of [RFC5348] (page 25), the algorithm that ends with
265 "p = min(W_tot0/I_tot0, W_tot1/I_tot1);" is extended by adding:
267 LP_tot = 0;
268 If (I_tot0 > I_tot1) {
269     for (i = 0 to k-1) {
270         LP_tot = LP_tot + (LP_i * w_i * DF_i * DF);
271     }
272     j = LP_tot/W_tot0;
273 }
274 Else {
275     for (i = 1 to k) {
276         LP_tot = LP_tot + (LP_i * w_(i-1) * DF_i);
277     }
278     j = LP_tot/W_tot1;
279 }
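With history discounting, the same idea can be sketched as follows (illustrative names; DF_i and DF are the discount factors of section 5.5 of [RFC5348], and the I_tot and W_tot values are assumed to have been computed as specified there):

```python
# RFC 5348 weights for n = 8 loss intervals (most recent first)
W = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]

def j_discounted(LP, DF, DF_gen, I_tot0, I_tot1, W_tot0, W_tot1):
    # LP[0..k]: packets lost per interval; DF[0..k]: per-interval
    # discount factors; DF_gen: the general discount factor DF.
    k = len(LP) - 1
    if I_tot0 > I_tot1:
        LP_tot = sum(LP[i] * W[i] * DF[i] * DF_gen for i in range(k))
        return LP_tot / W_tot0
    LP_tot = sum(LP[i] * W[i-1] * DF[i] for i in range(1, k + 1))
    return LP_tot / W_tot1
```

With all discount factors equal to 1, the result reduces to the undiscounted average of the previous algorithm.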
281 2.4. Section 6 of RFC 5348
283 The steps to be carried out by the receiver when a packet is received
284 in section 6.1 of [RFC5348] ("action 4") contain the statement:
286 3) Calculate p: Let the previous value of p be p_prev. Calculate
287 the new value of p as described in Section 5.
289 which is replaced with the statement:
291 3) Calculate p and j: Let the previous values of p and j be p_prev
292 and j_prev. Calculate the new values of p and j as described in
293 Section 5.
295 The steps to be carried out by the receiver upon expiration of
296 feedback timer in section 6.2 of [RFC5348] ("action 1") contain the
297 statement:
299 1) Calculate the average loss event rate using the algorithm
300 described in Section 5.
302 which is replaced with:
304 1) Calculate the average loss event rate and average number of
305 lost packets in a loss event using the algorithm described in
306 Section 5.
308 This statement is added at the beginning of the list of initial steps
309 to take when the first packet is received, in section 6.3 of
310 [RFC5348]:
312 o Set j = 0.
314 Section 6.3.1 of [RFC5348] discusses how the loss history is
315 initialized after the first loss event. TFRC approximates the rate
316 to be the maximum value of X_recv so far, assuming that a higher rate
317 introduces loss. Therefore j for this rate is approximated by 1 and
318 the number of packets lost in the first interval is set to 1. This
319 is accomplished by the following change. The first sentence of the
320 fourth paragraph (in section 6.3.1) is:
322 TFRC does this by finding some value, p, for which the throughput
323 equation in Section 3.1 gives a sending rate within 5% of X_target,
324 given the round-trip time R, and the first loss interval is then set
325 to 1/p.
327 which is replaced with:
329 TFRC does this by finding some value, p, for which the throughput
330 equation in Section 3.1 gives a sending rate within 5% of X_target,
331 given the round-trip time R, and j equal to 1. The first loss
332 interval is then set to 1/p.
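One way to perform the search for p described above is bisection, relying on the computed rate decreasing as p grows. The sketch below is illustrative only (the function names are invented here); it inlines the Section 3 algorithm with j set to 1, as required for initializing the first loss interval:

```python
import math

def x_bps(p, s, R, b, t_RTO, N, j=1):
    # Section 3 algorithm, valid for 0 < p < 1.
    if N < 12:
        af = N * (1 - (1 - 1.0/N)**j)
    else:
        af = j
    af = max(min(af, math.ceil(N)), 1)
    a = p*b*af*(24*N**2 + p*b*af*(N - 2*af)**2)
    x = (af*p*b*(2*af - N) + math.sqrt(a)) / (6*N**2*p)
    z = t_RTO*(1 + 32*p**2)/(1 - p)
    q = min(2*j*b*z/(R*(1 + 3*N/j)*x**2), N*z/(x*R), N)
    return ((1 - q/N)/(p*x*R) + q/(z*(1 - p)))*s

def first_loss_interval(X_target, s, R, b, t_RTO, N):
    # Bisect for the p that yields X_target, then return 1/p.
    lo, hi = 1e-9, 0.999
    for _ in range(200):
        mid = (lo + hi) / 2
        if x_bps(mid, s, R, b, t_RTO, N) > X_target:
            lo = mid      # rate too high -> a larger p is needed
        else:
            hi = mid
    return 1 / mid
```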
334 The second-to-last paragraph in section 6.3.1 ends with:
336 Thus, the TFRC receiver calculates the loss interval that would be
337 required to produce the target rate X_target of 0.5/R packets per
338 second, for the round-trip time R, and uses this synthetic loss
339 interval for the first loss interval. The TFRC receiver uses 0.5/R
340 packets per second as the minimum value for X_target when
341 initializing the first loss interval.
343 which is replaced with:
345 Thus, the TFRC receiver calculates the loss interval that would be
346 required to produce the target rate X_target of 0.5/R packets per
347 second, for the round-trip time R, and for j equal to 1. This
348 synthetic loss interval is used for the first loss interval. The TFRC
349 receiver uses 0.5/R packets per second as the minimum value for
350 X_target when initializing the first loss interval.
352 2.5. Section 8 of RFC 5348
354 Section 8.1 explains details about calculating the original TCP
355 throughput equation, which was replaced with a new algorithm in this
356 document. It is therefore obsolete.
358 2.6. Appendix A of RFC 5348
360 This section provides a terminology list for TFRC, which is extended
361 as follows:
363 N: number of emulated TFRC flows.
365 j: number of packets lost in a loss event.
367 3. Usage Considerations
369 The "weighted fairness" service provided by a protocol using MulTFRC
370 is quite different from the service provided by traditional Internet
371 transport protocols. This section intends to answer some questions
372 that this new service may raise.
374 3.1. Which applications could use MulTFRC?
376 Like TFRC, MulTFRC is suitable for applications that require a
377 smoother sending rate than standard TCP. Since it is likely that
378 these would be multimedia applications, TFRC has largely been
379 associated with them (and [RFC5348] mentions "streaming media" as an
380 example). Since timely transmission is often more important for them
381 than reliability, multimedia applications usually do not keep
382 retransmitting packets until their successful delivery is ensured.
383 Accordingly, TFRC usage was specified for the Datagram Congestion
384 Control Protocol (DCCP) [RFC4342], but not for reliable data
385 transfers.
387 MulTFRC, on the other hand, provides an altogether different service.
388 For some applications, a smoother sending rate may not be
389 particularly desirable but might also not be considered harmful,
390 while the ability to emulate the congestion control of N flows may be
391 useful for them. This could include reliable transfers such as the
392 transmission of files. Possible reasons to use MulTFRC for file
393 transfers include the assignment of priorities according to user
394 preferences, increased efficiency with N > 1, the implementation of
395 low-priority "scavenger" services and resource pooling [Wis+08].
397 3.2. Setting N
399 N MUST be set at the beginning of a transfer; it MUST NOT be changed
400 while a transfer is ongoing. The effects of changing N during the
401 lifetime of a MulTFRC session on the dynamics of the mechanism are
402 yet to be investigated; in particular, it is unclear how often N
403 could safely be changed, and how "safely" should be defined in this
404 context. Further research is required to answer these questions.
406 N is a positive floating point number which can also take values
407 between 0 and 1, making MulTFRC applicable as a mechanism for what
408 has been called a "Lower-than-Best-Effort" (LBE) service. Since it
409 does not reduce its sending rate early as delay increases, as some
410 alternative proposals for such a service do (e.g. TCP-LP [Kuz+06],
411 TCP Nice [Ven+02] or 4CP [Liu+07]), it can probably be expected to be
412 more aggressive than these mechanisms if they share a bottleneck at
413 the same time. This, however, also means that MulTFRC is less likely
414 to be prone to starvation. Values between 0 and 1 could also be
415 useful if MulTFRC is used across multiple paths to realize resource
416 pooling [Wis+08].
418 Setting N to 1 is also possible. In this case, the only difference
419 between TFRC and MulTFRC is that the underlying model of TFRC assumes
420 that all remaining packets following a dropped packet in a "round"
421 (less than one round-trip time apart) are also dropped, whereas the
422 underlying model of MulTFRC does not have this assumption. Whether
423 it is correct or not depends on the specific network situation; large
424 windows and other queuing schemes than Drop-Tail make it less likely
425 for the assumption to match reality. This document does not make any
426 recommendation about which mechanism to use if only one flow is
427 desired.
429 Since TCP has been extensively studied, and the aggression of its
430 congestion control mechanism is emulated by TFRC, we can look at the
431 behavior of a TCP aggregate in order to find a reasonable upper limit
432 for N in MulTFRC.  From [Alt+06], N TCPs (assuming non-synchronized
433 loss events across connections) achieve a bottleneck link utilization
434 of roughly 100-100/(1+3*N) percent.  This means that a single flow can
435 only achieve 75% utilization, whereas 3 flows already achieve 90%.
436 The theoretical gain that can be achieved by adding a flow declines
437 with the total number of flows - e.g., while going from 1 to 2 flows
438 is a 14.3% performance gain, the gain becomes less than 1% beyond 6
439 flows (which already achieve 95% link utilization). Since the link
440 utilization of MulTFRC can be expected to be roughly the same as the
441 link utilization of multiple TCPs, the approximation above also holds
442 for MulTFRC. Thus, setting N to a much larger value than the values
443 mentioned above will only yield a marginal benefit in isolation but
444 can significantly affect other traffic. Therefore, the maximum value
445 that a user can set for MulTFRC SHOULD NOT exceed 6.
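The approximation from [Alt+06] is easy to evaluate. A small illustrative sketch, not part of the specification:

```python
def utilization(N):
    # Approximate bottleneck utilization (percent) of N parallel TCP
    # flows with non-synchronized loss events, per the [Alt+06] model.
    return 100 - 100 / (1 + 3*N)
```

This reproduces the figures quoted above: one flow reaches 75%, three flows 90%, and six flows roughly 95%, with the relative gain per added flow shrinking quickly.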
447 Note that the model in [Alt+06], and hence the above discussion,
448 considers the long-term steady-state behavior of TCP, which may not
449 always be seen when the bandwidth*delay product is very large
450 [RFC3649]. This is due to TCP's slow congestion window growth in the
451 congestion avoidance phase. While a MulTFRC flow with N > 1 can
452 generally be expected to outperform a single standard TCP flow if N
453 is large enough, such usage of MulTFRC is not recommended as a fix to
454 the problem of saturating very large bandwidth*delay product paths:
455 in order to always achieve good bottleneck utilization with MulTFRC
456 under such conditions, N would have to be a function of the
457 bandwidth*delay product. In other words, the mechanism does not
458 scale with bandwidth and delay; very large bandwidth or delay values
459 may require very large values for N, leading to a behavior which is
460 overly aggressive but possibly worse in terms of performance than
461 mechanisms such as HighSpeed TCP [RFC3649], which are specifically
462 designed for such situations. The same argument applies for running
463 multiple TCP flows, as in [All03].
465 4. Security Considerations
467 It is well known that a single uncontrolled UDP flow can cause
468 significant harm to a large number of TCP flows that share the same
469 bottleneck. This potential danger is due to the total lack of
470 congestion control in UDP. Because this problem is well known, and
471 because UDP is easy to detect, UDP traffic will often be rate limited
472 by service providers.
474 If MulTFRC is used within a protocol such as DCCP, which will
475 normally not be considered harmful and will therefore typically not
476 be rate-limited, its tunable aggression could theoretically make it
477 possible to use it for a Denial-of-Service (DoS) attack. In order to
478 avoid such usage, the maximum value of N MUST be restricted. If, as
479 recommended in this document, the maximum value for N is restricted
480 to 6, the impact of MulTFRC on TCP is roughly the same as the impact
481 of 6 TCP flows would be.  It is clear that the combined congestion
482 control behavior of 6 TCPs is far from constituting such an attack.
484 With transport protocols such as TCP, SCTP or DCCP, users can already
485 be more aggressive than others by opening multiple connections. If
486 MulTFRC is used within a transport protocol, this effect becomes more
487 pronounced - e.g., 2 connections with N set to 6 for each of them
488 roughly exhibit the same congestion control behavior as 12 TCP flows.
489 The N limit SHOULD therefore be implemented as a system-wide
490 parameter such that the sum of the N values of all MulTFRC
491 connections does not exceed it. Alternatively, the number of
492 connections that can be opened could be restricted.
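The system-wide limit could, for example, be enforced by a simple admission check at connection setup. The sketch below is purely illustrative (class and method names are invented here, not part of any specified API):

```python
class MulTFRCAdmission:
    # Hypothetical system-wide cap: the sum of N over all active
    # MulTFRC connections must not exceed max_total (6 per this
    # document).
    def __init__(self, max_total=6.0):
        self.max_total = max_total
        self.active = {}

    def open(self, conn_id, n):
        # Admit the connection only if the aggregate stays in bounds.
        if n <= 0:
            raise ValueError("N must be positive")
        if sum(self.active.values()) + n > self.max_total:
            return False
        self.active[conn_id] = n
        return True

    def close(self, conn_id):
        self.active.pop(conn_id, None)
```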
494 5. Acknowledgements
496 This work was partially funded by the EU IST project EC-GIN under the
497 contract STREP FP6-2006-IST-045256.
499 The authors would like to thank the following people whose feedback
500 and comments contributed to this document (in alphabetic order):
501 Lachlan Andrew, Dirceu Cavendish, Soo-Hyun Choi, Wes Eddy.
503 6. References
505 6.1. Normative References
507 [RFC5348] Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP
508 Friendly Rate Control (TFRC): Protocol Specification",
509 RFC 5348, September 2008.
511 6.2. Informative References
513 [All03] Allcock, W., "GridFTP: Protocol Extensions to FTP for the
514 Grid", Open Grid Forum Document GFD.20, 2003.
516 [Alt+06] Altman, E., Barman, D., Tuffin, B., and M. Vojnovic,
517 "Parallel TCP Sockets: Simple Model, Throughput and
518 Validation", Proceedings of Infocom 2006, April 2006.
520 [Bri07] Briscoe, B., "Flow rate fairness: dismantling a religion",
521 ACM SIGCOMM Computer Communication Review vol. 37, no. 2,
522 (April 2007), pp. 63-74, April 2007.
524 [Cro+98] Crowcroft, J. and P. Oechslin, "Differentiated end-to-end
525 Internet services using a weighted proportional fair
526 sharing TCP", ACM SIGCOMM Computer Communication
527 Review vol. 28, no. 3 (July 1998), pp. 53-69, 1998.
529 [Dam+08] Damjanovic, D., Welzl, M., Telek, M., and W. Heiss,
530 "Extending the TCP Steady-State Throughput Equation for
531 Parallel TCP Flows", University of Innsbruck, Institute of
532 Computer Science, DPS NSG Technical Report 2, August 2008.
534 [Dam+09] Damjanovic, D. and M. Welzl, "MulTFRC: Providing Weighted
535 Fairness for Multimedia Applications (and others too!)",
536 ACM SIGCOMM Computer Communication Review vol. 39, issue 9
537 (July 2009), 2009.
539 [Dam10] Damjanovic, D., "Parallel TCP Data Transfers: A Practical
540 Model and its Application", Ph.D. thesis, University of
541 Innsbruck, Austria, February 2010.
544 [Hac+04] Hacker, T., Noble, B., and B. Athey, "Improving Throughput
545 and Maintaining Fairness using Parallel TCP", Proceedings
546 of Infocom 2004, March 2004.
548 [Kuo+08] Kuo, F. and X. Fu, "Probe-Aided MulTCP: an aggregate
549 congestion control mechanism", ACM SIGCOMM Computer
550 Communication Review vol. 38, no. 1 (January 2008), pp.
551 17-28, 2008.
553 [Kuz+06] Kuzmanovic, A. and E. Knightly, "TCP-LP: low-priority
554 service via end-point congestion control", IEEE/ACM
555 Transactions on Networking (ToN) Volume 14, Issue 4, pp.
556 739-752, August 2006.
559 [Liu+07] Liu, S., Vojnovic, M., and D. Gunawardena, "Competitive
560 and Considerate Congestion Control for Bulk Data
561 Transfers", Proceedings of IWQoS 2007, June 2007.
563 [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
564 S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
565 Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
566 S., Wroclawski, J., and L. Zhang, "Recommendations on
567 Queue Management and Congestion Avoidance in the
568 Internet", RFC 2309, April 1998.
570 [RFC3649] Floyd, S., "HighSpeed TCP for Large Congestion Windows",
571 RFC 3649, December 2003.
573 [RFC4342] Floyd, S., Kohler, E., and J. Padhye, "Profile for
574 Datagram Congestion Control Protocol (DCCP) Congestion
575 Control ID 3: TCP-Friendly Rate Control (TFRC)", RFC 4342,
576 March 2006.
578 [Ven+02] Venkataramani, A., Kokku, R., and M. Dahlin, "TCP Nice: a
579 mechanism for background transfers", Proceedings of
580 OSDI '02, 2002.
582 [Wis+08] Wischik, D., Handley, M., and M. Braun, "The Resource
583 Pooling Principle", ACM Computer Communication Review
584 Volume 38, Issue 5 (October 2008), October 2008.
587 Appendix A. X_Bps implementation considerations
589 In this appendix we show why the algorithm for calculating X_Bps in
590 Section 2.1 contains integer and floating point arithmetic that will
591 not give underflow, overflow or rounding errors that will adversely
592 affect the result of the algorithm. Note that the algorithm is not
593 invoked when p == 1. p is computed in section 5.4 of [RFC5348]. If p
594 is equal to 1 there is exactly one packet in each loss interval (and
595 this will be an ECN-marked packet).
597 Assuming that the rest of the parameters are not outside their
598 conceivable values, the calculation of X_Bps in Section 2.1 could
599 lead to arithmetic errors or large imprecision only if p is very
600 close to 1. However this will never happen because p is calculated
601 as 1/I_mean in section 5.4 of [RFC5348]. The algorithm that
602 calculates I_mean does so by a weighted average of the number of
603 packets in the last n loss events. If n == 8 and the weights are set
604 as defined in [RFC5348], the smallest possible value of I_mean will
605 be 1.033333. The reason for this is that the number of packets in a
606 loss event is a positive integer and the smallest value (not equal to
607 1) is found when all recent intervals are of length 1, but the oldest
608 interval (number 8 in this case) is of length two. All other sizes
609 of loss intervals and smaller values of n will give higher values for
610 I_mean and hence lower values for p.
612 Below we analyze the algorithm that calculates X_Bps, and see that it
613 will always execute without any overflow, underflows or large
614 rounding errors. In this analysis we assume that no parameter has an
615 improper value. We say that a calculation is "sound" when no
616 overflow, underflow or significant rounding may occur during the
617 calculation.
619 In the first lines of the algorithm, af is calculated, and it is easy
620 to see that the calculation is sound and that the value of af will
621 end up between 1 and N+1. In the calculation of a, N may be equal to
622 or close to 2*af, and then the expression (p*b*af*(N-2*af)^2) may
623 evaluate to a number very close to 0. However, (24*N^2) is added,
624 and hence the calculation of a is sound, as the values of p, b and N
625 are neither extremely small nor extremely large. Note that a will be
626 a smaller value when p is a smaller value.
628 When x is calculated, there is no problem in taking the square root
629 of a and adding it to (af*p*b*(2*af-N)), which again might be a very
630 small value.  The final calculation of x involves a division by
631 (6*N^2*p).  However, when p is small, the dividend is also not large
632 (it shrinks with p through a), hence the calculation of x is sound
633 and the result will be neither a very large nor a very small number.
634 However, we need to show that x is neither 0 nor negative: otherwise
635 the algorithm would attempt a division by 0 or, if x were negative,
636 the calculation of X_Bps could turn out to be negative.
638 We show that x is a positive rational number (and not 0) by defining
639 W=p*b*af, Y=2*af-N and D=6*N^2*p. Then the assignment to x can be
640 written as x= (W*Y + sqrt(W*24*N^2 + W^2 * (-Y)^2 )) / D. If we
641 subtract W*24*N^2 from the argument to the sqrt-function, we get an
642 expression that is obviously smaller, hence after the assignment to
643 x, this inequality holds: x > (W*Y + sqrt(W^2 * (-Y)^2) ) / D.
644 Simplifying the right hand side gives: x > (W*Y - W*Y) / D, that is:
645 x > 0.
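The inequality can also be checked numerically. The following illustrative sketch (parameter values chosen here) evaluates x for a sweep of parameter combinations and confirms that it stays strictly positive:

```python
import math
import itertools

def x_of(p, b, j, N):
    # The af/a/x steps of the Section 2.1 algorithm.
    if N < 12:
        af = N * (1 - (1 - 1.0/N)**j)
    else:
        af = j
    af = max(min(af, math.ceil(N)), 1)
    a = p*b*af*(24*N**2 + p*b*af*(N - 2*af)**2)
    return (af*p*b*(2*af - N) + math.sqrt(a)) / (6*N**2*p)

def x_always_positive():
    # Sweep plausible ranges of p, b, j and N.
    combos = itertools.product([0.001, 0.01, 0.1, 0.5, 0.96],  # p
                               [1, 2],                          # b
                               [1, 2, 4, 8],                    # j
                               [0.5, 1, 2, 6, 12])              # N
    return all(x_of(p, b, j, N) > 0 for p, b, j, N in combos)
```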
647 From this argument it is also clear that x is not close to 0, so that
648 the divisions by x (or x*R) that we will see later in the algorithm
649 will give a sound result. Then z is found by division by 1-p, which
650 is sound when p is not close to 1 (which we have argued above it is
651 not). Below we also need the fact that z is not close to 0, which is
652 true because neither t_RTO nor p is close to 0.
654 Since neither R, j nor x is 0, or close to 0, the calculation of the
655 first parameter to the min function is sound. The second parameter
656 of the min function is sound because x*R is not a value close to 0,
657 and hence the execution of all the three arguments to the min
658 function is sound.
660 Finally X_Bps is calculated and since N, p, x, R, z are not close to
661 zero, and p is not close to one (and hence 1-p is not close to 0),
662 this final calculation is also sound.
664 Authors' Addresses
666 Michael Welzl
667 University of Oslo
668 PO Box 1080 Blindern
669 Oslo, N-0316
670 Norway
672 Phone: +47 22 85 24 20
673 Email: michawe@ifi.uio.no
675 Dragana Damjanovic
676 University of Innsbruck
677 Technikerstr. 21 A
678 Innsbruck, A-6020
679 Austria
681 Phone: +43 512 507 96803
682 Email: dragana.damjanovic@uibk.ac.at
683 Stein Gjessing
684 University of Oslo
685 PO Box 1080 Blindern
686 Oslo, N-0316
687 Norway
689 Phone: +47 22 85 24 44
690 Email: steing@ifi.uio.no