Internet Engineering Task Force                                 M. Welzl
Internet-Draft                                        University of Oslo
Intended status: Experimental                              D. Damjanovic
Expires: January 6, 2011                         University of Innsbruck
                                                             S. Gjessing
                                                      University of Oslo
                                                            July 5, 2010

                  MulTFRC: TFRC with weighted fairness
                      draft-irtf-iccrg-multfrc-00.txt

Abstract

   This document specifies the MulTFRC congestion control mechanism.
   MulTFRC is derived from TFRC and can be implemented by carrying out
   a few small and simple changes to the original mechanism.  Its
   behavior differs from TFRC in that it emulates a number of TFRC
   flows with more flexibility than what would be practical or even
   possible using multiple real TFRC flows.  Additionally, MulTFRC
   better preserves the original features of TFRC than multiple real
   TFRCs do.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 6, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Specification
     2.1.  Section 3 of RFC 5348
     2.2.  Section 4 of RFC 5348
     2.3.  Section 5 of RFC 5348
     2.4.  Section 6 of RFC 5348
     2.5.  Section 8 of RFC 5348
     2.6.  Appendix A of RFC 5348
   3.  Usage Considerations
     3.1.  Which applications could use MulTFRC?
     3.2.  Setting N
   4.  Security Considerations
   5.  Acknowledgements
   6.  References
     6.1.  Normative References
     6.2.  Informative References
   Appendix A.  X_Bps implementation considerations
   Authors' Addresses

1.  Introduction

   "TCP-friendliness", the requirement for a flow to behave under
   congestion like a flow produced by a conformant TCP (introduced
   under the name "TCP-compatibility" in [RFC2309]), has been called
   into question in recent years (cf. [Bri07]).
   As an illustrative example,
   consider the fact that not all data transfers are of equal
   importance to a user.  A user may therefore want to assign different
   priorities to different flows between two hosts, but TCP(-friendly)
   congestion control would always let these flows use the same sending
   rate.  Users and their applications already bypass TCP-friendliness
   in practice: since multiple TCP flows can better saturate a
   bottleneck than a single one, some applications open multiple
   connections as a simple workaround.  The "GridFTP" [All03] protocol
   explicitly provides this function as a performance improvement.

   Some research efforts were therefore carried out to develop
   protocols where a weight can directly be applied to the congestion
   control mechanism, allowing a flow to be as aggressive as a number
   of parallel TCP flows would be.  The first, and best known, such
   protocol is MulTCP [Cro+98], which emulates N TCPs in a rather
   simple fashion.  Improved versions were later published, e.g.
   Stochastic TCP [Hac+04] and Probe-Aided (PA-)MulTCP [Kuo+08].  These
   protocols could be called "N-TCP-friendly", i.e. as TCP-friendly as
   N TCPs.

   MulTFRC, defined in this document, does with TFRC [RFC5348] what
   MulTCP does with TCP.  In [Dam+09], it was shown that MulTFRC
   achieves its goal of emulating N flows better than MulTCP (and
   improved versions of it) and has a number of other benefits.  For
   instance, MulTFRC with N=2 is more reactive than two real TFRC flows
   are, and it has a smoother sending rate than two real TFRC flows do.
   Moreover, since it is only one mechanism, a protocol that uses
   MulTFRC can send a single data stream with the congestion control
   behavior of multiple data streams without the need to split the data
   and spread it over separate connections.
   Depending on the protocol
   in use, N real TFRC flows can also be expected to have N times the
   overhead of a MulTFRC flow with the same value of N for, e.g.,
   connection setup and teardown.

   The core idea of TFRC is to achieve TCP-friendliness by explicitly
   calculating an equation which approximates the steady-state
   throughput of TCP and sending as much as the calculation says.  The
   core idea of MulTFRC is to replace this equation in TFRC with the
   algorithm from [Dam+08], which approximates the steady-state
   throughput of N TCP flows.  MulTFRC can be implemented via a few
   simple changes to the TFRC code.  It is therefore defined here by
   specifying how it differs from the TFRC specification [RFC5348].

2.  Specification

   This section lists the changes to [RFC5348] that must be applied to
   turn TFRC into MulTFRC.  The small number of changes ensures that
   many original properties of a single TFRC flow are preserved, which
   is often the most appropriate choice (e.g. it would probably not
   make sense for a MulTFRC flow to detect a data-limited interval
   differently than a single TFRC flow would).  It also makes MulTFRC
   easy to understand and implement.  Experiments have shown that these
   changes are enough to attain the desired effect.

2.1.  Section 3 of RFC 5348

   While the TCP throughput equation requires the loss event rate,
   round-trip time and segment size as input, the algorithm to be used
   for MulTFRC additionally needs the number of packets lost in a loss
   event.
   The equation, specified in [RFC5348] as

                                     s
      X_Bps = --------------------------------------------------------
              R*sqrt(2*b*p/3) + (t_RTO * (3*sqrt(3*b*p/8)*p*(1+32*p^2)))

   is replaced with the following algorithm, which returns X_Bps, the
   average transmit rate of N TCPs in bytes per second:

      If (N < 12) {
         af = N * (1-(1-1/N)^j);
      } Else {
         af = j;
      }
      af = max(min(af, ceil(N)), 1);
      a = p*b*af*(24*N^2 + p*b*af*(N-2*af)^2);
      x = (af*p*b*(2*af-N) + sqrt(a)) / (6*N^2*p);
      q = min(2*j*b/(x*(1+3*N/j)), N);
      z = t_RTO*(1+32*p^2)/(1-p);
      q = min(q*z/(x*R), N);
      X_Bps = ((1-q/N)/(p*x*R) + q/(z*(1-p))) * s;

   Where:

      s is the segment size in bytes (excluding IP and transport
      protocol headers).

      R is the round-trip time in seconds.

      b is the maximum number of packets acknowledged by a single TCP
      acknowledgement.

      p is the loss event rate, between 0 and 1.0, of the number of
      loss events as a fraction of the number of packets transmitted.

      j is the number of packets lost in a loss event.

      t_RTO is the TCP retransmission timeout value in seconds.

      N is the number of TFRC flows that MulTFRC should emulate.  N is
      a positive rational number; a discussion of appropriate values
      for this parameter, and reasons for choosing them, is provided in
      Section 3.2.

      ceil(N) gives the smallest integer greater than or equal to N.

      x, af, a, z and q are temporary floating point variables.

   Appendix A contains an argument that shows why the calculations in
   the algorithm will not overflow, underflow or produce significant
   rounding errors.

   Section 3.1 of [RFC5348] contains a recommendation for setting a
   parameter called t_RTO, and a simplification of the equation as a
   result of setting this parameter in a specific way.  The simplified
   equation is irrelevant here, as the original equation has been
   replaced by the algorithm above.
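   For implementers, the algorithm above can be transcribed almost
   line-by-line.  The following Python sketch is illustrative only (the
   function name and parameter values in the comments are not part of
   this specification); it assumes 0 < p < 1, since the p == 1 case is
   handled separately in Section 2.2:

```python
import math

def multfrc_x_bps(s, R, b, p, j, t_RTO, N):
    """Sketch of the X_Bps algorithm from Section 2.1.

    Assumes 0 < p < 1 (p == 1 is handled separately), j >= 1, N > 0.
    """
    # Approximate number of emulated flows affected by a loss event.
    if N < 12:
        af = N * (1 - (1 - 1/N) ** j)
    else:
        af = j
    af = max(min(af, math.ceil(N)), 1)
    # Expression under the square root.
    a = p * b * af * (24 * N**2 + p * b * af * (N - 2*af) ** 2)
    x = (af * p * b * (2*af - N) + math.sqrt(a)) / (6 * N**2 * p)
    # Timeout-related terms.
    q = min(2 * j * b / (x * (1 + 3*N/j)), N)
    z = t_RTO * (1 + 32 * p**2) / (1 - p)
    q = min(q * z / (x * R), N)
    return ((1 - q/N) / (p * x * R) + q / (z * (1 - p))) * s
```

   As a plausibility check with illustrative values (s=1460 bytes,
   R=0.1 s, b=1, j=1, t_RTO=0.4 s, p=0.01), the rate returned for N=2
   comes out at roughly twice the rate returned for N=1, as intended.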
   Section 3.1 of [RFC5348] also contains a discussion
   of the parameter b for delayed acknowledgements and concludes that
   the use of b=1 is RECOMMENDED.  This is also the case for MulTFRC.

   Section 3.2.2 of [RFC5348] specifies the contents of feedback
   packets.  In addition to the information listed there, a MulTFRC
   feedback packet also carries j, the number of packets lost in a loss
   event.

2.2.  Section 4 of RFC 5348

   The procedure for updating the allowed sending rate in section 4.3
   of [RFC5348] ("action 4") contains the statement:

      Calculate X_Bps using the TCP throughput equation.

   which is replaced with the statement:

      If (p==1) {
         X_Bps = s/t_mbi;
      } Else {
         Calculate X_Bps using the algorithm defined in section 3.
      }

2.3.  Section 5 of RFC 5348

   Section 5.2 of [RFC5348] explains how a lost packet that starts a
   new loss event should be distinguished from a lost packet that is a
   part of the previous loss event interval.  Here, additionally the
   number of packets lost in a loss event is counted, and therefore
   this section is extended with:

      If S_new is a part of the current loss interval, LP_0 (the number
      of lost packets in the current interval) is increased by 1.  On
      the other hand, if S_new starts a new loss event, LP_0 is set to
      1.

   Section 5.4 of [RFC5348] contains the algorithm for calculating the
   average loss interval that is needed for calculation of the loss
   event rate, p.  MulTFRC also requires the number of lost packets in
   a loss event, j.  In [RFC5348] the calculation of the average loss
   interval is done using a filter that weights the n most recent loss
   event intervals, and setting n to 8 is RECOMMENDED.  The same
   algorithm is used here for calculating the average loss interval.
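   The effect of this filter can be sketched as follows.  This is a
   simplified illustration only: the helper names and sample values are
   not part of any specification, and the selection between the two
   interval histories (I_tot0 vs. I_tot1) from [RFC5348], which the
   full algorithms below retain, is omitted here for brevity.

```python
# Weights for the n = 8 most recent loss intervals, as recommended in
# Section 5.4 of RFC 5348 (newest interval first).
WEIGHTS = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]

def weighted_average(values, weights=WEIGHTS):
    """Weighted moving average over the most recent loss intervals."""
    used = weights[:len(values)]
    return sum(v * w for v, w in zip(values, used)) / sum(used)

# Illustrative history: loss intervals of 100 packets each, with one
# packet lost per loss event.
intervals = [100] * 8    # packets per loss interval, newest first
losses = [1] * 8         # packets lost in each of those intervals

p = 1.0 / weighted_average(intervals)   # loss event rate
j = weighted_average(losses)            # avg. packets lost per event
```

   The point of the extension below is that j is obtained by running
   the per-interval loss counts through the same filter that already
   produces the average loss interval for p.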
   For the number of lost packets in a loss event interval, j, the
   weighted average number of lost packets in the n most recent loss
   intervals is taken and the same filter is used.

   For calculating the average number of packets lost in a loss event
   interval we use the same loss intervals as for the p calculation.
   Let LP_0 to LP_k be the number of lost packets in the k most recent
   loss intervals.  The algorithm for calculating I_mean in Section 5.4
   of [RFC5348] (page 23) is extended by adding, after the last line
   ("p = 1 / I_mean;"):

      LP_tot = 0;
      If (I_tot0 > I_tot1) {
         for (i = 0 to k-1) {
            LP_tot = LP_tot + (LP_i * w_i);
         }
      } Else {
         for (i = 1 to k) {
            LP_tot = LP_tot + (LP_i * w_(i-1));
         }
      }
      j = LP_tot/W_tot;

   In section 5.5 of [RFC5348] (page 25), the algorithm that ends with
   "p = min(W_tot0/I_tot0, W_tot1/I_tot1);" is extended by adding:

      LP_tot = 0;
      If (I_tot0 > I_tot1) {
         for (i = 0 to k-1) {
            LP_tot = LP_tot + (LP_i * w_i * DF_i * DF);
         }
         j = LP_tot/W_tot0;
      } Else {
         for (i = 1 to k) {
            LP_tot = LP_tot + (LP_i * w_(i-1) * DF_i);
         }
         j = LP_tot/W_tot1;
      }

2.4.  Section 6 of RFC 5348

   The steps to be carried out by the receiver when a packet is
   received in section 6.1 of [RFC5348] ("action 4") contain the
   statement:

      3) Calculate p:  Let the previous value of p be p_prev.
         Calculate the new value of p as described in Section 5.

   which is replaced with the statement:

      3) Calculate p and j:  Let the previous values of p and j be
         p_prev and j_prev.  Calculate the new values of p and j as
         described in Section 5.

   The steps to be carried out by the receiver upon expiration of the
   feedback timer in section 6.2 of [RFC5348] ("action 1") contain the
   statement:

      1) Calculate the average loss event rate using the algorithm
         described in Section 5.
   which is replaced with:

      1) Calculate the average loss event rate and average number of
         lost packets in a loss event using the algorithm described in
         Section 5.

   This statement is added at the beginning of the list of initial
   steps to take when the first packet is received, in section 6.3 of
   [RFC5348]:

      o  Set j = 0.

   Section 6.3.1 of [RFC5348] discusses how the loss history is
   initialized after the first loss event.  TFRC approximates the rate
   to be the maximum value of X_recv so far, assuming that a higher
   rate introduces loss.  Therefore j for this rate is approximated by
   1, and the number of packets lost in the first interval is set to 1.
   This is accomplished by the following change.  The first sentence of
   the fourth paragraph (in section 6.3.1) is:

      TFRC does this by finding some value, p, for which the throughput
      equation in Section 3.1 gives a sending rate within 5% of
      X_target, given the round-trip time R, and the first loss
      interval is then set to 1/p.

   which is replaced with:

      TFRC does this by finding some value, p, for which the throughput
      equation in Section 3.1 gives a sending rate within 5% of
      X_target, given the round-trip time R, and j equal to 1.  The
      first loss interval is then set to 1/p.

   The second-to-last paragraph in section 6.3.1 ends with:

      Thus, the TFRC receiver calculates the loss interval that would
      be required to produce the target rate X_target of 0.5/R packets
      per second, for the round-trip time R, and uses this synthetic
      loss interval for the first loss interval.  The TFRC receiver
      uses 0.5/R packets per second as the minimum value for X_target
      when initializing the first loss interval.

   which is replaced with:

      Thus, the TFRC receiver calculates the loss interval that would
      be required to produce the target rate X_target of 0.5/R packets
      per second, for the round-trip time R, and for j equal to 1.
      This synthetic loss interval is used for the first loss interval.
      The TFRC receiver uses 0.5/R packets per second as the minimum
      value for X_target when initializing the first loss interval.

2.5.  Section 8 of RFC 5348

   Section 8.1 of [RFC5348] explains details about calculating the
   original TCP throughput equation, which was replaced with a new
   algorithm in this document.  It is therefore obsolete.

2.6.  Appendix A of RFC 5348

   This section provides a terminology list for TFRC, which is extended
   as follows:

      N: number of emulated TFRC flows.

      j: number of packets lost in a loss event.

3.  Usage Considerations

   The "weighted fairness" service provided by a protocol using MulTFRC
   is quite different from the service provided by traditional Internet
   transport protocols.  This section intends to answer some questions
   that this new service may raise.

3.1.  Which applications could use MulTFRC?

   Like TFRC, MulTFRC is suitable for applications that require a
   smoother sending rate than standard TCP provides.  Since it is
   likely that these would be multimedia applications, TFRC has largely
   been associated with them (and [RFC5348] mentions "streaming media"
   as an example).  Since timely transmission is often more important
   for them than reliability, multimedia applications usually do not
   keep retransmitting packets until their successful delivery is
   ensured.  Accordingly, TFRC usage was specified for the Datagram
   Congestion Control Protocol (DCCP) [RFC4342], but not for reliable
   data transfers.

   MulTFRC, on the other hand, provides an altogether different
   service.  For some applications, a smoother sending rate may not be
   particularly desirable but might also not be considered harmful,
   while the ability to emulate the congestion control of N flows may
   be useful for them.  This could include reliable transfers such as
   the transmission of files.
   Possible reasons to use MulTFRC for file
   transfers include the assignment of priorities according to user
   preferences, increased efficiency with N > 1, the implementation of
   low-priority "scavenger" services, and resource pooling [Wis+08].

3.2.  Setting N

   N MUST be set at the beginning of a transfer; it MUST NOT be changed
   while a transfer is ongoing.  The effects of changing N during the
   lifetime of a MulTFRC session on the dynamics of the mechanism are
   yet to be investigated; in particular, it is unclear how often N
   could safely be changed, and how "safely" should be defined in this
   context.  Further research is required to answer these questions.

   N is a positive floating point number which can also take values
   between 0 and 1, making MulTFRC applicable as a mechanism for what
   has been called a "Lower-than-Best-Effort" (LBE) service.  Since
   MulTFRC does not reduce its sending rate early as delay increases,
   as some alternative proposals for such a service do (e.g. TCP-LP
   [Kuz+06], TCP Nice [Ven+02] or 4CP [Liu+07]), it can probably be
   expected to be more aggressive than these mechanisms when sharing a
   bottleneck with them.  This, however, also means that MulTFRC is
   less likely to be prone to starvation.  Values between 0 and 1 could
   also be useful if MulTFRC is used across multiple paths to realize
   resource pooling [Wis+08].

   Setting N to 1 is also possible.  In this case, the only difference
   between TFRC and MulTFRC is that the underlying model of TFRC
   assumes that all remaining packets following a dropped packet in a
   "round" (less than one round-trip time apart) are also dropped,
   whereas the underlying model of MulTFRC does not make this
   assumption.  Whether the assumption is correct depends on the
   specific network situation; large windows and queuing schemes other
   than Drop-Tail make it less likely for the assumption to match
   reality.
   This document does not make any
   recommendation about which mechanism to use if only one flow is
   desired.

   Since TCP has been extensively studied, and the aggression of its
   congestion control mechanism is emulated by TFRC, we can look at the
   behavior of a TCP aggregate in order to find a reasonable upper
   limit for N in MulTFRC.  From [Alt+06], N TCPs (assuming non-
   synchronized loss events across connections) can achieve a
   bottleneck link utilization of roughly 100-100/(1+3N) percent.  This
   means that a single flow can only achieve 75% utilization, whereas 3
   flows already achieve 90%.  The theoretical gain that can be
   achieved by adding a flow declines with the total number of flows -
   e.g., while going from 1 to 2 flows is a 14.3% performance gain, the
   gain becomes less than 1% beyond 6 flows (which already achieve 95%
   link utilization).  Since the link utilization of MulTFRC can be
   expected to be roughly the same as the link utilization of multiple
   TCPs, the approximation above also holds for MulTFRC.  Thus, setting
   N to a much larger value than the values mentioned above will only
   yield a marginal benefit in isolation but can significantly affect
   other traffic.  Therefore, the maximum value that a user can set for
   N in MulTFRC SHOULD NOT exceed 6.

   Note that the model in [Alt+06], and hence the above discussion,
   considers the long-term steady-state behavior of TCP, which may not
   always be seen when the bandwidth*delay product is very large
   [RFC3649].  This is due to TCP's slow congestion window growth in
   the congestion avoidance phase.
   While a MulTFRC flow with N > 1 can
   generally be expected to outperform a single standard TCP flow if N
   is large enough, such usage of MulTFRC is not recommended as a fix
   for the problem of saturating very large bandwidth*delay product
   paths: in order to always achieve good bottleneck utilization with
   MulTFRC under such conditions, N would have to be a function of the
   bandwidth*delay product.  In other words, the mechanism does not
   scale with bandwidth and delay; very large bandwidth or delay values
   may require very large values for N, leading to behavior which is
   overly aggressive yet possibly worse in terms of performance than
   mechanisms such as HighSpeed TCP [RFC3649], which are specifically
   designed for such situations.  The same argument applies to running
   multiple TCP flows, as in [All03].

4.  Security Considerations

   It is well known that a single uncontrolled UDP flow can cause
   significant harm to a large number of TCP flows that share the same
   bottleneck.  This potential danger is due to the total lack of
   congestion control in UDP.  Because this problem is well known, and
   because UDP is easy to detect, UDP traffic will often be rate-
   limited by service providers.

   If MulTFRC is used within a protocol such as DCCP, which will
   normally not be considered harmful and will therefore typically not
   be rate-limited, its tunable aggression could theoretically make it
   possible to use it for a Denial-of-Service (DoS) attack.  In order
   to avoid such usage, the maximum value of N MUST be restricted.  If,
   as recommended in this document, the maximum value for N is
   restricted to 6, the impact of MulTFRC on TCP is roughly the same as
   the impact of 6 TCP flows would be.  It is clear that the joint
   congestion control behavior of 6 TCPs is far from constituting such
   an attack.
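   The cap of N = 6 follows from the utilization model of Section 3.2.
   As a purely numeric illustration of the 100-100/(1+3N) approximation
   behind that recommendation (the function name is illustrative):

```python
def utilization(n):
    """Approximate bottleneck utilization (percent) of n parallel TCP
    flows, following the model cited from [Alt+06] in Section 3.2."""
    return 100 - 100 / (1 + 3 * n)

for n in (1, 2, 3, 6):
    print(n, round(utilization(n), 1))
# prints:
# 1 75.0
# 2 85.7
# 3 90.0
# 6 94.7
```

   Beyond 6 flows, each additional flow adds less than one percentage
   point of utilization, while the aggregate's impact on competing
   traffic keeps growing.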
   With transport protocols such as TCP, SCTP or DCCP, users can
   already be more aggressive than others by opening multiple
   connections.  If MulTFRC is used within a transport protocol, this
   effect becomes more pronounced - e.g., 2 connections with N set to 6
   for each of them roughly exhibit the same congestion control
   behavior as 12 TCP flows.  The N limit SHOULD therefore be
   implemented as a system-wide parameter such that the sum of the N
   values of all MulTFRC connections does not exceed it.
   Alternatively, the number of connections that can be opened could be
   restricted.

5.  Acknowledgements

   This work was partially funded by the EU IST project EC-GIN under
   the contract STREP FP6-2006-IST-045256.

6.  References

6.1.  Normative References

   [RFC5348]  Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP
              Friendly Rate Control (TFRC): Protocol Specification",
              RFC 5348, September 2008.

6.2.  Informative References

   [All03]    Allcock, W., "GridFTP: Protocol Extensions to FTP for
              the Grid", Open Grid Forum Document GFD.20, 2003.

   [Alt+06]   Altman, E., Barman, D., Tuffin, B., and M. Vojnovic,
              "Parallel TCP Sockets: Simple Model, Throughput and
              Validation", Proceedings of Infocom 2006, April 2006.

   [Bri07]    Briscoe, B., "Flow Rate Fairness: Dismantling a
              Religion", ACM SIGCOMM Computer Communication Review
              vol. 37, no. 2, pp. 63-74, April 2007.

   [Cro+98]   Crowcroft, J. and P. Oechslin, "Differentiated End-to-
              End Internet Services Using a Weighted Proportional Fair
              Sharing TCP", ACM SIGCOMM Computer Communication Review
              vol. 28, no. 3, pp. 53-69, July 1998.

   [Dam+08]   Damjanovic, D., Welzl, M., Telek, M., and W. Heiss,
              "Extending the TCP Steady-State Throughput Equation for
              Parallel TCP Flows", University of Innsbruck, Institute
              of Computer Science, DPS NSG Technical Report 2, August
              2008.

   [Dam+09]   Damjanovic, D. and M. Welzl, "MulTFRC: Providing
              Weighted Fairness for Multimedia Applications (and
              Others Too!)", ACM SIGCOMM Computer Communication Review
              vol. 39, no. 3, July 2009.

   [Hac+04]   Hacker, T., Noble, B., and B. Athey, "Improving
              Throughput and Maintaining Fairness Using Parallel TCP",
              Proceedings of Infocom 2004, March 2004.

   [Kuo+08]   Kuo, F. and X. Fu, "Probe-Aided MulTCP: An Aggregate
              Congestion Control Mechanism", ACM SIGCOMM Computer
              Communication Review vol. 38, no. 1, pp. 17-28, January
              2008.

   [Kuz+06]   Kuzmanovic, A. and E. Knightly, "TCP-LP: Low-Priority
              Service via End-Point Congestion Control", IEEE/ACM
              Transactions on Networking vol. 14, no. 4, pp. 739-752,
              August 2006.

   [Liu+07]   Liu, S., Vojnovic, M., and D. Gunawardena, "Competitive
              and Considerate Congestion Control for Bulk Data
              Transfers", Proceedings of IWQoS 2007, June 2007.

   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B.,
              Deering, S., Estrin, D., Floyd, S., Jacobson, V.,
              Minshall, G., Partridge, C., Peterson, L., Ramakrishnan,
              K., Shenker, S., Wroclawski, J., and L. Zhang,
              "Recommendations on Queue Management and Congestion
              Avoidance in the Internet", RFC 2309, April 1998.

   [RFC3649]  Floyd, S., "HighSpeed TCP for Large Congestion Windows",
              RFC 3649, December 2003.

   [RFC4342]  Floyd, S., Kohler, E., and J. Padhye, "Profile for
              Datagram Congestion Control Protocol (DCCP) Congestion
              Control ID 3: TCP-Friendly Rate Control (TFRC)",
              RFC 4342, March 2006.

   [Ven+02]   Venkataramani, A., Kokku, R., and M. Dahlin, "TCP Nice:
              A Mechanism for Background Transfers", Proceedings of
              OSDI '02, 2002.

   [Wis+08]   Wischik, D., Handley, M., and M. Braun, "The Resource
              Pooling Principle", ACM SIGCOMM Computer Communication
              Review vol. 38, no. 5, October 2008.

Appendix A.  X_Bps implementation considerations

   In this appendix we show why the algorithm for calculating X_Bps in
   Section 2.1 contains integer and floating point arithmetic that
   will not give underflow, overflow or rounding errors that adversely
   affect the result of the algorithm.  Note that the algorithm is not
   invoked when p == 1.  p is computed in section 5.4 of [RFC5348]; if
   p is equal to 1, there is exactly one packet in each loss interval
   (and this will be an ECN-marked packet).

   Assuming that the rest of the parameters are not outside their
   conceivable values, the calculation of X_Bps in Section 2.1 could
   lead to arithmetic errors or large imprecision only if p is very
   close to 1.  However, this will never happen, because p is
   calculated as 1/I_mean in section 5.4 of [RFC5348].  The algorithm
   that calculates I_mean does so by a weighted average of the number
   of packets in the last n loss intervals.  If n == 8 and the weights
   are set as defined in [RFC5348], the smallest possible value of
   I_mean will be 1.033333.  The reason for this is that the number of
   packets in a loss interval is a positive integer, and the smallest
   value (not equal to 1) is found when all recent intervals are of
   length 1 but the oldest interval (number 8 in this case) is of
   length 2.  All other sizes of loss intervals and smaller values of
   n will give higher values for I_mean and hence lower values for p.

   Below we analyze the algorithm that calculates X_Bps and see that
   it will always execute without any overflows, underflows or large
   rounding errors.  In this analysis we assume that no parameter has
   an improper value.  We say that a calculation is "sound" when no
   overflow, underflow or significant rounding error may occur during
   the calculation.
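   The claimed lower bound on I_mean can be checked numerically.  The
   following sketch uses the n = 8 weights from Section 5.4 of
   [RFC5348]; the helper names are illustrative:

```python
# Weights for the 8 most recent loss intervals (RFC 5348, Section
# 5.4), newest first.
WEIGHTS = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]

def i_mean(intervals):
    """Weighted average loss interval length."""
    return sum(i * w for i, w in zip(intervals, WEIGHTS)) / sum(WEIGHTS)

# Worst case with I_mean != 1: all recent intervals of length 1, and
# the oldest interval of length 2.  The weighted sum is 6.2 and the
# total weight is 6, so I_mean is 6.2/6 = 1.033333..., keeping
# p = 1/I_mean strictly below 1 whenever the algorithm is invoked.
worst = [1, 1, 1, 1, 1, 1, 1, 2]
```

   Any longer interval, or a length-2 interval with a larger weight,
   only increases I_mean further.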
   In the first lines of the algorithm, af is calculated, and it is
   easy to see that this calculation is sound and that the value of af
   will end up between 1 and N+1.  In the calculation of a, N may be
   equal to or close to 2*af, and then the expression
   (p*b*af*(N-2*af)^2) may evaluate to a number very close to 0.
   However, (24*N^2) is added, and hence the calculation of a is
   sound, as the values of p, b and N are neither extremely small nor
   extremely large.  Note that a will be a smaller value when p is a
   smaller value.

   When x is calculated, there is no problem in finding the square
   root of a and adding it to (af*p*b*(2*af-N)), which again might be
   a very small value.  The final calculation of x involves a division
   by (6*N^2*p).  However, when p is small, the dividend is not very
   large (dominated by p and a); hence the calculation of x is sound,
   and the result will be neither a very large nor a very small
   number.  However, we need to show that x will not be 0 or negative;
   otherwise the algorithm would attempt a division by 0, or, if x
   were negative, the calculation of X_Bps could turn out to be a
   negative value.

   We show that x is a positive rational number (and not 0) by
   defining W = p*b*af, Y = 2*af-N and D = 6*N^2*p.  Then the
   assignment to x can be written as
   x = (W*Y + sqrt(W*24*N^2 + W^2 * (-Y)^2)) / D.  If we subtract
   W*24*N^2 from the argument to the sqrt function, we get an
   expression that is obviously smaller; hence, after the assignment
   to x, this inequality holds:
   x > (W*Y + sqrt(W^2 * (-Y)^2)) / D.  Simplifying the right-hand
   side gives x > (W*Y - W*Y) / D, that is, x > 0.

   From this argument it is also clear that x is not close to 0, so
   the divisions by x (or x*R) that appear later in the algorithm will
   give a sound result.
   Since neither x nor j is 0, or close to 0,
   when the right-hand side of the first assignment to q is evaluated,
   the calculation of the first parameter to the min function is
   sound.  Then z is found by division by (1-p), which is sound when p
   is not close to 1 (which, as argued above, it is not).  When q is
   assigned a value the second time, the calculation of the first
   parameter of the min function is again sound, because x*R is not a
   value close to 0.

   Finally, X_Bps is calculated, and since p is not close to 1 (and
   hence (1-p) is not close to 0), and since (p*x*R) is not close to
   0, this final calculation is also sound.

Authors' Addresses

   Michael Welzl
   University of Oslo
   PO Box 1080 Blindern
   Oslo  N-0316
   Norway

   Phone: +47 22 85 24 20
   Email: michawe@ifi.uio.no

   Dragana Damjanovic
   University of Innsbruck
   Technikerstr. 21 A
   Innsbruck  A-6020
   Austria

   Phone: +43 512 507 96803
   Email: dragana.damjanovic@uibk.ac.at

   Stein Gjessing
   University of Oslo
   PO Box 1080 Blindern
   Oslo  N-0316
   Norway

   Phone: +47 22 85 24 44
   Email: steing@ifi.uio.no