Multipath TCP                                            Ramin Khalili
INTERNET-DRAFT                                        T-Labs/TU-Berlin
Intended Status: Standards Track                         Nicolas Gast
Expires: April 24, 2014                              Miroslav Popovic
                                                  Jean-Yves Le Boudec
                                                            EPFL-LCA2
                                                     October 21, 2013

                     Performance Issues with MPTCP
                draft-khalili-mptcp-performance-issues-04

Abstract

We show, by measurements over a testbed and by mathematical analysis,
that the current MPTCP suffers from two problems: (P1) Upgrading some
TCP users to MPTCP can reduce the throughput of others without any
benefit to the upgraded users; and (P2) MPTCP users can be
excessively aggressive towards TCP users. We attribute these problems
to the "Linked Increases" Algorithm (LIA) of MPTCP (RFC 6356), and
more specifically, to an excessive amount of traffic transmitted over
congested paths. Our results show that these problems are important
and can be mitigated. We believe that the design of the congestion
control of MPTCP should be improved.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that other
groups may also distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 24, 2014.

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/1id-abstracts.html. The list of Internet-Draft
Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Copyright and License Notice

Copyright (c) 2013 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

Table of Contents

   1  Introduction
      1.1  Requirements Language
      1.2  Terminology
   2  MPTCP's LIA
   3  Testbed Setup
   4  Scenario A: MPTCP can penalize regular TCP users
   5  Scenario B: MPTCP can penalize other MPTCP users
   6  Scenario C: MPTCP is excessively aggressive towards TCP users
   7  Can the suboptimality of MPTCP with LIA be fixed?
   8  Conclusion
   9  References
      9.1  Normative References
      9.2  Informative References
   Authors' Addresses

1 Introduction

Regular TCP uses a window-based congestion-control mechanism to
adjust the transmission rate of users [2]. It always provides a
Pareto-optimal allocation of resources: it is impossible to increase
the throughput of one user without decreasing the throughput of
another or without increasing the congestion cost [5]. It also
guarantees a fair allocation of bandwidth among users, although it
favors connections with lower RTTs [6].

Various mechanisms have been used to build a multipath transport
protocol compatible with regular TCP. Inspired by utility-
maximization frameworks, [7, 8] propose a family of algorithms. These
algorithms tend to use only the best paths available to users and are
optimal in static settings where paths have similar RTTs. In
practice, however, they suffer from several problems [9]. First, they
might fail to quickly detect free capacity, as they do not
sufficiently probe paths with high loss probabilities. Second, they
exhibit flappiness: when multiple good paths are available to a user,
the user randomly flips its traffic between these paths. This is
undesirable, in particular when the achieved rate depends on the RTT,
as with regular TCP.
Because of the issues just mentioned, the congestion-control part of
MPTCP does not follow the algorithms of [7, 8]. Instead, it follows
an ad-hoc design based on the following goals [4]: (1) Improve
throughput: a multipath TCP user should perform at least as well as a
TCP user that uses the best path available to it. (2) Do no harm: a
multipath TCP user should never take up more capacity from any of its
paths than a regular TCP user would. And (3) balance congestion: a
multipath TCP algorithm should balance congestion in the network,
subject to meeting the first two goals.

MPTCP compensates for different RTTs and solves many problems of
multipath transport [10, 11]: it can effectively use the available
bandwidth, it improves throughput and fairness compared to
independent regular TCP flows in many scenarios, and it solves the
flappiness problem.

We show, however, by measurements over our testbed and by
mathematical analysis, that MPTCP still suffers from the following
problems:
(P1) Upgrading some regular TCP users to MPTCP can reduce the
throughput of other users without any benefit to the upgraded users.
This is a symptom of non-Pareto-optimality.
(P2) MPTCP users can be excessively aggressive towards TCP users.
We attribute these problems to the "Linked Increases" Algorithm (LIA)
of MPTCP [4], and specifically to the excessive amount of traffic it
transmits over congested paths.

These problems indicate that LIA fails to fully satisfy its design
goals, especially goal 3. The design of LIA forces a trade-off
between optimal resource pooling and responsiveness: it cannot
provide both at the same time. Hence, to provide good responsiveness,
LIA's current implementation must depart from Pareto-optimality,
which leads to problems (P1) and (P2).

This document provides a number of examples and scenarios (Sections 4
to 6) in which MPTCP with LIA exhibits problems (P1) and (P2). Our
results show that the identified problems with LIA are important.
Moreover, we show in Section 7 that these problems are not inherent
to window-based multipath protocols, but rather stem from the design
of LIA; it is possible to build an alternative to LIA that mitigates
these problems and is as responsive and non-flappy as LIA. Hence, we
believe that the design of the congestion control of MPTCP should be
improved.

1.1 Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [1].

1.2 Terminology

Regular TCP: The standard version of TCP that operates between a
single pair of IP addresses and ports [2].

Multipath TCP: A modified version of regular TCP that allows a user
to spread its traffic across multiple paths.

MPTCP: The proposal for multipath TCP specified in [3].

LIA: The "Linked Increases" Algorithm, the congestion-control
algorithm of MPTCP [4].

OLIA: The Opportunistic "Linked Increases" Algorithm for MPTCP,
proposed in [12] and specified in [16].

RTT: The Round-Trip Time seen by a user.

MSS: The Maximum Segment Size, the largest amount of data that can be
carried in a single TCP segment.

AP: Access Point.

2 MPTCP's LIA

The design of the congestion-control algorithm of MPTCP was described
in detail by Mark Handley et al. at the 77th IETF meeting in Anaheim
[13].
The actual implementation of the congestion-control algorithm, called
the "Linked Increases" Algorithm (LIA), increases the window size w_r
by a quantity proportional to w_r/w_tot for each ACK received on path
r (see [13, slide 3]), where w_tot is the sum of the congestion
windows over all paths available to the user.

LIA is one of a family of multipath congestion-control algorithms
that can be indexed by a parameter phi, 0 <= phi <= 2 [13]:

* phi=0 corresponds to an algorithm that uses only the best paths
  available to a user (with two paths, if the loss probabilities
  satisfy p2 > p1, then w2=0 and w1 > 0). It corresponds to the fully
  coupled algorithm of [7] and is flappy [9].
* phi=2 corresponds to having uncoupled TCP flows on each of the
  paths. It is very responsive and non-flappy, but does not balance
  congestion in the network.
* phi=1 corresponds to LIA and is described in RFC 6356 [4]. It
  provides a compromise between congestion balancing and
  responsiveness.
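To make this description concrete, the following Python sketch
(illustrative only, not a normative specification; window sizes are
in MSS units and the computation of alpha follows RFC 6356 [4]) shows
the coupled increase that LIA applies for each ACK, together with the
unmodified TCP decrease on loss:

   def lia_alpha(windows, rtts):
       # alpha = w_tot * max_r(w_r/rtt_r^2) / (sum_r w_r/rtt_r)^2,
       # as specified in RFC 6356.
       w_tot = sum(windows)
       num = max(w / r ** 2 for w, r in zip(windows, rtts))
       den = sum(w / r for w, r in zip(windows, rtts)) ** 2
       return w_tot * num / den

   def lia_on_ack(windows, rtts, r):
       # Per-ACK increase on path r: min(alpha/w_tot, 1/w_r) MSS.
       w_tot = sum(windows)
       windows[r] += min(lia_alpha(windows, rtts) / w_tot,
                         1.0 / windows[r])

   def lia_on_loss(windows, r):
       # Unmodified TCP behavior on loss: halve the window of path r,
       # never going below 1 MSS (the source of the probing traffic
       # discussed in Section 4).
       windows[r] = max(windows[r] / 2.0, 1.0)

   # Example: two paths, both with an RTT of 150 ms.
   w = [10.0, 10.0]
   lia_on_ack(w, [0.15, 0.15], 0)   # w[0] grows by min(alpha/20, 0.1)

Note that with a single path, alpha evaluates to 1 and the increase
reduces to 1/w_r, i.e., regular TCP behavior.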
3 Testbed Setup

To investigate the behavior of MPTCP in practice, we created three
testbed topologies representing the scenarios of Sections 4, 5, and
6. We use server and client PCs that run MPTCP-enabled Linux kernels,
namely MPTCP for the Linux kernel 3.0.0 released in February 2012. In
all our scenarios, laptop PCs are used as routers. We install the
Click modular router software [14] on the routers to implement
topologies with different characteristics. Iperf is used to create
multiple connections.

In our scenarios, we implement links with configurable bandwidth and
delay, and we set the parameters of the RED queues following the
structure in [15]. For a 10 Mbps link, we set the packet-loss
probability to 0 up to a queue size of 25 Maximum Segment Size (MSS);
it then grows linearly to 0.1 at 50 MSS, and again linearly up to 1
at 100 MSS. The parameters are proportionally adapted when the link
capacity changes.
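The following small helper (ours, for illustration only) restates
this piecewise-linear drop profile as code:

   def red_drop_probability(queue_mss, capacity_mbps=10.0):
       # Drop profile of our RED queues, normalized to the 10 Mbps
       # reference: 0 up to 25 MSS, linear to 0.1 at 50 MSS, then
       # linear to 1.0 at 100 MSS. Thresholds scale with capacity.
       q = queue_mss / (capacity_mbps / 10.0)
       if q <= 25:
           return 0.0
       if q <= 50:
           return 0.1 * (q - 25) / 25.0
       if q <= 100:
           return 0.1 + 0.9 * (q - 50) / 50.0
       return 1.0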
To verify that the problems observed are caused by the congestion-
control algorithm of MPTCP and not by some unknown problem in our
testbed, we perform a mathematical analysis of MPTCP, based on a
fixed-point analysis. As we will see, the mathematical results
confirm our measurement results. The details of this analysis are
available in [12].

4 Scenario A: MPTCP can penalize regular TCP users

Consider a network with two types of users. There are N1 users of
type1, each with a high-speed private connection. These users access
different files on a media streaming server. The server has a network
connection with a capacity limit of N1*C1 Mbps. Type1 users can
activate a second connection through a shared AP by using MPTCP.
There are also N2 users of type2, connected to the shared AP, who
download their content from the Internet. The shared AP has a
capacity of N2*C2 Mbps.

We implement this scenario in a testbed as shown in Figure 1. Within
router PCs R1 and R2, we implement links with capacities N1*C1 and
N2*C2 and RTTs of 150 ms (including queuing delay), modeling the
bottleneck at the server side and the shared AP, respectively. High-
speed connections are used to implement the private connections of
type1 users.

                                      _____
      N1                             |     |       /-- Servers
     Type1 ------- high speed -------|N1.C1|------|--- for type1
     users ---\    connections    /--|_____|       \-- users
               \                 /   router R1
                \               /
                 \    _____    /
                  \--|     |--/
      N2             |N2.C2|                       /-- Servers
     Type2 ----------|_____|----------------------|--- for type2
     users           router R2                     \-- users

   Figure 1: Testbed implementation of Scenario A: router R1
   implements the bottleneck at the server side and router R2
   implements the shared AP bottleneck.

When type1 users use only their private connections, each type1 user
receives a rate of C1 and each type2 user receives a rate of C2. By
upgrading type1 users to MPTCP, we observe, through measurements and
mathematical analysis, that the throughput of type2 users decreases
significantly, while type1 users receive the same throughput as
before, because the bottleneck for their connections is at the server
side. Table 1 reports the throughput received by the users after
upgrading type1 users to MPTCP, for C1=C2=1 Mbps. For each case, we
take 5 measurements; in each case, the confidence intervals are very
small (less than 0.01 Mbps).

In [12, Section 3.2], we provide a mathematical analysis of MPTCP
that confirms our measurements: the predicted rate of type1 users is
always 1 Mbps, and the predicted rate of type2 users is,
respectively, 0.74 Mbps when N1=N2=10 and 0.50 Mbps when N1=30 and
N2=10.

              +-------------+------------------------------------+
              | Type1 users |     Type1 users are multipath      |
              |     are     +---------+--------------------------+
              | single path |  MPTCP  |    optimal algorithm     |
              |             |         +------------+-------------+
              |             |         |with p. cost|w/out p. cost|
              |(measurement)| (meas.) |  (theory)  |  (theory)   |
etc
+------+------+-------------+---------+------------+-------------+
|N1=10 |type1 |    0.98     |  0.96   |     1      |      1      |
+------+------+-------------+---------+------------+-------------+
|N2=10 |type2 |    0.98     |  0.70   |    0.94    |      1      |
+------+------+-------------+---------+------------+-------------+
|N1=30 |type1 |    0.98     |  0.98   |     1      |      1      |
+------+------+-------------+---------+------------+-------------+
|N2=10 |type2 |    0.98     |  0.44   |    0.8     |      1      |
+------+------+-------------+---------+------------+-------------+
 meas.=measurements, p.=probing, w/out=without. Values are in Mbps.

   Table 1: Throughput obtained by type1 and type2 users in Scenario
   A: upgrading type1 users to MPTCP decreases the throughput of
   type2 users with no benefit for type1 users. The problem is much
   less critical with the optimal algorithm with probing cost.

We observe that MPTCP exhibits problem (P1): upgrading type1 users to
MPTCP penalizes type2 users without any gain for type1 users. As the
number of type1 users increases, the throughput of type2 users
decreases, but the throughput of type1 users does not change, as it
is limited by the capacity of the streaming server. For N1=N2, type2
users see a decrease of 30%. When N1=3*N2, this decrease is 55%.

We compare the performance of MPTCP with two theoretical baselines.
They serve as references to show how far MPTCP with LIA is from the
optimum. We show in Section 7 that it is possible to replace LIA with
an alternative that keeps the same non-flappiness and responsiveness
but performs closer to the optimum.

The first baseline is an algorithm that provides theoretically
optimal resource pooling in the network (as discussed in [7] and
several other theoretical papers). We refer to it as the "optimal
algorithm without probing cost".

In practice, however, the value of a congestion window is bounded
below by 1 MSS. Hence, with a window-based congestion-control
algorithm, a minimum probing traffic of 1 MSS per RTT will be sent
over each path. We therefore introduce a second theoretical baseline,
the "optimal algorithm with probing cost": it provides optimal
resource pooling in the network given that a minimum probing traffic
of 1 MSS per RTT is sent over each path.
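To give an idea of the magnitude of this probing cost, consider the
following back-of-the-envelope computation (ours; it assumes an MSS
of 1460 bytes together with the 150 ms RTT used in our testbed):

   MSS = 1460 * 8     # one segment in bits (assumption: 1460-byte MSS)
   RTT = 0.150        # seconds, as in our testbed

   probe = MSS / RTT / 1e6    # minimum probing rate per path, in Mbps
   print(probe)               # ~0.078 Mbps

   # Scenario A, N1 = 10: probing traffic of the type1 users on the
   # shared AP (capacity N2*C2 = 10 Mbps)
   print(10 * probe)          # ~0.78 Mbps out of the AP's 10 Mbps

This is the order of magnitude of the gap between the two baselines
for type2 users in Table 1 (0.94 versus 1 Mbps).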
We show the performance of these optimal algorithms in Table 1. Using
the optimal algorithm without probing cost, the entire capacity of
the shared AP is allocated to type2 users; hence, all users in the
network receive a throughput of 1 Mbps. Using the optimal algorithm
with probing cost, type1 users send only 1 MSS per RTT over the
shared AP. Hence, we observe a decrease in the throughput of type2
users; however, the decrease is much smaller than what we observe
with MPTCP. The performance of our proposed alternative to LIA is
shown in Section 7, Table 4.

This performance problem of MPTCP can be explained by the fact that
LIA does not fully balance congestion. For N1=N2, we measure loss
probabilities of p1=0.009 and p2=0.02 at routers R1 and R2,
respectively. For N1=3*N2, p1 remains almost the same and p2
increases to 0.05. LIA excessively increases congestion on the shared
AP, which does not comply with goal 3. In [12], we propose an
alternative to LIA; using this algorithm, we have p1=0.009 and
p2=0.012 for N1=10, and p2=0.018 for N1=30. Hence, it is possible to
provide better congestion balancing in the network.
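As a sanity check (not part of the analysis in [12]), these loss
rates can be plugged into the well-known TCP loss-throughput
approximation, rate ~ (MSS/RTT)*sqrt(3/(2p)), again assuming an MSS
of 1460 bytes:

   from math import sqrt

   MSS = 1460 * 8   # bits (assumption: 1460-byte MSS)
   RTT = 0.150      # seconds

   def tcp_rate_mbps(p):
       # rate ~ (MSS/RTT) * sqrt(3/(2p)), the standard approximation
       # of the throughput of a regular TCP flow under loss rate p
       return MSS / RTT * sqrt(3.0 / (2.0 * p)) / 1e6

   print(tcp_rate_mbps(0.02))   # ~0.67 Mbps; measured type2 rate: 0.70
   print(tcp_rate_mbps(0.05))   # ~0.43 Mbps; measured type2 rate: 0.44

The predictions are close to the measured type2 rates in Table 1,
which supports attributing the throughput loss to the extra
congestion that LIA creates on the shared AP.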
Because of some practical issues, we did not study larger numbers of
connections in our testbed. However, we have mathematical results
(using LIA's loss-throughput formula [12]) as well as simulation
results (using a flow-level simulator) that confirm our observations
for larger numbers of connections.

5 Scenario B: MPTCP can penalize other MPTCP users

Consider the following multi-homing scenario. We have four Internet
Service Providers (ISPs): X, Y, Z, and T. Y is a local ISP in a small
city that connects to the Internet through Z. X, Z, and T are nation-
wide service providers, connected to each other through high-speed
links. X provides Internet services to users in the city and is a
competitor of Y. The ISPs have access capacity limits of CX, CY, CZ,
and CT, respectively. Z and T host different video streaming servers.

There are two types of users: Blue users download content from
servers in Z, and Red users download from servers in T. To increase
their reliability, Blue users use multi-homing and are connected to
both ISPs X and Y. Red users can connect either only to Y or to both
X and Y.

                   ____                          ____
     Blue  |--b1--|    |-----------b1-----------|    |--b1--\  Servers
     users |-\    | X  |                        | Z  |       >  for
            \   ..|____|..                   /--|____|--b2--/  Blue
             \    .      .                  /                  users
             b2   r2      r2              b2
               \  .        .             /
                \  ____     .    ____   /
                 \|    |--b2-.--|    |-/
     Red   |......| Y  |   ..   | T  |.....r2.....\  Servers
     users |--r1--|____|---r1---|____|-----r1-----|  for Red users

   Figure 2: Testbed implementation of Scenario B. Blue users
   transmit over paths b1 and b2. Red users use path r1, but can
   upgrade to MPTCP by establishing a second connection through path
   r2 (dotted line).

We implement the scenario in a testbed similar to Figure 2. b1 and b2
are the paths available to Blue users. Red users use path r1, but can
upgrade to MPTCP by establishing a second connection through path r2.
The measurement results are reported in Table 2 for a setting with
CX=27, CT=36, and CY=CZ=100, all in Mbps, with 15 Red and 15 Blue
users. RTTs are around 150 ms (including queuing delay) over all
paths. We also show the performance of the theoretically optimal
algorithms with and without probing cost.

We observe that when Red users connect only to ISP Y, the aggregate
throughput of users is close to the cut-set bound of 63 Mbps (CX +
CT), although Blue users get a higher share of the network bandwidth.
Now consider that Red users upgrade to MPTCP by establishing a second
connection through X (shown by the dotted line in Figure 2). Our
results in Table 2 show that Red users do not receive any higher
throughput, while the average rate of Blue users drops by 20%,
resulting in a drop of 13% in aggregate throughput.

            +-------------------------+-------------------------+
            |Red users are single-path| Red users are multipath |
            +-------------------------+-------------------------+
            |     Blue users use      | Blue and Red users use  |
            | MPTCP |optimal algorithm| MPTCP |optimal algorithm|
            |       +--------+--------+       +--------+--------+
            |       |with p. |w/out p.|       |with p. |w/out p.|
            |       |  cost  |  cost  |       |  cost  |  cost  |
            |(meas.)|(theory)|(theory)|(meas.)|(theory)|(theory)|
 +----------+-------+--------+--------+-------+--------+--------+
 |Red users |  1.5  |  2.1   |  2.1   |  1.4  |  2.04  |  2.1   |
 +----------+-------+--------+--------+-------+--------+--------+
 |Blue users|  2.5  |  2.1   |  2.1   |  2.0  |  2.04  |  2.1   |
 +----------+-------+--------+--------+-------+--------+--------+
  meas.=measurements, p.=probing, w/out=without. Values are in Mbps.

   Table 2: Throughput received by users before and after upgrading
   Red users to MPTCP, with 15 Red and 15 Blue users. Upgrading Red
   users to MPTCP decreases the aggregate throughput by 13% with no
   benefit for Red users.

In [12, Section 3.3], we also provide a mathematical analysis of
MPTCP. It predicts that upgrading Red users to MPTCP reduces the rate
of Blue users by 21%, resulting in a 14% decrease in aggregate
throughput. Hence, our mathematical results confirm our observations
from the measurements. Similar behavior is predicted for other values
of CX and CT [12, Figure 4a].

Using the optimal algorithm without probing cost, Red users transmit
only over path r1, and Blue users split their traffic over paths b1
and b2 so as to equalize the rates of Blue and Red users. Upgrading
Red users to multipath does not change this allocation; hence, we
observe no decrease in the aggregate throughput or in the rate of any
user. Using the optimal algorithm with probing cost, the rates of
Blue and Red users decrease by 3% when we upgrade Red users to
multipath, since Red users are forced to send 1 MSS per RTT over path
r2. This decrease is much smaller than what we observe using MPTCP
with LIA. The performance of our proposed alternative to LIA is shown
in Section 7, Table 5.

As in Scenario A, the problem can be attributed to the excessive
amount of traffic sent over congested paths. This again illustrates
that MPTCP fails to balance congestion in the network.

6 Scenario C: MPTCP is excessively aggressive towards TCP users

Consider a scenario with N1 multipath users, N2 single-path users,
and two APs with capacities N1*C1 and N2*C2 Mbps. Multipath users
connect to both APs and share AP2 with the single-path users. All
users download their content from the Internet. This scenario is
implemented in a testbed similar to Figure 3.

                                       _____
      N1                              |     |        /-- Servers
     multipath   ---------------------|N1.C1|-------|--- for multipath
     users       ---\                 |_____|    /--|    \-- users
                     \                router R1 /
                      \                        /
                       \       _____          /
                        \-----|     |--------/
      N2                      |N2.C2|                /-- Servers
     single-path  ------------|_____|---------------|--- for single-
     users                    router R2              \-- path users

   Figure 3: Testbed implementation of Scenario C: routers R1 and R2
   implement AP1 and AP2 with capacities N1*C1 and N2*C2 Mbps.

If the allocation of rates is proportionally fair, multipath users
will use AP2 only if C1 < C2. Hence, if C1 >= C2, a fair multipath
user will not transmit over AP2. However, using MPTCP with LIA,
multipath users receive a disproportionately larger share of the
bandwidth.
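This claim is easy to verify numerically. The following script (ours,
illustrative only) maximizes the proportional-fairness objective,
i.e., the sum of the logarithms of the user rates, over the traffic t
that each multipath user sends through AP2 in the symmetric case:

   from math import log

   def pf_objective(t, n1=10, n2=10, c1=1.0, c2=1.0):
       # Each multipath user gets C1 from AP1 plus t over AP2; each
       # single-path user then gets C2 - (N1/N2)*t from AP2.
       x1 = c1 + t
       x2 = c2 - (n1 / n2) * t
       return n1 * log(x1) + n2 * log(x2)

   # Grid search over feasible t >= 0
   best_t = max((t / 1000.0 for t in range(0, 999)),
                key=pf_objective)
   print(best_t)   # 0.0 when C1 = C2: fair multipath users avoid AP2

   # With C1 = 0.5 < C2 = 1, the optimum moves to t > 0:
   best_t = max((t / 1000.0 for t in range(0, 999)),
                key=lambda t: pf_objective(t, c1=0.5))
   print(best_t)   # 0.25 > 0: multipath users now use AP2

At t=0, the derivative of the objective is N1*(1/C1 - 1/C2), which is
positive only if C1 < C2; this is precisely the statement above.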
We implement the scenario in our testbed and measure the performance
of MPTCP with LIA. Table 3 reports the throughput received by single-
path and multipath users. We present results for N2=10 and two values
of N1 (10 and 30), with C1=C2=1 Mbps. RTTs are around 150 ms
(including queuing delay). We also present the performance of the
optimal (proportionally fair) algorithms.

As C1=C2, for any fairness criterion, multipath users should not use
AP2. Our results show that MPTCP users are disproportionately
aggressive and exhibit problem (P2); hence, single-path users receive
a much smaller share than they should. For N1=N2, single-path users
see a decrease of about 30% in their received throughput compared to
a fair allocation. When N1=3*N2, this decrease is around 55%.

These measurements are confirmed by our mathematical analysis, as
shown in [12, Section 3.4]. The predicted rate of multipath users is
1.3 Mbps for N1=N2=10 and 1.17 Mbps when N1=30 and N2=10. For single-
path users, the predicted rate is 0.7 Mbps when N1=N2=10 and 0.48
Mbps when N1=30 and N2=10.

                         +---------------+--------------------------+
                         |multipath users|   multipath users use    |
                         |      use      |    optimal algorithm     |
                         |     MPTCP     +------------+-------------+
                         | (measurement) |with p. cost|w/out p. cost|
                         |               |  (theory)  |  (theory)   |
 +------+-----------------+---------------+------------+-------------+
 |N1=10 |multipath users  |      1.3      |    1.04    |      1      |
 |      +-----------------+---------------+------------+-------------+
 |N2=10 |single-path users|     0.68      |    0.94    |      1      |
 +------+-----------------+---------------+------------+-------------+
 |N1=30 |multipath users  |     1.19      |    1.04    |      1      |
 |      +-----------------+---------------+------------+-------------+
 |N2=10 |single-path users|     0.38      |    0.8     |      1      |
 +------+-----------------+---------------+------------+-------------+
  p.=probing, w/out=without. Values are in Mbps.

   Table 3: Throughput obtained by single-path and multipath users in
   Scenario C: MPTCP is excessively aggressive towards TCP users and
   performs far from how an optimal algorithm would perform.

The optimal algorithm without probing cost provides proportional
fairness among the users. Using the optimal algorithm with probing
cost, single-path users receive a rate less than what a
proportionally fair algorithm would provide them. However, as we
observe, the problem is much less critical than with MPTCP. The
performance of our proposed alternative to LIA is shown in Section 7,
Table 6.

These results clearly show that MPTCP suffers from fairness issues.
The problem occurs because LIA fails to fully satisfy goal 3: as in
Scenarios A and B, MPTCP sends an excessive amount of traffic over
the congested paths.

7 Can the suboptimality of MPTCP with LIA be fixed?

We have shown in Sections 4, 5, and 6 that MPTCP with LIA performs
far from an optimal algorithm. The question is: is it possible to
modify the congestion-control algorithm of MPTCP so that it performs
closer to the optimum? To answer this question, we implement a new
congestion-control algorithm for MPTCP, called the Opportunistic
"Linked Increases" Algorithm (OLIA). OLIA is described in [16] and
its performance is analyzed in [12]. In this section, we show that in
Scenarios A, B, and C, OLIA performs close to an optimal algorithm
with probing cost. Moreover, as shown in [12, Sections 4.3 and 6.2],
OLIA keeps the same non-flappiness and responsiveness as LIA.

Contrary to LIA, OLIA's design is not based on a trade-off between
responsiveness and optimal resource pooling. OLIA couples the
additive increases and uses unmodified TCP behavior in the case of a
loss. The difference between LIA and OLIA is in the increase part,
which has two terms:
* The first term is an adaptation of the increase term of the optimal
  algorithm of [7]. This term is essential to provide congestion
  balancing and fairness.
* The second term guarantees responsiveness and non-flappiness of
  OLIA. By measuring the number of bytes transmitted since the last
  loss, it reacts to events within the current window and adapts to
  changes faster than the first term.
Because OLIA is rooted in the optimal algorithm of [7], it can
provide fairness and congestion balancing; because of the second
term, it is responsive and non-flappy.
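For the reader's convenience, the following sketch shows the overall
shape of OLIA's per-ACK increase. It is illustrative only: the
authoritative definitions of the path sets and of the estimator l_r
are given in [12] and [16], and this sketch is our simplified
restatement of them:

   def olia_increase(w, rtt, l, r):
       # Illustrative sketch of OLIA's per-ACK increase on path r
       # (see [12], [16] for the authoritative definitions).
       # w: windows in MSS; rtt: RTTs in seconds; l[p]: estimate of
       # the bytes sent on path p between losses.
       n = len(w)
       # Paths presumed best: those maximizing l_p^2 / rtt_p.
       metric = max(l[p] ** 2 / rtt[p] for p in range(n))
       best = {p for p in range(n) if l[p] ** 2 / rtt[p] == metric}
       max_w = {p for p in range(n) if w[p] == max(w)}
       collected = best - max_w  # best paths with non-maximal window

       # First term: adaptation of the optimal increase of [7];
       # provides congestion balancing and fairness. With a single
       # path it reduces to 1/w, i.e., regular TCP.
       term1 = (w[r] / rtt[r] ** 2) / \
               sum(w[p] / rtt[p] for p in range(n)) ** 2

       # Second term: opportunistic bias that shifts traffic towards
       # the presumed-best paths; the alpha_r sum to zero over paths.
       if collected and r in collected:
           alpha = 1.0 / (n * len(collected))
       elif collected and r in max_w:
           alpha = -1.0 / (n * len(max_w))
       else:
           alpha = 0.0

       return term1 + alpha / w[r]

   # Example: path 0 looks best (large l) but has the smaller window,
   # so it receives the positive bias.
   print(olia_increase([5.0, 10.0], [0.15, 0.15], [40.0, 20.0], 0))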
We implemented OLIA by modifying the congestion-control part of the
MPTCP implementation for the Linux kernel 3.0.0. For conciseness, we
do not describe OLIA in this document and refer to [12] and [16] for
details of the algorithm and its implementation.

We study the performance of MPTCP with OLIA through measurements in
Scenarios A, B, and C. The results are reported in Tables 4, 5, and
6. We compare the performance of our algorithm with MPTCP with LIA
and with the optimal algorithms. We observe that in all cases MPTCP
with OLIA provides a significant improvement over MPTCP with LIA, and
that it performs close to an optimal algorithm with probing cost.

              +-------------+-------------+------------+-------------+
              | Type1 users |       Type1 users are multipath        |
              |     are     +-------------+--------------------------+
              | single path |MPTCP w. OLIA|     optimal algorithm    |
              |             | [with LIA]  +------------+-------------+
              |             |             |with p. cost|w/out p. cost|
              |(measurement)|(measurement)|  (theory)  |  (theory)   |
+------+------+-------------+-------------+------------+-------------+
|N1=10 |type1 |    0.98     | 0.98 [0.96] |     1      |      1      |
+------+------+-------------+-------------+------------+-------------+
|N2=10 |type2 |    0.98     | 0.86 [0.70] |    0.94    |      1      |
+------+------+-------------+-------------+------------+-------------+
|N1=30 |type1 |    0.98     | 0.98 [0.98] |     1      |      1      |
+------+------+-------------+-------------+------------+-------------+
|N2=10 |type2 |    0.98     | 0.75 [0.44] |    0.8     |      1      |
+------+------+-------------+-------------+------------+-------------+
 p.=probing, w.=with, w/out=without. Values are in Mbps.

   Table 4: Performance of MPTCP with OLIA compared to MPTCP with LIA
   in Scenario A. We show the throughput obtained by users before and
   after upgrading type1 users to MPTCP. The values in brackets are
   for MPTCP with LIA (taken from Table 1).

            +----------------------------+----------------------------+
            | Red users are single-path  |  Red users are multipath   |
            +----------------------------+----------------------------+
            |       Blue users use       |   Blue and Red users use   |
            |  MPTCP   |optimal algorithm|  MPTCP   |optimal algorithm|
            |with OLIA +--------+--------+with OLIA +--------+--------+
            |[with LIA]|with p. |w/out p.|[with LIA]|with p. |w/out p.|
            |          |  cost  |  cost  |          |  cost  |  cost  |
            | (meas.)  |(theory)|(theory)| (meas.)  |(theory)|(theory)|
 +----------+----------+--------+--------+----------+--------+--------+
 |Red users | 1.8 [1.5]|  2.1   |  2.1   | 1.7 [1.4]|  2.04  |  2.1   |
 +----------+----------+--------+--------+----------+--------+--------+
 |Blue users| 2.2 [2.5]|  2.1   |  2.1   | 2.2 [2.0]|  2.04  |  2.1   |
 +----------+----------+--------+--------+----------+--------+--------+
  meas.=measurements, p.=probing, w/out=without. Values are in Mbps.

   Table 5: Performance of MPTCP with OLIA compared to MPTCP with LIA
   in Scenario B. We show the throughput received by users before and
   after upgrading Red users to MPTCP. The values in brackets are for
   MPTCP with LIA (taken from Table 2).

                         +---------------+--------------------------+
                         |multipath users|   multipath users use    |
                         |      use      |    optimal algorithm     |
                         |MPTCP with OLIA+------------+-------------+
                         |  [with LIA]   |with probing|w/out probing|
                         | (measurement) |    cost    |    cost     |
                         |               |  (theory)  |  (theory)   |
 +------+-----------------+---------------+------------+-------------+
 |N1=10 |multipath users  |  1.11 [1.30]  |    1.04    |      1      |
 |      +-----------------+---------------+------------+-------------+
 |N2=10 |single-path users|  0.88 [0.68]  |    0.94    |      1      |
 +------+-----------------+---------------+------------+-------------+
 |N1=30 |multipath users  |  1.1 [1.19]   |    1.04    |      1      |
 |      +-----------------+---------------+------------+-------------+
 |N2=10 |single-path users|  0.72 [0.38]  |    0.8     |      1      |
 +------+-----------------+---------------+------------+-------------+
  w/out=without. Values are in Mbps.

   Table 6: Performance of MPTCP with OLIA compared to MPTCP with LIA
   in Scenario C. We show the throughput obtained by single-path and
   multipath users. The values in brackets are for MPTCP with LIA
   (taken from Table 3).

These results show that it is possible to perform close to an optimal
algorithm with probing cost by using a TCP-like algorithm. Moreover,
we show in [12, Sections 4.3 and 6.2] that MPTCP with OLIA is as
responsive and non-flappy as MPTCP with LIA.
This shows that it is possible to build a practical multipath
congestion control that works close to an optimal algorithm with
probing cost.

In [17], Chen et al. study how MPTCP with LIA and with OLIA performs
in the wild in a common wireless environment, namely using WiFi and
cellular simultaneously. Their results show that MPTCP with OLIA is
very responsive to changes in the environment and always outperforms
MPTCP with LIA. Furthermore, using experimental design, Paasch et al.
[18] show that MPTCP with OLIA satisfies the design goals of MPTCP in
a very wide range of scenarios and always outperforms MPTCP with LIA.

8 Conclusion

We have shown that MPTCP with LIA suffers from important performance
issues. Moreover, it is possible to build an alternative to LIA that
performs close to an optimal algorithm with probing cost while being
as responsive and non-flappy as LIA. Hence, we believe that the MPTCP
working group should revisit the congestion-control part of MPTCP and
that an alternative algorithm, such as OLIA [12, 16], should be
considered.

9 References

9.1 Normative References

   [1]  Bradner, S., "Key words for use in RFCs to Indicate
        Requirement Levels", BCP 14, RFC 2119, March 1997.

   [2]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
        Control", RFC 5681, September 2009.

   [3]  Ford, A., Raiciu, C., Greenhalgh, A., and M. Handley,
        "Architectural Guidelines for Multipath TCP Development",
        RFC 6182, March 2011.

9.2 Informative References

   [4]  Raiciu, C., Handley, M., and D. Wischik, "Coupled Congestion
        Control for Multipath Transport Protocols", RFC 6356,
        October 2011.

   [5]  Kelly, F.P., "Mathematical modelling of the Internet", in
        Mathematics Unlimited - 2001 and Beyond, Springer, 2001.

   [6]  Kelly, F.P., Maulloo, A.K., and D.K.H. Tan, "Rate control for
        communication networks: shadow prices, proportional fairness
        and stability", Journal of the Operational Research Society,
        49, 1998.

   [7]  Kelly, F. and T. Voice, "Stability of end-to-end algorithms
        for joint routing and rate control", ACM SIGCOMM CCR,
        vol. 35, no. 2, pp. 5-12, 2005.

   [8]  Han, H., Shakkottai, S., Hollot, C.V., Srikant, R., and
        D. Towsley, "Multi-path TCP: a joint congestion control and
        routing scheme to exploit path diversity in the Internet",
        IEEE/ACM Transactions on Networking, 14, 2006.

   [9]  Wischik, D., Handley, M., and C. Raiciu, "Control of
        multipath TCP and optimization of multipath routing in the
        Internet", NetCOOP, 2009.

   [10] Wischik, D., Raiciu, C., Greenhalgh, A., and M. Handley,
        "Design, implementation and evaluation of congestion control
        for multipath TCP", USENIX NSDI, 2011.

   [11] Raiciu, C., Barre, S., Pluntke, C., Greenhalgh, A., Wischik,
        D., and M. Handley, "Improving datacenter performance and
        robustness with multipath TCP", ACM SIGCOMM, 2011.

   [12] Khalili, R., Gast, N., Popovic, M., Upadhyay, U., and
        J.-Y. Le Boudec, "MPTCP is not Pareto-optimal: Performance
        issues and a possible solution", ACM CoNEXT, 2012. (An
        extended version is to appear in IEEE/ACM Transactions on
        Networking.)

   [13] Handley, M., Wischik, D., and C. Raiciu, "Coupled Congestion
        Control for MPTCP", presented at the 77th IETF meeting,
        Anaheim, California.

   [14] Kohler, E., Morris, R., Chen, B., Jannotti, J., and
        M.F. Kaashoek, "The Click modular router", ACM Transactions
        on Computer Systems, 18, August 2000.
"Random early detection gateways 722 for congestion avoidance", Trans. on Net., 1, August, 723 1993. 725 [16] R. Khalili, N. Gast, M. Popovic, J.-Y. Le Boudec, 726 "Opportunistic Linked-Increases Congestion Control 727 Algorithm for MPTCP", draft-khalili-mptcp-congestion- 728 control-02. 730 [17] Chen Y.-C, Y.-S. Lim, R. J. Gibbens, E. M. Nahum, R. 731 Khalili, and D. Towsley, "A Measurement-based Study of 732 Multipath TCP Performance over Wireless Networks." ACM IMC 733 2013. 735 [18] C. Paasch, R. Khalili, and O. Bonaventure, "On the 736 Benefits of Applying Experimental Design to Improve 737 Multipath TCP." ACM CoNEXT 2013. 739 Authors' Addresses 741 Ramin Khalili 742 T-Labs/TU-Berlin 743 TEL 3, Ernst-Reuter-Platz 7 744 10587 Berlin 745 Germany 747 Phone: +49 30 8353 58276 748 EMail: ramin@net.t-labs.tu-berlin.de 750 Nicolas Gast 751 EPFL IC ISC LCA2 752 Station 14 753 CH-1015 Lausanne 754 Switzerland 756 Phone: +41 21 693 1254 757 EMail: nicolas.gast@epfl.ch 759 Miroslav Popovic 760 EPFL IC ISC LCA2 761 Station 14 762 CH-1015 Lausanne 763 Switzerland 765 Phone: +41 21 693 6466 766 EMail: miroslav.popovic@epfl.ch 768 Jean-Yves Le Boudec 769 EPFL IC ISC LCA2 770 Station 14 771 CH-1015 Lausanne 772 Switzerland 774 Phone: +41 21 693 6631 775 EMail: jean-yves.leboudec@epfl.ch