RTP Media Congestion Avoidance                                  M. Welzl
Techniques (rmcat)                                    University of Oslo
Internet-Draft                                          January 19, 2013
Intended status: Experimental
Expires: July 23, 2013

               Coupled congestion control for RTP media
                    draft-welzl-rmcat-coupled-cc-00

Abstract

   When multiple congestion controlled RTP sessions traverse the same
   network bottleneck, it can be beneficial to combine their controls
   such that the total on-the-wire behavior is improved.  This document
   describes such a method for flows that have the same sender, in a
   way that is as flexible and simple as possible while minimizing the
   amount of changes needed to existing RTP applications.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on July 23, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

1.  Introduction

   When there is enough data to send, a congestion controller must
   increase its sending rate until the path's available capacity has
   been reached; depending on the controller, sometimes the rate is
   increased further, until packets are ECN-marked or dropped.  In the
   public Internet, this is currently the only way to get any feedback
   from the network that can be used as an indication of congestion.
   This process inevitably creates undesirable queuing delay -- an
   effect that is amplified when multiple congestion controlled
   connections traverse the same network bottleneck.  When such
   connections originate from the same host, it would therefore be
   ideal to use a single sender-side congestion controller that
   determines the overall allowed sending rate, and then use a local
   scheduler to assign a proportion of this rate to each RTP session.
   This way, priorities could also be implemented quite easily, as a
   function of the scheduler; honoring user-specified priorities is,
   for example, required by rtcweb [rtcweb-usecases].

   The Congestion Manager (CM) [RFC3124] provides a single congestion
   controller with a scheduling function just as described above.  It
   is, however, hard to implement because it requires an additional
   congestion controller and removes all per-connection congestion
   control functionality, which is quite a significant change to
   existing RTP-based applications.  This document presents a method
   that is easier to implement than the CM and also requires less
   significant changes to existing RTP-based applications.  It
   attempts to roughly approximate the CM behavior by sharing
   information between existing congestion controllers, akin to
   "Ensemble Sharing" in [RFC2140].

2.  Definitions

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

   Available Bandwidth:
      The available bandwidth is the nominal link capacity minus the
      rate of traffic on the link, i.e., the amount of traffic that
      traversed the link during a certain time interval, divided by
      the length of that interval.

   Bottleneck:
      The first link with the smallest available bandwidth along the
      path between a sender and receiver.

   Flow:
      A flow is the entity that congestion control is operating on.
      It could, for example, be a transport layer connection, an RTP
      session, or a subsession that is multiplexed onto a single RTP
      session together with other subsessions.

   Flow Group Identifier (FGI):
      A unique identifier for each subset of flows that is limited by
      a common bottleneck.

   Flow State Exchange (FSE):
      The entity that maintains information that is exchanged between
      flows.

   Flow Group (FG):
      A group of flows having the same FGI.

   Shared Bottleneck Detection (SBD):
      The entity that determines which flows traverse the same
      bottleneck in the network, or the process of doing so.

3.  Limitations

   Sender-side only:
      Coupled congestion control as described here only operates
      inside a single host on the sender side.  This is because,
      irrespective of where the major decisions for congestion control
      are taken, the sender of a flow needs to eventually decide the
      transmission rate.
      Additionally, the necessary information about how much data an
      application can currently send on a flow is typically only
      available at the sender side, making the sender an obvious
      choice for placement of the elements and mechanisms described
      here.  It is recognized that flows that have different senders
      but the same receiver, or different senders and different
      receivers, can also share a bottleneck; such scenarios have been
      omitted for simplicity, and could be incorporated in future
      versions of this document.  Note that limiting the flows on
      which coupled congestion control operates merely limits the
      benefits derived from the mechanism.

   Shared bottlenecks do not change quickly:
      As per the definition above, a bottleneck depends on cross
      traffic, and since such traffic can heavily fluctuate,
      bottlenecks can change at a high frequency (e.g., there can be
      oscillation between two or more links).  This means that, when
      flows are partially routed along different paths, they may
      quickly change between sharing and not sharing a bottleneck.
      For simplicity, it is assumed here that a shared bottleneck is
      valid for a time interval that is significantly longer than the
      interval at which congestion controllers operate.  Note that,
      for the only SBD mechanism defined in this document
      (multiplexing on the same five-tuple), the notion of a shared
      bottleneck remains correct even in the presence of fast traffic
      fluctuations: since all flows that are assumed to share a
      bottleneck are routed in the same way, if the bottleneck
      changes, it will still be shared.

4.  Architectural overview

   Figure 1 shows the elements of the architecture for coupled
   congestion control: the Flow State Exchange (FSE), Shared
   Bottleneck Detection (SBD) and Flows.  The FSE is a storage
   element.  It is passive in that it does not actively initiate
   communication with flows or the SBD; its only active role is
   internal state maintenance (e.g., an implementation could use soft
   state to remove a flow's data after long periods of inactivity).
   Every time a flow's congestion control mechanism would normally
   update its sending rate, the flow instead updates information in
   the FSE and performs a query on the FSE, leading to a sending rate
   that is often different from what the congestion controller
   originally determined.  Using information about/from the currently
   active flows, SBD updates the FSE with the correct Flow Group
   Identifiers (FGIs).

             -------  <--- Flow 1
             | FSE |  <--- Flow 2 ..
             -------  <--- .. Flow N
                ^
                |                |
             -------             |
             | SBD |  <----------|
             -------

         Figure 1: Coupled congestion control architecture

   Since everything shown in Figure 1 is assumed to operate on a
   single host (the sender) only, this document only describes aspects
   that have an influence on the resulting on-the-wire behavior.  For
   instance, it does not define how many bits must be used to
   represent FGIs, or in which way the entities communicate.
   Implementations can take various forms: for instance, all the
   elements in the figure could be implemented within a single
   application, thereby operating on flows generated by that
   application only.  Another alternative could be to implement both
   the FSE and SBD together in a separate process which different
   applications communicate with via some form of Inter-Process
   Communication (IPC).
   Such an implementation would extend the scope to flows generated by
   multiple applications.  The FSE and SBD could also be included in
   the Operating System kernel.

5.  Roles

   This section gives an overview of the roles of the elements of
   coupled congestion control, and provides an example of how coupled
   congestion control can operate.

5.1.  SBD

   SBD uses knowledge about the flows to determine which flows belong
   in the same Flow Group (FG), and assigns FGIs accordingly.  This
   knowledge can be derived from measurements, by considering
   correlations among measured delay and loss as an indication of a
   shared bottleneck, or it can be based on the simple assumption that
   packets sharing the same five-tuple (IP source and destination
   address, protocol, and transport layer port number pair) are
   typically routed in the same way.  The latter method is the only
   one specified in this document: SBD MUST consider all flows that
   use the same five-tuple to belong to the same FG.  This
   classification applies to certain tunnels, or RTP flows that are
   multiplexed over one transport (cf. [transport-multiplex]).  In one
   way or another, such multiplexing will probably be recommended for
   use with rtcweb [rtcweb-rtp-usage].  Port numbers are needed as
   part of the classification due to mechanisms like Equal-Cost
   Multi-Path (ECMP) routing, which use different paths for packets
   towards the same destination, but are typically configured to keep
   packets from the same transport connection on the same path.

5.2.  FSE

   The FSE contains a list of all flows that have registered with it.
   For each flow, it stores:

   o  a unique flow number to identify the flow

   o  the FGI of the FG that it belongs to (based on the definitions
      in this document, a flow has only one bottleneck, and can
      therefore be in only one FG)

   o  a priority P, which here is assumed to be represented as a
      floating point number in the range from 0.1 (unimportant) to 1
      (very important).  A negative value is used to indicate that a
      flow has terminated.

   o  the calculated rate CR, i.e., the rate that was most recently
      calculated by the flow's congestion controller

   o  the desired rate DR.  This can be smaller than the calculated
      rate if the application feeding into the flow has less data to
      send than the congestion controller would allow.  In case of a
      greedy flow, DR must be set to CR.  A DR value that is larger
      than CR indicates that the flow has taken leftover bandwidth
      from a non-greedy flow.

   o  S_CR, the sum of the calculated rates of all flows in the same
      FG (including the flow itself), as seen by the flow during its
      last rate update.

   The information listed here is enough to implement the sample flow
   algorithm given below.  FSE implementations could easily be
   extended to store, e.g., a flow's current sending rate for
   statistics gathering or future potential optimizations.
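
   As a non-normative illustration, the per-flow record described
   above could be represented as follows.  This is a minimal sketch in
   Python; all names are purely illustrative and not part of this
   specification:

      # Illustrative sketch only; field names are not normative.
      from dataclasses import dataclass

      @dataclass
      class FseFlowRecord:
          flow_id: int   # unique flow number
          fgi: int       # Flow Group Identifier, assigned by SBD
          p: float       # priority P: 0.1 (unimportant) to 1 (very
                         # important); negative once the flow terminates
          cr: float      # calculated rate CR from the congestion
                         # controller
          dr: float      # desired rate DR; equal to CR for greedy flows
          s_cr: float    # S_CR, the sum of calculated rates in the FG
                         # as seen by this flow at its last rate update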

5.3.  Flows

   Flows register themselves with SBD and the FSE when they start,
   deregister from the FSE when they stop, and carry out an UPDATE
   function call every time their congestion controller calculates a
   new sending rate.  Via UPDATE, they provide the newly calculated
   rate and the desired rate (less than the calculated rate in case of
   non-greedy flows, the same otherwise).  UPDATE returns a rate that
   should be used instead of the rate that the congestion controller
   has determined.

   Below, an example algorithm is described.  While other algorithms
   could be used instead, the same algorithm must be applied to all
   flows.  As the algorithm is described here, the operations are
   carried out by the flows themselves, but they are the same for all
   flows.  This means that the algorithm could, for example, be
   implemented in a library that provides registration and
   deregistration functions as well as the UPDATE function.  To
   minimize the number of changes to existing applications, one could,
   however, also embed this functionality in the FSE element.

5.3.1.  Example algorithm

   (1)  When a flow starts, it registers itself with SBD and the FSE.
        CR and DR are initialized with the congestion controller's
        initial rate.  SBD will assign the correct FGI.  When a flow
        is assigned an FGI, its S_CR is initialized to be the sum of
        the calculated rates of all the flows in its FG.

   (2)  When a flow stops, it sets its DR to 0 and negates P.

   (3)  Every time the flow's congestion controller determines a new
        sending rate new_CR, the flow calls UPDATE with new_CR and its
        new desired rate new_DR (taken to be "infinity" in case of a
        greedy flow with an unknown maximum rate).  UPDATE carries out
        the following tasks:

        (a)  For all the flows in its FG (including itself), it
             calculates the sum of all the absolute values of all
             priorities, S_P, the sum of all desired rates, S_DR, and
             the sum of all the calculated rates, new_S_CR.

        (b)  It updates CR if new_CR is smaller than the already
             stored CR value, or if new_S_CR is smaller than or equal
             to the flow's stored S_CR value.  This restriction on
             updating CR ensures that only one flow can make S_CR
             increase at a time.

        (c)  It recalculates new_S_CR using its own updated CR, and
             stores new_S_CR as its S_CR.

        (d)  It subtracts DR from S_DR, updates DR to min(new_DR, CR),
             and adds the updated DR to S_DR.

        (e)  It initializes the total leftover rate TLO to 0.  Then,
             for every other flow i in its FG that has DR(i) < CR(i),
             it calculates the leftover rate as abs(P(i))/S_P * S_CR -
             DR(i), adds the flow's leftover rate to TLO, and sets
             DR(i) to CR(i).  This makes flow i look like a greedy
             flow and ensures that the leftover rate can only be taken
             from it once.  Finally, if P(i) is negative, it removes
             flow i's entry from the FSE.

        (f)  It calculates the new sending rate as min(new_DR, P/S_P *
             S_CR + TLO).  This gives the flow the correct share of
             the bandwidth based on its priority, applies an upper
             bound in case of an application-limited flow, and adds
             any potentially leftover bandwidth from non-greedy flows.

        (g)  If the flow's new sending rate is greater than DR, then
             it updates DR with the flow's new sending rate.
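
   As a non-normative illustration of steps (1) through (3), the
   following sketch shows one possible implementation of the
   registration, deregistration and UPDATE operations.  It reuses the
   illustrative FseFlowRecord from Section 5.2; all names are
   hypothetical and error handling is omitted:

      # Non-normative sketch; not part of this specification.
      INFINITY = float("inf")

      class FSE:
          def __init__(self):
              self.flows = {}     # flow number -> FseFlowRecord
              self.next_id = 0

          def register(self, fgi, priority, initial_rate):
              # Step (1): CR and DR start at the congestion controller's
              # initial rate; S_CR is the sum of the calculated rates of
              # all flows in the FG.  SBD's FGI assignment is abstracted
              # into the fgi parameter here.
              self.next_id += 1
              f = FseFlowRecord(flow_id=self.next_id, fgi=fgi, p=priority,
                                cr=initial_rate, dr=initial_rate, s_cr=0.0)
              self.flows[f.flow_id] = f
              f.s_cr = sum(g.cr for g in self.flows.values()
                           if g.fgi == fgi)
              return f.flow_id

          def deregister(self, num):
              # Step (2): a stopping flow sets its DR to 0 and negates P.
              f = self.flows[num]
              f.dr = 0.0
              f.p = -abs(f.p)

          def update(self, num, new_cr, new_dr=INFINITY):
              # Step (3): called for every new rate new_CR from the
              # flow's congestion controller; returns the rate to use.
              f = self.flows[num]
              group = [g for g in self.flows.values() if g.fgi == f.fgi]

              # (a) sums over all flows in the FG, including this one
              s_p = sum(abs(g.p) for g in group)
              s_dr = sum(g.dr for g in group)
              new_s_cr = sum(g.cr for g in group)

              # (b) only one flow at a time may make S_CR increase
              if new_cr < f.cr or new_s_cr <= f.s_cr:
                  f.cr = new_cr

              # (c) recalculate new_S_CR with the updated CR; store S_CR
              new_s_cr = sum(g.cr for g in group)
              f.s_cr = new_s_cr

              # (d) update DR and keep S_DR consistent (S_DR is tracked
              # as described but not used further in this version)
              s_dr -= f.dr
              f.dr = min(new_dr, f.cr)
              s_dr += f.dr

              # (e) collect leftover rate from non-greedy flows
              tlo = 0.0
              for g in group:
                  if g is not f and g.dr < g.cr:
                      tlo += abs(g.p) / s_p * f.s_cr - g.dr
                      g.dr = g.cr
                      if g.p < 0:
                          del self.flows[g.flow_id]

              # (f) priority share of S_CR plus leftover, capped by new_DR
              rate = min(new_dr, abs(f.p) / s_p * f.s_cr + tlo)

              # (g) DR is raised if the new sending rate exceeds it
              if rate > f.dr:
                  f.dr = rate
              return rate

   Such a sketch only illustrates the FSE bookkeeping; how a flow's
   congestion controller is made to use the returned rate is left to
   the implementation.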

   The goals of the flow algorithm are to achieve prioritization,
   improve network utilization in the face of non-greedy flows, and
   impose limits on the increase behavior such that the negative
   impact of multiple flows trying to increase their rate together is
   minimized.  It does that by assigning a flow a sending rate that
   may not be what the flow's congestion controller expected.  It
   therefore builds on the assumption that no significant
   inefficiencies arise from temporary non-greedy behavior or from
   quickly jumping to a rate that is higher than the congestion
   controller intended.  How problematic these issues really are
   depends on the controllers in use and requires careful
   per-controller experimentation.  The coupled congestion control
   mechanism described here also does not require all controllers to
   be equal; effects of heterogeneous controllers, or homogeneous
   controllers being in different states, are also subject to
   experimentation.

   There are more potential issues with the algorithm described here.
   Rule 3 b) leads to conservative behavior: it ensures that only one
   flow at a time can increase the overall sending rate.  This rule is
   probably appropriate for situations where minimizing delay is the
   major goal, but it may not fit all purposes; it also does not
   incorporate the magnitude by which a flow can increase its rate.
   Notably, despite this limitation on the overall rate of all flows
   per FGI, immediate rate jumps of single flows could become
   problematic when the FSE is used in a highly asynchronous manner,
   e.g., when flows have very different RTTs.  Rule 3 e) gives all the
   leftover rate of non-greedy flows to the first flow that updates
   its sending rate, provided that this flow needs it all (otherwise,
   its own leftover rate can be taken by the next flow that updates
   its rate).  Other policies could be applied, e.g., to divide the
   leftover rate of a flow equally among all other flows in the FGI.

5.3.2.  Example operation

   In order to illustrate the operation of the coupled congestion
   control algorithm, this section presents a toy example of two flows
   that use it.  Let us assume that both flows traverse a common
   10 Mbit/s bottleneck and use a simplistic congestion controller
   that starts out with 1 Mbit/s, increases its rate by 1 Mbit/s in
   the absence of congestion and decreases it by 2 Mbit/s in the
   presence of congestion.  For simplicity, flows are assumed to
   always operate in a round-robin fashion.  Rate numbers below
   without units are assumed to be in Mbit/s.  For illustration
   purposes, the actual sending rate is also shown for every flow in
   the FSE diagrams even though it is not really stored in the FSE.

   Flow #1 begins.  It is greedy and considers itself to have top
   priority.  This is the FSE after the flow algorithm's step 1:

      -------------------------------------------
      | # | FGI |  P  | CR |  DR  | S_CR | Rate |
      -------------------------------------------
      | 1 |  1  |  1  |  1 |  1   |  1   |  1   |
      -------------------------------------------

   Its congestion controller gradually increases its rate.
   Eventually, at some point, the FSE should look like this:

      -------------------------------------------
      | # | FGI |  P  | CR |  DR  | S_CR | Rate |
      -------------------------------------------
      | 1 |  1  |  1  | 10 |  10  |  10  |  10  |
      -------------------------------------------

   Now another flow joins.  It is also greedy, and has a lower
   priority (0.5):

      -------------------------------------------
      | # | FGI |  P  | CR |  DR  | S_CR | Rate |
      -------------------------------------------
      | 1 |  1  |  1  | 10 |  10  |  10  |  10  |
      | 2 |  1  | 0.5 |  1 |  1   |  11  |  1   |
      -------------------------------------------

   Now assume that the first flow updates its rate to 8, because the
   total sending rate of 11 exceeds the total capacity.  Let us take a
   closer look at what happens in step 3 of the flow algorithm.

      new_CR = 8.  new_DR = infinity.
      3 a) S_P = 1.5; S_DR = 11; new_S_CR = 11.
      3 b) new_CR < CR, hence CR = 8.
      3 c) new_S_CR = 9; S_CR = 9.
      3 d) DR = CR = 8; S_DR = 9.
      3 e) TLO = 0; there are no other flows with DR < CR.
      3 f) new sending rate: min(infinity, 1/1.5 * 9 + 0) = 6.
      3 g) does not apply.

   The resulting FSE looks as follows:

      -------------------------------------------
      | # | FGI |  P  | CR |  DR  | S_CR | Rate |
      -------------------------------------------
      | 1 |  1  |  1  |  8 |  8   |  9   |  6   |
      | 2 |  1  | 0.5 |  1 |  1   |  11  |  1   |
      -------------------------------------------

   The effect is that flow #1 is sending at 6 Mbit/s instead of the
   8 Mbit/s that the congestion controller derived.  Let us now assume
   that flow #2 updates its rate.  Its congestion controller detects
   that the network is not fully saturated (the actual total sending
   rate is 6+1=7) and increases its rate.

      new_CR = 2.  new_DR = infinity.
      3 a) S_P = 1.5; S_DR = 9; new_S_CR = 9.
      3 b) new_CR > CR but new_S_CR < S_CR, hence CR = 2.
      3 c) new_S_CR = 10; S_CR = 10.
      3 d) DR = CR = 2; S_DR = 10.
      3 e) TLO = 0; there are no other flows with DR < CR.
      3 f) new sending rate: min(infinity, 0.5/1.5 * 10 + 0) = 3.33.
      3 g) new sending rate > DR, hence DR = 3.33.

   The resulting FSE looks as follows:

      -------------------------------------------
      | # | FGI |  P  | CR |  DR  | S_CR | Rate |
      -------------------------------------------
      | 1 |  1  |  1  |  8 |  8   |  9   |  6   |
      | 2 |  1  | 0.5 |  2 | 3.33 |  10  | 3.33 |
      -------------------------------------------

   The effect is that flow #2 is now sending at 3.33 Mbit/s, which is
   close to half of the rate of flow #1, and leads to a total
   utilization of 6 (#1) + 3.33 (#2) = 9.33 Mbit/s.  Flow #2's sending
   rate has increased faster than its congestion controller actually
   expected.  Now, flow #1 updates its rate.  Its congestion
   controller detects that the network is not fully saturated and
   increases its rate.  Additionally, the application feeding into
   flow #1 limits the flow's sending rate to at most 2 Mbit/s.

      new_CR = 9.  new_DR = 2.
      3 a) S_P = 1.5; S_DR = 11.33; new_S_CR = 10.
      3 b) new_CR > CR and new_S_CR > S_CR, hence CR is not updated
           (since flow #2 has just increased S_CR, flow #1 cannot also
           increase it in this iteration).
      3 c) new_S_CR = 10; S_CR = 10.
      3 d) DR = 2; S_DR = 5.33.
      3 e) TLO = 0; there are no other flows with DR < CR.
      3 f) new sending rate: min(2, 1/1.5 * 10 + 0) = 2.  Note that,
           without the 2 Mbit/s limitation from the application, the
           new sending rate for flow #1 would now be 6.66 Mbit/s,
           leading to perfect network saturation (6.66 + 3.33 =
           approx. 10).
      3 g) does not apply.

   The resulting FSE looks as follows:

      -------------------------------------------
      | # | FGI |  P  | CR |  DR  | S_CR | Rate |
      -------------------------------------------
      | 1 |  1  |  1  |  8 |  2   |  10  |  2   |
      | 2 |  1  | 0.5 |  2 | 3.33 |  10  | 3.33 |
      -------------------------------------------

   Now, the total rate of the two flows is 2 + 3.33 = 5.33 Mbit/s,
   i.e., the network is significantly underutilized due to the
   limitation of flow #1.  Flow #2 updates its rate.  Its congestion
   controller detects that the network is not fully saturated and
   increases its rate.

      new_CR = 3.  new_DR = infinity.
      3 a) S_P = 1.5; S_DR = 5.33; new_S_CR = 10.
      3 b) new_CR > CR but new_S_CR = S_CR, hence CR = 3.
      3 c) new_S_CR = 11; S_CR = 11.
      3 d) DR = 3; S_DR = 5.
      3 e) TLO = 0; flow #1 has DR < CR, hence
           TLO += 1/1.5 * 11 - 2 = 5.33.  DR of flow #1 is set to 8.
           Flow #1 does not have a negative P(i) value, so its entry
           is not deleted.
      3 f) new sending rate: min(infinity, 0.5/1.5 * 11 + 5.33) = 9.
      3 g) new sending rate > DR, hence DR = 9.

   The resulting FSE looks as follows:

      -------------------------------------------
      | # | FGI |  P  | CR |  DR  | S_CR | Rate |
      -------------------------------------------
      | 1 |  1  |  1  |  8 |  8   |  10  |  2   |
      | 2 |  1  | 0.5 |  3 |  9   |  11  |  9   |
      -------------------------------------------

   Now, the total rate of the two flows is 2 + 9 = 11 Mbit/s,
   exceeding the total capacity by the 1 Mbit/s by which the
   congestion controller of flow #2 has increased its rate.  Note
   that, had flow #1 been greedy, the same total rate would have
   resulted after this iteration.  Finally, flow #1 terminates.  It
   sets P to -1 and DR to 0.  Let us assume that it terminated late
   enough for flow #2 to still experience the network in a congested
   state, i.e., flow #2 decreases its rate in the next iteration.

      new_CR = 1.  new_DR = infinity.
      3 a) S_P = 1.5; S_DR = 9; new_S_CR = 11.
      3 b) new_CR < CR, hence CR = 1.
      3 c) new_S_CR = 9; S_CR = 9.
      3 d) DR = 1; S_DR = 1.
      3 e) TLO = 0; flow #1 has DR < CR, hence
           TLO += 1/1.5 * 9 - 0 = 6.  DR of flow #1 is set to 8.
           Flow #1 has a negative P(i) value, so its entry is deleted.
      3 f) new sending rate: min(infinity, 0.5/1.5 * 9 + 6) = 9.
      3 g) new sending rate > DR, hence DR = 9.

   The resulting FSE looks as follows:

      -------------------------------------------
      | # | FGI |  P  | CR |  DR  | S_CR | Rate |
      -------------------------------------------
      | 1 |  1  | -1  |  8 |  0   |  10  |  2   |  (before deletion)
      | 2 |  1  | 0.5 |  1 |  9   |  9   |  9   |
      -------------------------------------------

   Now, the total rate, used only by flow #2, is 9 Mbit/s, which is
   the rate that it would have had alone upon reacting to congestion
   after a sending rate of 11 Mbit/s.
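
   As a purely illustrative aside, the toy example above can be
   reproduced by driving the non-normative FSE sketch from
   Section 5.3.1 with the same sequence of congestion controller
   updates.  All names are hypothetical, and the printed values match
   the walkthrough up to floating-point rounding:

      # Hypothetical driver for the toy example, relying on the FSE
      # sketch from Section 5.3.1; printed rates are in Mbit/s.
      fse = FSE()
      f1 = fse.register(fgi=1, priority=1.0, initial_rate=1.0)
      for cr in range(2, 11):                  # flow #1 ramps up to 10
          fse.update(f1, float(cr))
      f2 = fse.register(fgi=1, priority=0.5, initial_rate=1.0)
      print(fse.update(f1, 8.0))               # approx. 6
      print(fse.update(f2, 2.0))               # approx. 3.33
      print(fse.update(f1, 9.0, new_dr=2.0))   # 2.0
      print(fse.update(f2, 3.0))               # approx. 9
      fse.deregister(f1)                       # flow #1 terminates
      print(fse.update(f2, 1.0))               # approx. 9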

6.  Acknowledgements

   This document has benefitted from discussions with and feedback
   from Stein Gjessing, David Hayes, Safiqul Islam, Naeem Khademi,
   Andreas Petlund, and David Ros (who also gave the FSE its name).

7.  IANA Considerations

   This memo includes no request to IANA.

8.  Security Considerations

   In scenarios where the architecture described in this document is
   applied across applications, various cheating possibilities arise:
   e.g., reporting wrong values for the calculated rate, the desired
   rate, or the priority of a flow.  In the worst case, such cheating
   could either prevent other flows from sending or make them send at
   a rate that is unreasonably large.  The end result would be unfair
   behavior at the network bottleneck, akin to what could be achieved
   with any UDP-based application.  Hence, since this is no worse than
   UDP in general, there seems to be no significant harm in using this
   mechanism in the absence of UDP rate limiters.

   In the case of a single-user system, it should also be in the
   interest of any application programmer to give the user the best
   possible experience by using reasonable flow priorities or even
   letting the user choose them.  In a multi-user system, this
   interest may not exist, and one could imagine the worst case of an
   "arms race" situation, where applications end up setting their
   priorities to the maximum value.  If all applications do this, the
   end result is a fair allocation in which the priority mechanism is
   implicitly eliminated, and no major harm is done.

9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2140]  Touch, J., "TCP Control Block Interdependence", RFC
              2140, April 1997.

   [RFC3124]  Balakrishnan, H. and S. Seshan, "The Congestion
              Manager", RFC 3124, June 2001.

9.2.  Informative References

   [rtcweb-rtp-usage]
              Perkins, C., Westerlund, M., and J. Ott, "Web Real-Time
              Communication (WebRTC): Media Transport and Use of RTP",
              draft-ietf-rtcweb-rtp-usage-05.txt (work in progress),
              October 2012.

   [rtcweb-usecases]
              Holmberg, C., Hakansson, S., and G. Eriksson, "Web Real-
              Time Communication Use-cases and Requirements",
              draft-ietf-rtcweb-use-cases-and-requirements-10.txt
              (work in progress), December 2012.

   [transport-multiplex]
              Westerlund, M. and C. Perkins, "Multiple RTP Sessions on
              a Single Lower-Layer Transport",
              draft-westerlund-avtcore-transport-multiplexing-04.txt
              (work in progress), October 2012.

Author's Address

   Michael Welzl
   University of Oslo
   PO Box 1080 Blindern
   Oslo, N-0316
   Norway

   Phone: +47 22 85 24 20
   Email: michawe@ifi.uio.no