RTP Media Congestion Avoidance                                 M. Welzl
Techniques (rmcat)                                             S. Islam
Internet-Draft                                              S. Gjessing
Intended status: Experimental                        University of Oslo
Expires: December 16, 2013                                June 14, 2013

               Coupled congestion control for RTP media
                    draft-welzl-rmcat-coupled-cc-01

Abstract

   When multiple congestion controlled RTP sessions traverse the same
   network bottleneck, it can be beneficial to combine their controls
   such that the total on-the-wire behavior is improved.
   This document describes such a method for flows that have the same
   sender, in a way that is as flexible and simple as possible while
   minimizing the amount of changes needed to existing RTP
   applications.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 16, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Definitions
   3.  Limitations
   4.  Architectural overview
   5.  Roles
     5.1.  SBD
     5.2.  FSE
     5.3.  Flows
       5.3.1.  Example algorithm
       5.3.2.  Example operation
   6.  Acknowledgements
   7.  IANA Considerations
   8.  Security Considerations
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Appendix A.  Changes from -00 to -01
   Authors' Addresses

1.  Introduction

   When there is enough data to send, a congestion controller must
   increase its sending rate until the path's capacity has been
   reached; depending on the controller, sometimes the rate is
   increased further, until packets are ECN-marked or dropped.  This
   process inevitably creates undesirable queuing delay -- an effect
   that is amplified when multiple congestion controlled connections
   traverse the same network bottleneck.  When such connections
   originate from the same host, it would therefore be ideal to use
   only a single sender-side congestion controller which determines the
   overall allowed sending rate, and then use a local scheduler to
   assign a proportion of this rate to each RTP session.  This way,
   priorities could also be implemented quite easily, as a function of
   the scheduler; honoring user-specified priorities is, for example,
   required by rtcweb [rtcweb-usecases].

   The Congestion Manager (CM) [RFC3124] provides a single congestion
   controller with a scheduling function just as described above.
   It is, however, hard to implement because it requires an additional
   congestion controller and removes all per-connection congestion
   control functionality, which is a significant change to existing
   RTP-based applications.  This document presents a method that is
   easier to implement than the CM and requires less significant
   changes to existing RTP-based applications.  It attempts to roughly
   approximate the CM behavior by sharing information between existing
   congestion controllers, akin to "Ensemble Sharing" in [RFC2140].

2.  Definitions

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

   Available Bandwidth:
      The available bandwidth is the nominal link capacity minus the
      amount of traffic that traversed the link during a certain time
      interval, divided by that time interval.

   Bottleneck:
      The first link with the smallest available bandwidth along the
      path between a sender and receiver.

   Flow:
      A flow is the entity that congestion control operates on.  It
      could, for example, be a transport layer connection, an RTP
      session, or a subsession that is multiplexed onto a single RTP
      session together with other subsessions.

   Flow Group Identifier (FGI):
      A unique identifier for each subset of flows that is limited by
      a common bottleneck.

   Flow State Exchange (FSE):
      The entity that maintains information that is exchanged between
      flows.

   Flow Group (FG):
      A group of flows having the same FGI.

   Shared Bottleneck Detection (SBD):
      The entity that determines which flows traverse the same
      bottleneck in the network, or the process of doing so.

3.  Limitations

   Sender-side only:
      Coupled congestion control as described here only operates
      inside a single host on the sender side.
      This is because, irrespective of where the major decisions for
      congestion control are taken, the sender of a flow needs to
      eventually decide the transmission rate.  Additionally, the
      necessary information about how much data an application can
      currently send on a flow is typically only available at the
      sender side, making the sender an obvious choice for placement
      of the elements and mechanisms described here.  It is recognized
      that flows that have different senders but the same receiver, or
      different senders and different receivers, can also share a
      bottleneck; such scenarios have been omitted for simplicity, and
      could be incorporated in future versions of this document.  Note
      that limiting the flows on which coupled congestion control
      operates merely limits the benefits derived from the mechanism.

   Shared bottlenecks do not change quickly:
      As per the definition above, a bottleneck depends on cross
      traffic, and since such traffic can heavily fluctuate,
      bottlenecks can change at a high frequency (e.g., there can be
      oscillation between two or more links).  This means that, when
      flows are partially routed along different paths, they may
      quickly change between sharing and not sharing a bottleneck.
      For simplicity, it is assumed here that a shared bottleneck is
      valid for a time interval that is significantly longer than the
      interval at which congestion controllers operate.  Note that,
      for the only SBD mechanism defined in this document
      (multiplexing on the same five-tuple), the notion of a shared
      bottleneck stays correct even in the presence of fast traffic
      fluctuations: since all flows that are assumed to share a
      bottleneck are routed in the same way, if the bottleneck
      changes, it will still be shared.

4.  Architectural overview

   Figure 1 shows the elements of the architecture for coupled
   congestion control: the Flow State Exchange (FSE), Shared
   Bottleneck Detection (SBD) and Flows.  The FSE is a storage
   element.  It is passive in that it does not actively initiate
   communication with flows and the SBD; its only active role is
   internal state maintenance (e.g., an implementation could use soft
   state to remove a flow's data after long periods of inactivity).
   Every time a flow's congestion control mechanism would normally
   update its sending rate, the flow instead updates information in
   the FSE and performs a query on the FSE, leading to a sending rate
   that can be different from what the congestion controller
   originally determined.  Using information about/from the currently
   active flows, SBD updates the FSE with the correct Flow Group
   Identifiers (FGIs).

                  -------  <---  Flow 1
                  | FSE |  <---  Flow 2 ..
                  -------  <---  .. Flow N
                     ^
                     |           |
                  -------        |
                  | SBD |  <-----|
                  -------

           Figure 1: Coupled congestion control architecture

   Since everything shown in Figure 1 is assumed to operate on a
   single host (the sender) only, this document only describes aspects
   that have an influence on the resulting on-the-wire behavior.  It
   does, for instance, not define how many bits must be used to
   represent FGIs, or in which way the entities communicate.
   Implementations can take various forms: for instance, all the
   elements in the figure could be implemented within a single
   application, thereby operating on flows generated by that
   application only.  Another alternative could be to implement both
   the FSE and SBD together in a separate process which different
   applications communicate with via some form of Inter-Process
   Communication (IPC).  Such an implementation would extend the scope
   to flows generated by multiple applications.
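   As an illustration only, the elements of Figure 1 could be wired
   together roughly as in the following Python sketch.  The class and
   method names (SBD, FSE, assign_fgi, register) are invented here and
   are not defined by this document; the grouping rule shown is the
   five-tuple/DSCP heuristic of Section 5.1.

```python
# Hypothetical sketch of the elements in Figure 1; all names are
# invented for illustration and not defined by this document.

class SBD:
    """Assigns FGIs using the only grouping rule this document
    specifies: flows with the same five-tuple and DSCP are assumed
    to share a bottleneck and therefore get the same FGI."""

    def __init__(self):
        self._groups = {}  # (five_tuple, dscp) -> FGI

    def assign_fgi(self, five_tuple, dscp):
        key = (five_tuple, dscp)
        if key not in self._groups:
            self._groups[key] = len(self._groups) + 1
        return self._groups[key]

class FSE:
    """Passive storage element: flows and SBD read and write its
    state; it never initiates communication itself."""

    def __init__(self):
        self.flows = {}  # flow number -> [FGI, P, CR, DR]

    def register(self, flow_num, fgi, priority, initial_rate):
        # A new flow's CR and DR both start at its initial rate.
        self.flows[flow_num] = [fgi, priority, initial_rate, initial_rate]
```

   In an IPC-based implementation, calls such as assign_fgi and
   register would cross a process boundary; within a single
   application they are plain function calls.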
   The FSE and SBD could also be included in the Operating System
   kernel.

5.  Roles

   This section gives an overview of the roles of the elements of
   coupled congestion control, and provides an example of how coupled
   congestion control can operate.

5.1.  SBD

   SBD uses knowledge about the flows to determine which flows belong
   in the same Flow Group (FG), and assigns FGIs accordingly.  This
   knowledge can be derived from measurements, by considering
   correlations among measured delay and loss as an indication of a
   shared bottleneck, or it can be based on the simple assumption that
   packets sharing the same five-tuple (IP source and destination
   address, protocol, and transport layer port number pair) and having
   the same Differentiated Services Code Point (DSCP) in the IP header
   are typically treated in the same way along the path.  The latter
   method is the only one specified in this document: SBD MAY consider
   all flows that use the same five-tuple and DSCP to belong to the
   same FG.  This classification applies to certain tunnels, or RTP
   flows that are multiplexed over one transport (cf.
   [transport-multiplex]).  In one way or another, such multiplexing
   will probably be recommended for use with rtcweb
   [rtcweb-rtp-usage].

5.2.  FSE

   The FSE contains a list of all flows that have registered with it.
   For each flow, it stores:

   o  a unique flow number to identify the flow

   o  the FGI of the FG that it belongs to (based on the definitions
      in this document, a flow has only one bottleneck, and can
      therefore be in only one FG)

   o  a priority P, which here is assumed to be represented as a
      floating point number in the range from 0.1 (unimportant) to 1
      (very important).  A negative value is used to indicate that a
      flow has terminated.

   o  the calculated rate CR, i.e. the rate that was most recently
      calculated by the flow's congestion controller.

   o  the desired rate DR.
      This can be smaller than the calculated rate if the application
      feeding into the flow has less data to send than the congestion
      controller would allow.  In the case of a greedy flow, DR must
      be set to the CR received from the flow's congestion module.

   In the FSE, each FG contains one static variable S_CR, which is
   meant to be the sum of the calculated rates of all flows in the
   same FG (including the flow itself).  This value is used to
   calculate the sending rate.  In the algorithm given in the next
   section, it is limited to increase or decrease as conservatively as
   a flow's congestion controller decides, in order to prohibit sudden
   rate jumps.

   The FSE also contains one static variable per FG called TLO (Total
   Leftover Rate -- used to let a flow 'take' bandwidth from
   non-greedy or terminated flows), which is initialized to 0.

   The information listed here is enough to implement the sample flow
   algorithm given below.  FSE implementations could easily be
   extended to store, e.g., a flow's current sending rate for
   statistics gathering or future potential optimizations.

5.3.  Flows

   Flows register themselves with SBD and the FSE when they start,
   deregister from the FSE when they stop, and carry out an UPDATE
   function call every time their congestion controller calculates a
   new sending rate.  Via UPDATE, they provide the newly calculated
   rate and the desired rate (less than the calculated rate in the
   case of non-greedy flows, the same otherwise).  UPDATE returns a
   rate that should be used instead of the rate that the congestion
   controller has determined.

   Below, an example algorithm is described.  While other algorithms
   could be used instead, the same algorithm must be applied to all
   flows.  The way the algorithm is described here, the operations are
   carried out by the flows, but they are the same for all flows.
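   As a non-normative companion to the specification in Section 5.3.1
   below, the registration, stop and UPDATE steps for a single Flow
   Group can be sketched in Python as follows.  The FlowGroup class
   and all method names are invented here; only the arithmetic follows
   the pseudocode of steps (1)-(3).

```python
import math

class FlowGroup:
    """Non-normative sketch of one FG's FSE state (Section 5.2) and
    the example flow algorithm of Section 5.3.1."""

    def __init__(self):
        self.flows = {}   # flow number -> {'P': .., 'CR': .., 'DR': ..}
        self.s_cr = 0.0   # S_CR: sum of calculated rates in this FG
        self.tlo = 0.0    # TLO: Total Leftover Rate

    def register(self, f, priority, initial_rate):
        # Step (1): CR and DR start at the controller's initial rate.
        self.flows[f] = {'P': priority, 'CR': initial_rate,
                         'DR': initial_rate}
        self.s_cr += initial_rate

    def stop(self, f):
        # Step (2): mark the flow as terminated.
        self.flows[f]['DR'] = 0.0
        self.flows[f]['P'] = -1.0

    def update(self, f, new_cr, new_dr=math.inf):
        # Step (3): called whenever f's controller computes new_CR;
        # new_DR is "infinity" for a greedy flow.  Returns Rate.
        rec = self.flows[f]
        # (a) sum of all calculated rates, and this flow's rate change
        new_s_cr = sum(r['CR'] for r in self.flows.values())
        delta = new_cr - rec['CR']
        # (b) conservative update of S_CR, then CR(f) and DR(f)
        rec['CR'] = new_cr
        if delta > 0:
            self.s_cr += delta
        elif delta < 0:
            self.s_cr = new_s_cr + delta
        rec['DR'] = min(new_dr, rec['CR'])
        # (c) drop terminated flows, sum priorities, gather leftover
        for i in [i for i, r in self.flows.items() if r['P'] < 0]:
            del self.flows[i]
        s_p = sum(r['P'] for r in self.flows.values())
        if rec['DR'] < rec['CR']:
            self.tlo += rec['P'] / s_p * self.s_cr - rec['DR']
        # (d) the sending rate to use instead of new_CR
        rate = min(new_dr, rec['P'] * self.s_cr / s_p + self.tlo)
        if rate != new_dr and self.tlo > 0:
            self.tlo = 0.0  # f has 'taken' TLO
        # (e) write Rate back into the flow's record
        if rate > rec['DR']:
            rec['DR'] = rate
        rec['CR'] = rate
        return rate
```

   Running the toy scenario of Section 5.3.2 through this sketch
   reproduces the rates shown there (e.g., flow #1 is assigned 6
   Mbit/s after updating its calculated rate to 8).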
This 295 means that the algorithm could, for example, be implemented in a 296 library that provides registration, deregistration functions and the 297 UPDATE function. To minimize the number of changes to existing 298 applications, one could, however, also embed this functionality in 299 the FSE element. 301 5.3.1. Example algorithm 302 (1) When a flow f starts, it registers itself with SBD and the FSE. 303 CR and DR are initialized with the congestion controller's 304 initial rate. SBD will assign the correct FGI. When a flow is 305 assigned an FGI, it adds its CR to S_CR. 307 (2) When a flow f stops, it sets its DR to 0 and sets P to -1. 309 (3) Every time the congestion controller of the flow f determines a 310 new sending rate new_CR, assuming the flow's new desired rate 311 new_DR to be "infinity" in case of a greedy flow with an unknown 312 maximum rate, the flow calls UPDATE, which carries out the tasks 313 listed below to derive the flow's new sending rate, Rate. A 314 flow's UPDATE function uses a few local (i.e. per-flow) 315 temporary variables, which are all initialized to 0: DELTA, 316 new_S_CR and S_P. 318 (a) For all the flows in its FG (including itself), it 319 calculates the sum of all the calculated rates, new_S_CR. 320 Then it calculates the difference between CR(f) and new_CR, 321 DELTA. 323 for all flows i in FG do 324 new_S_CR = new_S_CR + CR(i) 325 end for 326 DELTA = new_CR - CR(f) 328 (b) It updates S_CR, CR(f) and DR(f). 330 CR(f) = new_CR 331 if DELTA > 0 then // the flow's rate has increased 332 S_CR = S_CR + DELTA 333 else if DELTA < 0 then 334 S_CR = new_S_CR + DELTA 335 end if 336 DR(f) = min(new_DR,CR(f)) 338 (c) It calculates the leftover rate TLO, removes the terminated 339 flows from the FSE and calculates the sum of all the 340 priorities, S_P. 
             for all flows i in FG do
                 if P(i) < 0 then
                     delete flow i
                 else
                     S_P = S_P + P(i)
                 end if
             end for
             if DR(f) < CR(f) then
                 TLO = TLO + (P(f)/S_P * S_CR - DR(f))
             end if

        (d)  It calculates the sending rate, Rate.

             Rate = min(new_DR, (P(f)*S_CR)/S_P + TLO)

             if Rate != new_DR and TLO > 0 then
                 TLO = 0  // f has 'taken' TLO
             end if

        (e)  It updates DR(f) and CR(f) with Rate.

             if Rate > DR(f) then
                 DR(f) = Rate
             end if
             CR(f) = Rate

   The goals of the flow algorithm are to achieve prioritization,
   improve network utilization in the face of non-greedy flows, and
   impose limits on the increase behavior such that the negative
   impact of multiple flows trying to increase their rate together is
   minimized.  It does that by assigning a flow a sending rate that
   may not be what the flow's congestion controller expected.  It
   therefore builds on the assumption that no significant
   inefficiencies arise from temporary non-greedy behavior or from
   quickly jumping to a rate that is higher than the congestion
   controller intended.  How problematic these issues really are
   depends on the controllers in use and requires careful
   per-controller experimentation.  The coupled congestion control
   mechanism described here also does not require all controllers to
   be equal; effects of heterogeneous controllers, or homogeneous
   controllers being in different states, are also subject to
   experimentation.

   This algorithm gives all the leftover rate of non-greedy flows to
   the first flow that updates its sending rate, provided that this
   flow needs it all (otherwise, its own leftover rate can be taken by
   the next flow that updates its rate).  Other policies could be
   applied, e.g. to divide the leftover rate of a flow equally among
   all other flows in the FG.

5.3.2.  Example operation

   In order to illustrate the operation of the coupled congestion
   control algorithm, this section presents a toy example of two flows
   that use it.  Let us assume that both flows traverse a common 10
   Mbit/s bottleneck and use a simplistic congestion controller that
   starts out with 1 Mbit/s, increases its rate by 1 Mbit/s in the
   absence of congestion and decreases it by 2 Mbit/s in the presence
   of congestion.  For simplicity, flows are assumed to always operate
   in a round-robin fashion.  Rate numbers below without units are
   assumed to be in Mbit/s.  For illustration purposes, the actual
   sending rate is also shown for every flow in FSE diagrams even
   though it is not really stored in the FSE.

   Flow #1 begins.  It is greedy and considers itself to have top
   priority.  This is the FSE after step 1 of the flow algorithm:

   --------------------------------------
   | # | FGI |  P  |  CR  |  DR  | Rate |
   |   |     |     |      |      |      |
   | 1 |  1  |  1  |  1   |  1   |  1   |
   --------------------------------------
   S_CR = 1, TLO = 0

   Its congestion controller gradually increases its rate.
   Eventually, at some point, the FSE should look like this:

   --------------------------------------
   | # | FGI |  P  |  CR  |  DR  | Rate |
   |   |     |     |      |      |      |
   | 1 |  1  |  1  |  10  |  10  |  10  |
   --------------------------------------
   S_CR = 10, TLO = 0

   Now another flow joins.  It is also greedy, and has a lower
   priority (0.5):

   ----------------------------------------
   | # | FGI |  P   |  CR  |  DR  | Rate |
   |   |     |      |      |      |      |
   | 1 |  1  |  1   |  10  |  10  |  10  |
   | 2 |  1  |  0.5 |  1   |  1   |  1   |
   ----------------------------------------
   S_CR = 11, TLO = 0

   Now assume that the first flow updates its rate to 8, because the
   total sending rate of 11 exceeds the total capacity.  Let us take a
   closer look at what happens in step 3 of the flow algorithm.

   new_CR = 8.  new_DR = infinity.
   3 a)  new_S_CR = 11; DELTA = 8 - 10 = -2.
   3 b)  CR(f) = 8.  DELTA is negative, hence S_CR = 9; DR(f) = 8.
   3 c)  S_P = 1.5.
   3 d)  new sending rate = min(infinity, 1/1.5 * 9 + 0) = 6.
   3 e)  CR(f) = 6.

   The resulting FSE looks as follows:

   ----------------------------------------
   | # | FGI |  P   |  CR  |  DR  | Rate |
   |   |     |      |      |      |      |
   | 1 |  1  |  1   |  6   |  8   |  6   |
   | 2 |  1  |  0.5 |  1   |  1   |  1   |
   ----------------------------------------
   S_CR = 9, TLO = 0

   The effect is that flow #1 is sending with 6 Mbit/s instead of the
   8 Mbit/s that its congestion controller derived.  Let us now assume
   that flow #2 updates its rate.  Its congestion controller detects
   that the network is not fully saturated (the actual total sending
   rate is 6+1=7) and increases its rate.

   new_CR = 2.  new_DR = infinity.
   3 a)  new_S_CR = 7; DELTA = 2 - 1 = 1.
   3 b)  CR(f) = 2.  DELTA is positive, hence S_CR = 9 + 1 = 10;
         DR(f) = 2.
   3 c)  S_P = 1.5.
   3 d)  new sending rate = min(infinity, 0.5/1.5 * 10 + 0) = 3.33.
   3 e)  DR(f) = CR(f) = 3.33.

   The resulting FSE looks as follows:

   ------------------------------------------
   | # | FGI |  P   |  CR   |  DR   | Rate  |
   |   |     |      |       |       |       |
   | 1 |  1  |  1   |  6    |  8    |  6    |
   | 2 |  1  |  0.5 |  3.33 |  3.33 |  3.33 |
   ------------------------------------------
   S_CR = 10, TLO = 0

   The effect is that flow #2 is now sending with 3.33 Mbit/s, which
   is close to half of the rate of flow #1 and leads to a total
   utilization of 6 (#1) + 3.33 (#2) = 9.33 Mbit/s.  Flow #2's rate
   has increased faster than its congestion controller actually
   expected.  Now, flow #1 updates its rate.  Its congestion
   controller detects that the network is not fully saturated and
   increases its rate.  Additionally, the application feeding into
   flow #1 limits the flow's sending rate to at most 2 Mbit/s.

   new_CR = 7.  new_DR = 2.
   3 a)  new_S_CR = 9.33; DELTA = 1.
   3 b)  CR(f) = 7.  DELTA is positive, hence S_CR = 10 + 1 = 11;
         DR(f) = min(2, 7) = 2.
   3 c)  S_P = 1.5; DR(f) < CR(f), hence TLO = 1/1.5 * 11 - 2 = 5.33.
   3 d)  new sending rate = min(2, 1/1.5 * 11 + 5.33) = 2.
   3 e)  CR(f) = 2.

   The resulting FSE looks as follows:

   ------------------------------------------
   | # | FGI |  P   |  CR   |  DR   | Rate  |
   |   |     |      |       |       |       |
   | 1 |  1  |  1   |  2    |  2    |  2    |
   | 2 |  1  |  0.5 |  3.33 |  3.33 |  3.33 |
   ------------------------------------------
   S_CR = 11, TLO = 5.33

   Now, the total rate of the two flows is 2 + 3.33 = 5.33 Mbit/s,
   i.e. the network is significantly underutilized due to the
   limitation of flow #1.  Flow #2 updates its rate.  Its congestion
   controller detects that the network is not fully saturated and
   increases its rate.

   new_CR = 4.33.  new_DR = infinity.
   3 a)  new_S_CR = 5.33; DELTA = 1.
   3 b)  CR(f) = 4.33.  DELTA is positive, hence S_CR = 12;
         DR(f) = 4.33.
   3 c)  S_P = 1.5.
   3 d)  new sending rate = min(infinity, 0.5/1.5 * 12 + 5.33) = 9.33.
   3 e)  CR(f) = 9.33, DR(f) = 9.33.

   The resulting FSE looks as follows:

   ------------------------------------------
   | # | FGI |  P   |  CR   |  DR   | Rate  |
   |   |     |      |       |       |       |
   | 1 |  1  |  1   |  2    |  2    |  2    |
   | 2 |  1  |  0.5 |  9.33 |  9.33 |  9.33 |
   ------------------------------------------
   S_CR = 12, TLO = 0

   Now, the total rate of the two flows is 2 + 9.33 = 11.33 Mbit/s.
   Finally, flow #1 terminates.  It sets P to -1 and DR to 0.  Let us
   assume that it terminated late enough for flow #2 to still
   experience the network in a congested state, i.e. flow #2 decreases
   its rate in the next iteration.

   new_CR = 7.33.  new_DR = infinity.
   3 a)  new_S_CR = 11.33; DELTA = -2.
   3 b)  CR(f) = 7.33.  DELTA is negative, hence S_CR = 9.33;
         DR(f) = 7.33.
   3 c)  Flow 1 has P = -1, hence it is deleted from the FSE.
         S_P = 0.5.
   3 d)  new sending rate = min(infinity, 0.5/0.5 * 9.33 + 0) = 9.33.
   3 e)  CR(f) = DR(f) = 9.33.

   The resulting FSE looks as follows:

   ------------------------------------------
   | # | FGI |  P   |  CR   |  DR   | Rate  |
   |   |     |      |       |       |       |
   | 2 |  1  |  0.5 |  9.33 |  9.33 |  9.33 |
   ------------------------------------------
   S_CR = 9.33, TLO = 0

6.  Acknowledgements

   This document has benefitted from discussions with and feedback
   from David Hayes, Andreas Petlund, and David Ros (who also gave the
   FSE its name).

   This work was partially funded by the European Community under its
   Seventh Framework Programme through the Reducing Internet Transport
   Latency (RITE) project (ICT-317700).

7.  IANA Considerations

   This memo includes no request to IANA.

8.  Security Considerations

   In scenarios where the architecture described in this document is
   applied across applications, various cheating possibilities arise:
   e.g., reporting wrong values for the calculated rate, the desired
   rate, or the priority of a flow.  In the worst case, such cheating
   could either prevent other flows from sending or make them send at
   a rate that is unreasonably large.  The end result would be unfair
   behavior at the network bottleneck, akin to what could be achieved
   with any UDP based application.  Hence, since this is no worse than
   UDP in general, there seems to be no significant harm in using this
   mechanism even in the absence of UDP rate limiters.

   In the case of a single-user system, it should also be in the
   interest of any application programmer to give the user the best
   possible experience by using reasonable flow priorities or even
   letting the user choose them.  In a multi-user system, this
   incentive may not exist, and one could imagine the worst case of an
   "arms race" situation, where applications end up setting their
   priorities to the maximum value.
   If all applications do this, the end result is a fair allocation in
   which the priority mechanism is implicitly eliminated, and no major
   harm is done.

9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2140]  Touch, J., "TCP Control Block Interdependence",
              RFC 2140, April 1997.

   [RFC3124]  Balakrishnan, H. and S. Seshan, "The Congestion
              Manager", RFC 3124, June 2001.

9.2.  Informative References

   [rtcweb-rtp-usage]
              Perkins, C., Westerlund, M., and J. Ott, "Web Real-Time
              Communication (WebRTC): Media Transport and Use of RTP",
              draft-ietf-rtcweb-rtp-usage-06.txt (work in progress),
              February 2013.

   [rtcweb-usecases]
              Holmberg, C., Hakansson, S., and G. Eriksson, "Web Real-
              Time Communication Use-cases and Requirements",
              draft-ietf-rtcweb-use-cases-and-requirements-10.txt
              (work in progress), December 2012.

   [transport-multiplex]
              Westerlund, M. and C. Perkins, "Multiple RTP Sessions on
              a Single Lower-Layer Transport",
              draft-westerlund-avtcore-transport-multiplexing-05.txt
              (work in progress), February 2013.

Appendix A.  Changes from -00 to -01

   Updated the example algorithm and its operation.

Authors' Addresses

   Michael Welzl
   University of Oslo
   PO Box 1080 Blindern
   Oslo, N-0316
   Norway

   Phone: +47 22 85 24 20
   Email: michawe@ifi.uio.no

   Safiqul Islam
   University of Oslo
   PO Box 1080 Blindern
   Oslo, N-0316
   Norway

   Phone: +47 22 84 08 37
   Email: safiquli@ifi.uio.no

   Stein Gjessing
   University of Oslo
   PO Box 1080 Blindern
   Oslo, N-0316
   Norway

   Phone: +47 22 85 24 44
   Email: steing@ifi.uio.no