Internet Draft                                             C. Burmeister
draft-burmeister-avt-rtcp-feedback-sim-02.txt               R. Hakenberg
Expires: December 2003                                       A. Miyazaki
                                                              Matsushita

                                                                  J. Ott
                                                University of Bremen TZI

                                                                 N. Sato
                                                             S. Fukunaga
                                                                     Oki

                                                               June 2003

              Extended RTP Profile for RTCP-based Feedback
               - Results of the Timing Rule Simulations -

Status of this Memo

   This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC 2026.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Copyright Notice

   Copyright (C) The Internet Society (2003).  All Rights Reserved.

Abstract

   This document describes the results achieved when simulating the timing rules of the Extended RTP Profile for RTCP-based Feedback, denoted AVPF.  Unicast and multicast topologies are considered, as well as several protocol and environment configurations.  The results show that the timing rules lead to better performance with regard to feedback delay while still preserving the well-accepted RTP rules for the bit rate allowed for control traffic.

Table of Contents

   1     Introduction
   2     Conventions used in this document
   3     Timing rules of the extended RTP profile for RTCP-based feedback
   4     Simulation Environment
   5     RTCP Bit Rate Measurements
   5.1   Unicast
   5.2   Multicast
   5.3   Summary of the RTCP bit rate measurements
   6     Feedback Measurements
   6.1   Unicast
   6.2   Multicast
   7     Investigations on "l"
   7.1   Feedback Suppression Performance
   7.2   Loss Report Delay
   7.3   Summary of "l" investigations
   8     Applications Using AVPF
   8.1   NEWPRED Implementation in NS2
   8.2   Simulation
   8.3   Summary of Application Simulations
   9     Summary
         References
         IPR Notices
         Authors' Addresses
         Full Copyright Statement

1 Introduction

   The Real-time Transport Protocol (RTP) is widely used for the transmission of real-time or near real-time media data over the Internet.  While it was originally designed to work well for multicast groups at very large scale, its scope is not limited to that.  More and more applications use RTP for small multicast groups (e.g. video conferences) or even unicast (e.g. IP telephony and media streaming applications).

   RTP comes together with its companion protocol, the Real-time Transport Control Protocol (RTCP), which is used to monitor the transmission of the media data and to provide feedback on the reception quality.  Furthermore, it can be used for loose session control.  With the scope of large multicast groups in mind, the rules for when to send feedback were carefully restricted to avoid feedback explosion or feedback-related congestion in the network.  RTP and RTCP have proven to work well in the Internet, especially in large multicast groups, which is shown by their widespread usage today.
   However, applications that transmit media data only to small multicast groups or via unicast may benefit from more frequent feedback.  The source of the packets may be able to react to changes in the reception quality, which may be due to varying network utilization (e.g. congestion) or other changes.  Possible reactions include transmission rate adaptation according to a congestion control algorithm or the invocation of error resilience features for the media stream (e.g. retransmissions, reference picture selection, NEWPRED, etc.).

   As mentioned before, more frequent feedback may be desirable to increase the reception quality, but RTP restricts the use of RTCP feedback.  Hence it was decided to create a new extended RTP profile, which redefines some of the RTCP timing rules but keeps most of the algorithms for RTP and RTCP, which have proven to work well.  The new rules should scale from unicast to multicast, with unicast and small multicast applications gaining the most from them.  A detailed description of the new profile and its timing rules can be found in [1].

   This document investigates the new algorithms by means of simulations.  We show that the new timing rules scale well and behave in a network-friendly manner.  First, the key features of the new RTP profile that are important for our simulations are roughly described in Section 3.  After that, we describe the environment that is used to conduct the simulations in Section 4.  Section 5 presents simulation results that show the backwards compatibility with RTP and that the new profile is network-friendly in terms of the bandwidth used for RTCP traffic.  In Section 6, we show the benefit that applications could get from implementing the new profile.  In Section 7 we investigate the effect of the parameter "l" (used to calculate the T_dither_max value) on the algorithm performance, and finally in Section 8 we show the performance gain we could get for a particular application, namely NEWPRED as defined in [6] and [7].

2 Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

3 Timing rules of the extended RTP profile for RTCP-based feedback

   As said above, RTP restricts the usage of RTCP feedback.  The main restrictions on RTCP are as follows:

   - RTCP messages are sent in compound packets, i.e. every RTCP packet contains at least one sender report (SR) or receiver report (RR) message and a source description (SDES) message.

   - The RTCP compound packets are sent in time intervals (T_rr), which are computed as a function of the average packet size, the number of senders and receivers in the group and the session bandwidth (5% of the session bandwidth is used for RTCP messages; this bandwidth is shared between all session members, where the senders may get a larger share than the receivers).

   - The average minimum interval between two RTCP packets from the same source is 5 seconds.

   We see that these rules prevent feedback explosion and scale well to large multicast groups.  However, they do not allow timely feedback at all.  While the second rule also scales to small groups or unicast (in these cases the interval might be as small as a few milliseconds), the third rule may prevent the receivers from sending timely feedback.

   The timing rules for sending RTCP feedback in the new RTP profile [1] consist of two key components.  First, the minimum interval of 5 seconds is abolished.  Second, receivers get, once during their (now quite small) RTCP interval, the chance to send an RTCP packet "early", i.e. not according to the calculated interval, but virtually immediately.  It is important to note that the RTCP interval calculation is still inherited from the original RTP specification.

   The specification and all the details of the extended timing rules can be found in [1].  We shall not describe the algorithms in detail here, but rather reference them from the original specification where needed.  Therefore we also use the same variable names and abbreviations as in [1].
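   The decision logic that matters for the simulations in this document can be summarized in a few lines.  The following Python sketch is a much-simplified illustration, not the normative algorithm of [1]; the names allow_early, T_dither_max and max_feedback_delay are taken from [1], while the function name and return values are made up for this illustration.

      import random

      def schedule_feedback(now, t_next_rr, allow_early,
                            t_dither_max, max_feedback_delay):
          # Decide how a just-detected loss may be reported.
          # Wait a random dither time so that reports from other
          # receivers get a chance to suppress ours (zero in unicast).
          t_send = now + random.uniform(0.0, t_dither_max)
          if allow_early and t_send <= t_next_rr:
              # One early packet is permitted per RTCP interval.
              return ("early", t_send)
          if t_next_rr - now <= max_feedback_delay:
              # Too late for an early packet, but the next regularly
              # scheduled report is close enough to carry the feedback.
              return ("regular", t_next_rr)
          # The feedback would be too old by the time it could be sent;
          # it is not allowed and is discarded.
          return ("not allowed", None)

   In the simulations described later, T_dither_max is zero for the unicast cases and max_feedback_delay is set to 1 second.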
4 Simulation Environment

   This section describes the simulation testbed that was used for the investigations and its key features.  The extensions to the simulator that were necessary are roughly described in the following sections.

4.1 Network Simulator Version 2

   The simulations were conducted using the network simulator version 2 (ns2).  ns2 is an open source project, written in a combination of the Tool Command Language (TCL) and C++.  The scenarios are set up using TCL scripts.  Using the scripts it is possible to specify the topologies (nodes and links, bandwidths, queue sizes or error rates for links) and the parameters of the "agents", i.e. protocol configurations.  The protocols themselves are implemented in C++ in the agents, which are connected to the nodes.  The documentation for ns2 and the newest version can be found in [4].

4.2 RTP Agent

   We implemented a new agent, based on RTP/RTCP.  RTP packets are sent at a constant packet rate with the correct header sizes.  RTCP packets are sent according to the timing rules of [2], and its algorithms for group membership maintenance are also implemented.  Sender and receiver reports are sent.

   Further, we extended the agent to support the extended profile [1].  The use of the new timing rules can be turned on and off via parameter settings in TCL.

4.3 Scenarios

   The scenarios that are simulated are defined in TCL scripts.  We set up several different topologies, ranging from unicast with two session members to multicast with up to 25 session members.  Depending on the sending rates used and the corresponding link bandwidths, congestion losses may occur.  In some scenarios, bit errors are inserted on certain links.  We simulated groups with RTP/AVP agents, RTP/AVPF agents and mixed groups.

   The feedback messages are generally NACK messages as defined in [1] and are triggered by packet loss.

4.4 Topologies

   Mainly four different topologies are simulated to show the key features of the extended profile.  However, for some specific simulations we used different topologies; this is then indicated in the description of the simulation results.  The four main topologies are named after the number of participating RTP agents, i.e. T-2, T-4, T-8 and T-16, where T-2 is a unicast scenario, T-4 contains four agents, etc.  In T-2, agent A1 is directly connected to agent A2.  In the multicast topologies the agents form a tree rooted at A1: agents A2, A3 and A4 are connected to A1, and in T-8 and T-16 the remaining agents are attached to A2, A3 and A4.

   [Figure 1: Simulated Topologies.  ASCII diagrams of T-2, T-4, T-8 and T-16 as described above.]
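   The RTP agent schedules its regular compound RTCP packets according to the interval calculation of [2].  The sketch below shows the deterministic part of that calculation; it is simplified in that the randomization and the timer reconsideration of [2] are left out, and the function and argument names are ours, not taken from any specification.

      def rtcp_interval(members, senders, session_bw_bps,
                        avg_rtcp_size_bytes, we_sent,
                        avpf=False, initial=False):
          # 5% of the session bandwidth is available for RTCP.
          rtcp_bw = 0.05 * session_bw_bps / 8.0      # bytes per second
          # AVP enforces a 5 second average minimum interval
          # (2.5 s before the first packet); AVPF abolishes it.
          min_interval = 0.0 if avpf else (2.5 if initial else 5.0)
          if senders <= 0.25 * members:
              # Senders share one quarter of the RTCP bandwidth,
              # receivers share the rest.
              if we_sent:
                  n, bw = senders, 0.25 * rtcp_bw
              else:
                  n, bw = members - senders, 0.75 * rtcp_bw
          else:
              n, bw = members, rtcp_bw
          return max(n * avg_rtcp_size_bytes / bw, min_interval)

   For a two-member unicast session this interval shrinks with increasing session bandwidth, which is consistent with the observation in Section 3 that it can become as small as a few milliseconds for high bit rate sessions.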
5 RTCP Bit Rate Measurements

   The new timing rules allow more frequent RTCP feedback for small multicast groups.  In large groups the algorithm behaves similarly to the normal RTCP timing rules.  While more frequent feedback is generally desirable, it must not increase the bit rate used for RTCP above a fixed limit, i.e. 5% of the RTP session bandwidth, as specified by RTP.  This section shows that the new timing rules keep the RTCP bandwidth usage under the 5% limit for all investigated scenarios, topologies and group sizes.  Furthermore, we show that mixed groups, i.e. some members using AVP and some AVPF, can be allowed and that each session member behaves fairly according to its corresponding specification.  Note that other values for the RTCP bandwidth limit may be specified using the RTCP bandwidth modifiers as in [10].

5.1 Unicast

   First we measured the RTCP bandwidth share in the unicast topology T-2.  Even for a fixed topology and group size, there are several protocol parameters which are varied to simulate a large range of different scenarios.  We varied the configurations of the agents in the sense that an agent may use either AVP or AVPF.  Thereby it is possible that one agent uses AVP and the other AVPF within one RTP session.  This is done to test the backwards compatibility of the AVPF profile.

   First we consider scenarios where no losses occur.  In this case both RTP session members transmit their RTCP compound packets at regular intervals, calculated as T_rr, if they use AVPF, and use a minimum interval of 5 s (on average) if they implement AVP.  No early packets are sent, because there is no need for early feedback.  Still it is important to see that not more than 5% of the session bandwidth is used for RTCP and that AVP and AVPF members can co-exist without interference.  The results can be found in Table 1.

   |         |      |      |      |      | Used RTCP Bit Rate |
   | Session | Send | Rec. | AVP  | AVPF | (% of session bw)  |
   |Bandwidth|Agents|Agents|Agents|Agents|  A1  |  A2  | sum  |
   +---------+------+------+------+------+------+------+------+
   | 2 Mbps  | 1    | 2    |  -   | 1,2  | 2.42 | 2.56 | 4.98 |
   | 2 Mbps  | 1,2  | -    |  -   | 1,2  | 2.49 | 2.49 | 4.98 |
   | 2 Mbps  | 1    | 2    |  1   |  2   | 0.01 | 2.49 | 2.50 |
   | 2 Mbps  | 1,2  | -    |  1   |  2   | 0.01 | 2.48 | 2.49 |
   | 2 Mbps  | 1    | 2    | 1,2  |  -   | 0.01 | 0.01 | 0.02 |
   | 2 Mbps  | 1,2  | -    | 1,2  |  -   | 0.01 | 0.01 | 0.02 |
   |200 kbps | 1    | 2    |  -   | 1,2  | 2.42 | 2.56 | 4.98 |
   |200 kbps | 1,2  | -    |  -   | 1,2  | 2.49 | 2.49 | 4.98 |
   |200 kbps | 1    | 2    |  1   |  2   | 0.06 | 2.49 | 2.55 |
   |200 kbps | 1,2  | -    |  1   |  2   | 0.08 | 2.50 | 2.58 |
   |200 kbps | 1    | 2    | 1,2  |  -   | 0.06 | 0.06 | 0.12 |
   |200 kbps | 1,2  | -    | 1,2  |  -   | 0.08 | 0.08 | 0.16 |
   | 20 kbps | 1    | 2    |  -   | 1,2  | 2.44 | 2.54 | 4.98 |
   | 20 kbps | 1,2  | -    |  -   | 1,2  | 2.50 | 2.51 | 5.01 |
   | 20 kbps | 1    | 2    |  1   |  2   | 0.58 | 2.48 | 3.06 |
   | 20 kbps | 1,2  | -    |  1   |  2   | 0.77 | 2.51 | 3.28 |
   | 20 kbps | 1    | 2    | 1,2  |  -   | 0.58 | 0.61 | 1.19 |
   | 20 kbps | 1,2  | -    | 1,2  |  -   | 0.77 | 0.79 | 1.58 |

   Table 1: Unicast simulations without packet loss.

   We can see that in configurations where both agents use the new timing rules, each of them uses at most about 2.5% of the session bandwidth for RTCP, which sums up to 5% of the session bandwidth for both.  This is achieved regardless of whether the agent is a sender or a receiver.  In the cases where agent A1 uses AVP and agent A2 AVPF, the total RTCP bandwidth of the session is decreased.  This is due to the fact that agent A1 can send RTCP packets only with an average minimum interval of 5 seconds.  Thus only a small fraction of the session bandwidth is used for its RTCP packets.  For a high bit rate session (session bandwidth = 2 Mbps) the fraction used by the RTCP packets of agent A1 is as small as 0.01%.  For smaller session bandwidths the fraction increases, because the same amount of RTCP data is sent.  The bandwidth share used by the RTCP packets of agent A2 is no different from the case where both agents implement AVPF.  Thus the interaction of AVP and AVPF agents is not problematic in these scenarios at all.
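   The "Used RTCP Bit Rate" columns in Table 1 (and in Table 2 below) are bandwidth fractions measured over the simulation run.  The draft does not spell out the exact post-processing, but one straightforward way to obtain such numbers from a packet trace is sketched below (the helper name and trace format are assumptions made for this illustration).

      def rtcp_share_percent(rtcp_packet_sizes_bytes, duration_s,
                             session_bw_bps):
          # Average RTCP bit rate over the measurement interval,
          # expressed as a percentage of the RTP session bandwidth.
          rtcp_bps = sum(rtcp_packet_sizes_bytes) * 8.0 / duration_s
          return 100.0 * rtcp_bps / session_bw_bps

      # Example: 600 compound packets of 100 bytes within 120 s of a
      # 200 kbps session give 100 * (600*100*8/120) / 200000 = 2.0 (%).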
   In our second unicast experiment, we show that the allowed RTCP bandwidth share is not exceeded even if packet loss occurs.  We simulated a constant byte error rate (BYER) on the link.  The byte errors are inserted randomly according to a uniform distribution.  Packets with byte errors are discarded on the link; hence the receiving agents will not see the loss immediately.  The agents detect packet loss by a gap in the sequence numbers.

   When an AVPF agent detects a packet loss, the early feedback procedure is started.  As described in AVPF [1], in unicast T_dither_max is always zero, hence an early packet can be sent immediately if allow_early is true.  If the last packet was already an early one (i.e. allow_early = false), the feedback might be appended to the next regularly scheduled receiver report.  The max_feedback_delay parameter (which we set to 1 second in our simulations) determines whether that is allowed.

   The results are shown in Table 2, where we can see that there is no difference in the RTCP bandwidth share whether losses occur or not.  This is what we expected, because even though the RTCP packet size grows and early packets are sent, the interval between the packets increases and thus the RTCP bandwidth stays the same.  Only the RTCP bandwidth of the agents that use AVP increases slightly.  This is because the interval between their packets is still 5 seconds (on average), but the packet size increases because of the feedback that is appended.

   |         |      |      |      |      | Used RTCP Bit Rate |
   | Session | Send | Rec. | AVP  | AVPF | (% of session bw)  |
   |Bandwidth|Agents|Agents|Agents|Agents|  A1  |  A2  | sum  |
   +---------+------+------+------+------+------+------+------+
   | 2 Mbps  | 1    | 2    |  -   | 1,2  | 2.42 | 2.56 | 4.98 |
   | 2 Mbps  | 1,2  | -    |  -   | 1,2  | 2.49 | 2.49 | 4.98 |
   | 2 Mbps  | 1    | 2    |  1   |  2   | 0.01 | 2.49 | 2.50 |
   | 2 Mbps  | 1,2  | -    |  1   |  2   | 0.01 | 2.48 | 2.49 |
   | 2 Mbps  | 1    | 2    | 1,2  |  -   | 0.01 | 0.02 | 0.03 |
   | 2 Mbps  | 1,2  | -    | 1,2  |  -   | 0.01 | 0.01 | 0.02 |
   |200 kbps | 1    | 2    |  -   | 1,2  | 2.42 | 2.56 | 4.98 |
   |200 kbps | 1,2  | -    |  -   | 1,2  | 2.50 | 2.49 | 4.99 |
   |200 kbps | 1    | 2    |  1   |  2   | 0.06 | 2.50 | 2.56 |
   |200 kbps | 1,2  | -    |  1   |  2   | 0.08 | 2.49 | 2.57 |
   |200 kbps | 1    | 2    | 1,2  |  -   | 0.06 | 0.07 | 0.13 |
   |200 kbps | 1,2  | -    | 1,2  |  -   | 0.09 | 0.08 | 0.17 |
   | 20 kbps | 1    | 2    |  -   | 1,2  | 2.42 | 2.57 | 4.99 |
   | 20 kbps | 1,2  | -    |  -   | 1,2  | 2.52 | 2.51 | 5.03 |
   | 20 kbps | 1    | 2    |  1   |  2   | 0.58 | 2.54 | 3.12 |
   | 20 kbps | 1,2  | -    |  1   |  2   | 0.83 | 2.43 | 3.26 |
   | 20 kbps | 1    | 2    | 1,2  |  -   | 0.58 | 0.73 | 1.31 |
   | 20 kbps | 1,2  | -    | 1,2  |  -   | 0.86 | 0.84 | 1.70 |

   Table 2: Unicast simulations with packet loss.
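   The loss detection mentioned above is purely sequence-number based.  A minimal sketch of such a detector is given below; it is illustrative only (the actual ns2 agent is written in C++ and is not shown in this document), and sequence number wrap-around is ignored for brevity.

      class LossDetector:
          """Report losses as gaps in the received RTP sequence numbers."""

          def __init__(self):
              self.highest_seq = None

          def on_packet(self, seq):
              # Returns the sequence numbers newly detected as lost;
              # each of them triggers the feedback procedure of [1].
              lost = []
              if self.highest_seq is not None and seq > self.highest_seq + 1:
                  lost = list(range(self.highest_seq + 1, seq))
              if self.highest_seq is None or seq > self.highest_seq:
                  self.highest_seq = seq
              return lost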
5.2 Multicast

   Next, we investigated the RTCP bandwidth share in multicast scenarios, i.e. we simulated the topologies T-4, T-8 and T-16 and measured the fraction of the session bandwidth that was used for RTCP packets.  Again we considered different situations and protocol configurations (e.g. with or without bit errors, groups with AVP and/or AVPF agents, etc.).  For reasons of readability, we present only selected results.  For a documentation of all results, see [5].

   The simulations of the different topologies in scenarios where no losses occur (neither through bit errors nor through congestion) show a similar behavior as in the unicast case.  Over all group sizes, the maximum RTCP bit rate share used is 5.06% of the session bandwidth, observed in a simulation of 16 session members in a low bit rate scenario (session bandwidth = 20 kbps) with several senders.  In all other scenarios without losses the RTCP bit rate share used is below that.  Thus, the requirement that not more than 5% of the session bit rate should be used for RTCP is fulfilled with reasonable accuracy.

   Simulations where bit errors are randomly inserted into RTP and RTCP packets and the corrupted packets are discarded give the same results.  The 5% rule is kept (at most 5.07% of the session bandwidth is used for RTCP).

   Finally we conducted simulations where we reduced the link bandwidth and thereby caused congestion related losses.  These simulations differ from the previous bit error simulations in that the losses occur more in bursts and are more correlated, also between different agents.  The correlation and burstiness of the packet loss is due to the queuing discipline in the routers we simulated; we used simple FIFO queues with a drop-tail strategy to handle congestion.  Random Early Detection (RED) queues might enhance the performance, because the burstiness of the packet loss could be reduced; however, this is not the subject of our investigations and is left for future research.  The delay between the agents, which also affects the RTP and RTCP packets, is much more variable because of the added queuing delay.  Still the RTCP bit rate share used does not increase beyond 5.09% of the session bandwidth.  Thus the requirement is fulfilled for these special cases as well.
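   For readers unfamiliar with the queuing discipline mentioned above: a drop-tail FIFO queue simply discards every packet that arrives while the queue is full, which is why losses come in bursts during a congestion period.  The toy model below illustrates this behaviour; it is not part of the simulation code, and the class name and interface are invented for this illustration.

      from collections import deque

      class DropTailQueue:
          """FIFO queue of fixed capacity; arrivals to a full queue are dropped."""

          def __init__(self, capacity):
              self.capacity = capacity
              self.queue = deque()

          def enqueue(self, packet):
              # Returns True if the packet was queued, False if dropped.
              # During an overload burst every arrival sees a full queue,
              # so consecutive packets are lost together.
              if len(self.queue) >= self.capacity:
                  return False
              self.queue.append(packet)
              return True

          def dequeue(self):
              return self.queue.popleft() if self.queue else None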
5.3 Summary of the RTCP bit rate measurements

   We have shown that for unicast and reasonable multicast scenarios, feedback implosion does not happen.  The requirement that at most 5% of the session bandwidth is used for RTCP is fulfilled for all investigated scenarios.

6 Feedback Measurements

   In this section we describe the results of the feedback delay measurements that we conducted in the simulations.  We use two metrics to measure the performance of the algorithms: the "mean waiting time" (MWT) and the number of feedback packets that are sent, suppressed or not allowed.  The waiting time is the time, measured at a certain agent, between the detection of a packet loss event and the time when the corresponding feedback is sent.  Assuming that the value of the feedback decreases with its delay, we think that the mean waiting time is a good metric to measure the performance gain we could get by using AVPF instead of AVP.

   The feedback an RTP/AVPF agent wants to send is either sent or not sent.  If it was not sent, this can be due to feedback suppression, i.e. another receiver already sent the same feedback, or because the feedback was not allowed, i.e. the max_feedback_delay was exceeded.  For every detected loss we traced whether the agent sent the corresponding feedback and, if not, why.  The more feedback is not allowed, the worse the performance of the algorithm.  Together with the waiting times, this gives us a good indication of the overall performance of the scheme.

6.1 Unicast

   In the unicast case, the maximum dithering interval T_dither_max is fixed and set to zero.  This is due to the fact that it does not make sense for a unicast receiver to wait for other receivers, since there are no other receivers that could send the same feedback.  But still feedback can be delayed or might not be permitted to be sent at all.  The regularly scheduled packets are spaced according to T_rr, which in the unicast case depends mainly on the session bandwidth.

   Table 3 shows the mean waiting times (MWT), measured in seconds, for some configurations of the unicast topology T-2.  The number of feedback packets that are sent (sent) or discarded (disc) is also listed.  We do not list suppressed packets, because feedback suppression does not apply in the unicast case.  In the simulations, agent A1 was a sender and agent A2 a pure receiver.

   |         |       |        Feedback Statistics         |
   | Session |       |        AVP        |      AVPF      |
   |Bandwidth|  PLR  | sent |disc|  MWT  | sent |disc|  MWT  |
   +---------+-------+------+----+-------+------+----+-------+
   | 2 Mbps  | 0.001 |  781 |  0 | 2.604 |  756 |  0 | 0.015 |
   | 2 Mbps  | 0.01  | 7480 |  0 | 2.591 | 7548 |  2 | 0.006 |
   | 2 Mbps  | cong. |   25 |  0 | 2.557 | 1741 |  0 | 0.001 |
   | 20 kbps | 0.001 |   79 |  0 | 2.472 |   74 |  2 | 0.034 |
   | 20 kbps | 0.01  |  780 |  0 | 2.605 |  709 | 64 | 0.163 |
   | 20 kbps | cong. |  780 |  0 | 2.590 |  687 | 70 | 0.162 |

   Table 3: Feedback statistics for the unicast simulations.

   From the table we see that the mean waiting time can be decreased dramatically by using AVPF instead of AVP.  While the waiting time for agents using AVP is always around 2.5 seconds (half the average minimum interval), it can be decreased to a few milliseconds for most of the AVPF configurations.

   In the configurations with high session bandwidth, normally all triggered feedback is sent, because more RTCP bandwidth is available.  There are only very few exceptions, which are probably due to more than one packet loss occurring within one RTCP interval, where the report for the first loss was by chance sent quite early.  In this case it is possible that the second feedback is triggered after the early packet was sent, but too long before the next regularly scheduled report to be appended to it, because of the max_feedback_delay limitation.  This is different for the cases with a small session bandwidth, where the RTCP bandwidth share is quite low and T_rr is thus larger.  After an early packet was sent, the time to the next regularly scheduled packet can be very long.  We saw that in some cases this time was larger than max_feedback_delay, and in these cases the feedback is not allowed to be sent at all.

   With a different setting of max_feedback_delay it is possible to have either more feedback that is not allowed and a decreased mean waiting time, or more feedback that is sent but an increased waiting time.  Thus the parameter should be set with care according to the application's needs.
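   The quantities of Table 3 can be derived directly from a per-loss trace.  The following sketch shows one possible aggregation; the trace format (a list of detection/send time pairs, with the send time set to None when the feedback was not allowed) is an assumption made for this illustration.

      def feedback_statistics(records):
          # records: list of (t_detected, t_sent) pairs per detected loss;
          # t_sent is None when the feedback was not allowed.
          waits = [t_sent - t_det for (t_det, t_sent) in records
                   if t_sent is not None]
          discarded = sum(1 for (_, t_sent) in records if t_sent is None)
          mwt = sum(waits) / len(waits) if waits else None
          return {"sent": len(waits), "disc": discarded, "MWT": mwt}

   In the multicast measurements below, a third outcome, suppressed feedback (fbsp), is counted separately in the same way.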
6.2 Multicast

   In this section we describe some measurements of the feedback statistics in the multicast simulations.  We picked out certain characteristic and representative results.  We considered the topology T-16.  Different scenarios and applications are simulated for this topology.  The parameters of the different links are set as follows.  The agents A2, A3 and A4 are connected to the middle node of the multicast tree, i.e. agent A1, via high bandwidth and low delay links.  The other agents are connected to nodes 2, 3 and 4 via links with different characteristics.  The agents connected to node 2 represent mobile users; in certain configurations they suffer from a byte error rate on their access links, and the delays are high.  The agents connected to node 3 have low bandwidth access links, but do not suffer from bit errors.  The remaining agents, connected to node 4, have high bandwidth and low delay links.

6.2.1 Shared Losses vs. Distributed Losses

   In our first investigation, we wanted to see the effect of the loss characteristic on the algorithm's performance.  We investigate the cases where packet loss occurs for several users simultaneously (shared losses) or totally independently (distributed losses).  We first define agent A1 to be the sender.  In the case of shared losses, we inserted a constant byte error rate on one of the middle links, i.e. the link between A1 and A2.  In the case of distributed losses, we inserted the same byte error rate on all links downstream of A2.

   These scenarios are especially interesting because of the feedback suppression algorithm.  When all receivers share the same loss, it is only necessary for one of them to send the loss report.  Hence, if a member receives feedback with the same content as feedback it has scheduled to be sent, it suppresses the scheduled feedback.
   Of course, this suppressed feedback does not contribute to the mean waiting times.  So we expect reduced waiting times for shared losses, because the probability is high that one of the receivers can send the feedback more or less immediately.  The results are shown in the following table.

   |     |           Feedback Statistics                     |
   |     |      Shared Losses      |   Distributed Losses    |
   |Agent|sent|fbsp|disc|sum | MWT |sent|fbsp|disc|sum | MWT |
   +-----+----+----+----+----+-----+----+----+----+----+-----+
   | A2  | 274| 351|  25| 650|0.267|   -|   -|   -|   -|    -|
   | A5  | 231| 408|  11| 650|0.243| 619|   2|  32| 653|0.663|
   | A6  | 234| 407|   9| 650|0.235| 587|   2|  32| 621|0.701|
   | A7  | 223| 414|  13| 650|0.253| 594|   6|  41| 641|0.658|
   | A8  | 188| 443|  19| 650|0.235| 596|   1|  32| 629|0.677|

   Table 4: Feedback statistics for the multicast simulations.

   Table 4 shows the feedback statistics for the simulation of a large group size.  All 16 agents of topology T-16 joined the RTP session.  However, only agent A1 acts as an RTP sender; the other agents are pure receivers.  Only 4 or 5 agents suffer from packet loss, namely A2, A5, A6, A7 and A8 in the case of shared losses and A5, A6, A7 and A8 in the case of distributed losses.  Since the number of session members is the same for both cases, T_rr is also the same on average.  Still, the mean waiting times are reduced by more than 50% in the case of shared losses.  This confirms our assumption that the loss characteristic matters: shared losses enhance the performance of the algorithm.

   The feedback suppression mechanism seems to work quite well.  Even though some feedback is sent by different receivers (1150 loss reports are sent in total for only 650 lost packets, so loss reports are received 1.8 times on average), most of the redundant feedback was suppressed: 2023 loss reports were suppressed out of 3250 individually detected losses, which means that more than 60% of the feedback was actually suppressed.
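   The suppression behaviour observed above can be summarized in a small state model.  The sketch below is an illustration of the mechanism as described in this section, not the normative procedure of [1]; the class and method names are invented for this purpose.

      import random

      class PendingFeedback:
          """A loss report waiting out its dither delay in a multicast group."""

          def __init__(self, loss_id, now, t_dither_max):
              self.loss_id = loss_id
              # Random dither gives other members a chance to report first.
              self.fire_at = now + random.uniform(0.0, t_dither_max)
              self.state = "scheduled"

          def on_foreign_report(self, loss_id, now):
              # An identical report from another member arrived in time:
              # our report is suppressed (counted as 'fbsp' in Table 4).
              if (self.state == "scheduled" and loss_id == self.loss_id
                      and now < self.fire_at):
                  self.state = "suppressed"

          def on_timer(self, now):
              # Nobody reported this loss before the timer fired: send it
              # (counted as 'sent' in Table 4).
              if self.state == "scheduled" and now >= self.fire_at:
                  self.state = "sent"
              return self.state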
7 Investigations on "l"

   In this section we investigate the effect of the parameter "l", which is used in the T_dither_max calculation of RTP/AVPF agents.  We investigate the feedback suppression performance as well as the report delay for three sample scenarios.

   For all receivers the T_dither_max value is calculated as T_dither_max = l * T_rr, with l = 0.5.  The rationale for this is that, in general, if the receiver has no RTT estimation, it does not know how long it should wait for other receivers to send feedback.  The feedback suppression algorithm would certainly fail if the time selected is too short.  However, the waiting time is increased unnecessarily (and thus the value of the feedback is decreased) if the chosen value is too large.  Ideally, the optimum time value could be found for each case, but this is not always feasible.  On the other hand, it is not dangerous if the optimum time is not used: a decreased feedback value and a failure of the feedback suppression mechanism do not hurt the network stability.  We have shown for the cases of distributed losses that the overall bandwidth constraints are kept in any case, and thus we could only lose some performance by choosing the wrong time value.  A good measure for T_dither_max, however, is the RTCP interval T_rr.  This value increases with the number of session members.  Also, we know that we can send feedback at least every T_rr, so increasing T_dither_max beyond T_rr would certainly make no sense.  By choosing T_rr/2 we guarantee that at least sometimes (i.e. when a loss is detected in the first half of the interval between two regularly scheduled RTCP packets) we are allowed to send early packets.  Because of the randomness of T_dither we still have a good chance to send the early packet in time.

   The AVPF profile specifies that the calculation of T_dither_max, as given above, is common to session members having an RTT estimation and to those not having it.  If this were not so, participants using different calculations for T_dither_max might also have very different mean waiting times before sending feedback, which translates into different reporting priorities.  For example, in a scenario where T_rr = 1 s and the RTT = 100 ms, receivers using the RTT estimation would, on average, send more feedback than those not using it.  This might partially cancel out the feedback suppression mechanism and even cause feedback implosion.  Also note that, in the general case where the losses are shared, the feedback suppression mechanism works only if the feedback packets from each receiver have enough time to reach each of the other receivers within the calculated T_dither_max seconds.  Therefore, in scenarios of very high bandwidth (small T_rr) the calculated T_dither_max could be much smaller than the propagation delay between receivers, which would translate into a failure of the feedback suppression mechanism.  In these cases, one solution could be to limit the bandwidth available to receivers (see [10]) such that this does not happen.  Another solution could be to develop a mechanism for feedback suppression based on the RTT estimation between senders.  This will not be discussed here and may be the subject of another document.  Note, however, that a really high bandwidth media stream is not that likely to rely on this kind of error repair in the first place.

   In the following, we define three representative sample scenarios.  We use the topology from the previous section, T-16.  Most of the agents contribute only little to the simulations, because we introduced an error rate only on the link between the sender A1 and the agent A2.

   The first scenario represents cases where losses are shared between two agents, of which one is located upstream on the path between the other agent and the sender.  Agent A2 and agent A5 see the same losses, which are introduced on the link between the sender and agent A2.  Agents A6, A7 and A8 do not join the RTP session.  Of the other agents, only A3 and A9 join.  All agents are pure receivers, except A1, which is the sender.

   The second scenario also represents cases where losses are shared between two agents, but this time the agents are located on different branches of the multicast tree.  The delays to the sender are roughly of the same magnitude.  Agents A5 and A6 share the same losses.  Agents A3 and A9 join the RTP session, but are pure receivers and do not see any losses.
   Finally, in the third scenario, the losses are again shared between two agents, A5 and A6.  The same agents as in the second scenario are active.  However, the delays of the links are different: the delay of the link between agents A2 and A5 is reduced to 20 ms and that between A2 and A6 to 40 ms.

   All agents besides agent A1 are pure RTP receivers.  Thus these agents do not have an RTT estimation to the source.  T_dither_max is calculated with the formula given above, depending only on T_rr and l, which means that all agents should calculate roughly the same T_dither_max.

7.1 Feedback Suppression Performance

   The feedback suppression rate for an agent is defined as the ratio of the number of feedback packets not sent to the total number of feedback packets the agent intended to send (i.e. the sum of sent and not sent).  The reasons for not sending a packet include: the receiver already saw the same loss reported in a receiver report coming from another session member, or the max_feedback_delay (which is application-specific) was exceeded.

   The results for the feedback suppression rate of the agent that is farther away from the sender (Af) are depicted in Table 10.  In general it can be seen that the feedback suppression rate increases with an increasing l.  However, there is a threshold, depending on the environment, beyond which the additional gain is not significant anymore.

   |      |   Feedback Suppression Rate   |
   |  l   | Scen. 1 | Scen. 2 | Scen. 3 |
   +------+---------+---------+---------+
   | 0.10 |  0.671  |  0.051  |  0.089  |
   | 0.25 |  0.582  |  0.060  |  0.210  |
   | 0.50 |  0.524  |  0.114  |  0.361  |
   | 0.75 |  0.523  |  0.180  |  0.370  |
   | 1.00 |  0.523  |  0.204  |  0.369  |
   | 1.25 |  0.506  |  0.187  |  0.372  |
   | 1.50 |  0.536  |  0.213  |  0.414  |
   | 1.75 |  0.526  |  0.215  |  0.424  |
   | 2.00 |  0.535  |  0.216  |  0.400  |
   | 3.00 |  0.522  |  0.220  |  0.405  |
   | 4.00 |  0.522  |  0.220  |  0.405  |

   Table 10: Fraction of feedback that was suppressed at agent Af, out of the total number of feedback messages the agent wanted to send.

   Similar results can be seen in Table 11 for the agent that is nearer to the sender (An).

   |      |   Feedback Suppression Rate   |
   |  l   | Scen. 1 | Scen. 2 | Scen. 3 |
   +------+---------+---------+---------+
   | 0.10 |  0.056  |  0.056  |  0.090  |
   | 0.25 |  0.063  |  0.055  |  0.166  |
   | 0.50 |  0.116  |  0.099  |  0.255  |
   | 0.75 |  0.141  |  0.141  |  0.312  |
   | 1.00 |  0.179  |  0.175  |  0.352  |
   | 1.25 |  0.206  |  0.176  |  0.361  |
   | 1.50 |  0.193  |  0.193  |  0.337  |
   | 1.75 |  0.197  |  0.204  |  0.341  |
   | 2.00 |  0.207  |  0.207  |  0.368  |
   | 3.00 |  0.196  |  0.203  |  0.359  |
   | 4.00 |  0.196  |  0.203  |  0.359  |

   Table 11: Fraction of feedback that was suppressed at agent An, out of the total number of feedback messages the agent wanted to send.

   The rate of feedback suppression failures is depicted in Table 12.  Here, too, the additional performance increase is not significant beyond a certain threshold, and the dependence on the scenario is noticeable as well.

   |      | Feedback Suppr. Failure Rate |
   |  l   | Scen. 1 | Scen. 2 | Scen. 3 |
   +------+---------+---------+---------+
   | 0.10 |  0.273  |  0.893  |  0.822  |
   | 0.25 |  0.355  |  0.885  |  0.624  |
   | 0.50 |  0.364  |  0.787  |  0.385  |
   | 0.75 |  0.334  |  0.679  |  0.318  |
   | 1.00 |  0.298  |  0.621  |  0.279  |
   | 1.25 |  0.289  |  0.637  |  0.267  |
   | 1.50 |  0.274  |  0.595  |  0.249  |
   | 1.75 |  0.274  |  0.580  |  0.235  |
   | 2.00 |  0.258  |  0.577  |  0.233  |
   | 3.00 |  0.282  |  0.577  |  0.236  |
   | 4.00 |  0.282  |  0.577  |  0.236  |

   Table 12: The ratio of feedback suppression failures.

   Summarizing the feedback suppression results, it can be said that in general the feedback suppression performance increases with an increasing l.  However, beyond a certain threshold, which depends on environment parameters such as propagation delays or session bandwidth, the additional increase is not significant anymore.  This threshold is not uniform across all scenarios; a value of l = 0.5 seems to produce reasonable results with acceptable (though not optimal) overhead.
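   The suppression rate used in the tables above is a simple ratio and can also be computed from the counters of Table 4.  A small helper (illustrative only) and a worked example:

      def feedback_suppression_rate(sent, suppressed, discarded):
          # Fraction of the feedback an agent intended to send that was
          # not sent, i.e. (suppressed + not allowed) / intended.
          not_sent = suppressed + discarded
          return not_sent / float(sent + not_sent)

      # Example with the shared-loss numbers of agent A5 in Table 4:
      # sent = 231, fbsp = 408, disc = 11, so the suppression rate is
      # (408 + 11) / 650 = 0.645.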
7.2 Loss Report Delay

   In this section we show the results for the report delay measured during the simulations of the three sample scenarios.  This measurement is a metric for the performance of the algorithms, because the value of the feedback to the sender typically decreases with the delay of its reception.  The loss report delay is measured at the sender as the time between sending a packet and receiving the first corresponding loss report.

   |      |    Mean Loss Report Delay     |
   |  l   | Scen. 1 | Scen. 2 | Scen. 3 |
   +------+---------+---------+---------+
   | 0.10 |  0.124  |  0.282  |  0.210  |
   | 0.25 |  0.168  |  0.266  |  0.234  |
   | 0.50 |  0.243  |  0.264  |  0.284  |
   | 0.75 |  0.285  |  0.286  |  0.325  |
   | 1.00 |  0.329  |  0.305  |  0.350  |
   | 1.25 |  0.351  |  0.329  |  0.370  |
   | 1.50 |  0.361  |  0.363  |  0.388  |
   | 1.75 |  0.360  |  0.387  |  0.392  |
   | 2.00 |  0.367  |  0.412  |  0.400  |
   | 3.00 |  0.368  |  0.507  |  0.398  |
   | 4.00 |  0.368  |  0.568  |  0.398  |

   Table 13: The mean loss report delay, measured at the sender.

   As can be seen from Table 13, the delay generally increases with an increasing value of l.  Also, a similar effect as for the feedback suppression performance is present: beyond a certain threshold, the additional increase in delay is not significant anymore.  The threshold is environment dependent and seems to be related to the threshold beyond which the feedback suppression gain no longer increases.

7.3 Summary of "l" investigations

   We have shown experimentally that the performance of the feedback suppression mechanism increases with an increasing value of l.  The same applies to the report delay, which also increases with an increasing l.  Both reach a threshold beyond which neither the performance nor the delay increases any further.  This threshold depends on the environment.

   So finding an optimum value of l is not possible, because it is always a trade-off between delay and feedback suppression performance.  With l = 0.5 we think that a trade-off was found that is acceptable for typical applications and environments.

8 Applications Using AVPF

   NEWPRED is one of the error resilience tools defined in both the ISO/IEC MPEG-4 Visual standard and ITU-T H.263.  NEWPRED achieves fast error recovery using feedback messages.
   We simulated the behavior of NEWPRED in the network simulator environment described above and measured the waiting time statistics, in order to verify that the extended RTP profile for RTCP-based feedback (AVPF) [1] is appropriate for the NEWPRED feedback messages.  The simulation results, which are presented in the following sections, show that the waiting time is small enough to achieve the expected performance of NEWPRED.

8.1 NEWPRED Implementation in NS2

   The agent that performs the NEWPRED functionality, called the NEWPRED agent, differs from the RTP agent described above.  Some of the added features and functionalities are described in the following points:

   Application Feedback
      The "Application Layer Feedback Messages" format is used to transmit the NEWPRED feedback messages; thereby the NEWPRED functionality is added to the RTP agent.  The NEWPRED agent creates one NACK message for each lost segment of a video frame, and then assembles the multiple NACK messages corresponding to the segments of the same video frame into one Application Layer Feedback Message.  Although NEWPRED has two modes, namely NACK mode and ACK mode [6][7], only NACK mode is used in these simulations.

      The parameters of the NEWPRED agent are as follows:
         f:   frame rate (frames/sec)
         seg: number of segments in one video frame
         bw:  RTP session bandwidth (kbps)

   Generation of NEWPRED's NACK Messages
      The NEWPRED agent generates NACK messages when segments are lost.
      a. The NEWPRED agent generates multiple NACK messages for one video frame when multiple segments are lost.  These are assembled into one FCI message per video frame.  If no segment is lost, no message is generated or sent.
      b. The length of one NACK message is 4 bytes.  Let num be the number of NACK messages in one video frame (1 <= num <= seg).  Thus, 12 + 4*num bytes is the size of the low delay RTCP feedback message.

   Measurements
      We defined two values to be measured:
      - Recovery time
        The recovery time is measured as the time between the detection of a lost segment and the reception of a recovered segment.  We measured this recovery time for each lost segment.
      - Waiting time
        The waiting time is the additional delay due to the feedback limitation of RTP.

      Fig.1 depicts the behavior of a NEWPRED agent when a loss occurs.  The recovery time is approximated as follows:

         (Recovery time) = (Waiting time)
                         + (Transmission time for feedback message)
                         + (Transmission time for media data)

      Therefore, the waiting time is derived as follows:

         (Waiting time) = (Recovery time) - (Round-trip delay), where

         (Round-trip delay) = (Transmission time for feedback message)
                            + (Transmission time for media data)

   [Fig.1: Relation between the measured values at the NEWPRED agent.  The original figure shows the sender's and receiver's timelines, with '%' marking lost video segments, 'a' the waiting time until the NACK is sent, and 'b' the recovery time until the repaired segments arrive.]
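   The feedback message size rule given above is easy to state in code.  The helper names below are invented for this sketch; only the "12 + 4*num" rule and the one-message-per-frame behaviour are taken from the text.

      def newpred_fci_size(num_lost_segments):
          # Size in bytes of the low delay RTCP feedback message that
          # carries the NACKs for one video frame.  No message is
          # generated when nothing was lost.
          if num_lost_segments == 0:
              return 0
          return 12 + 4 * num_lost_segments

      def lost_segments(received, seg):
          # Which of the seg segments of a frame need a NACK, given
          # the set of segment indices that actually arrived.
          return [i for i in range(seg) if i not in received]

      # Example: a frame with seg = 6 of which segments 2 and 5 were
      # lost yields two NACKs and a 12 + 4*2 = 20 byte message.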
8.2 Simulation

   We conducted two simulations (Simulation A and Simulation B).  In Simulation A, packets are dropped with a fixed packet loss rate on a link between the two NEWPRED agents.  In Simulation B, packet loss occurs due to congestion caused by other traffic sources, i.e. FTP sessions.

8.2.1 Simulation A - Constant Packet Loss Rate

   The network topology used for this simulation is shown in Fig.2.

                 Link 1          Link 2          Link 3
      +--------+        +------+        +------+        +--------+
      | Sender |--------|Router|--------|Router|--------|Receiver|
      +--------+        +------+        +------+        +--------+
               10(msec)         x(msec)         10(msec)

   Fig.2: Network topology used for Simulation A.

   Link1 and link3 are error free, and each has a link delay of 10 msec.  Packets may get dropped on link2.  The packet loss rates (Plr) and the link2 delay (D) are as follows:

      D [ms] = {10, 50, 100, 200, 500}
      Plr    = {0.005, 0.01, 0.02, 0.03, 0.05, 0.1, 0.2}

   The session bandwidth, frame rate and number of segments are shown in Table 14.

   +------------+----------+-------------+-----+
   |Parameter ID| bw(kbps) |f (frame/sec)| seg |
   +------------+----------+-------------+-----+
   | 32k-4-3    |    32    |      4      |  3  |
   | 32k-5-3    |    32    |      5      |  3  |
   | 64k-5-3    |    64    |      5      |  3  |
   | 64k-10-3   |    64    |     10      |  3  |
   | 128k-10-6  |   128    |     10      |  6  |
   | 128k-15-6  |   128    |     15      |  6  |
   | 384k-15-6  |   384    |     15      |  6  |
   | 384k-30-6  |   384    |     30      |  6  |
   | 512k-30-6  |   512    |     30      |  6  |
   | 1000k-30-9 |  1000    |     30      |  9  |
   | 2000k-30-9 |  2000    |     30      |  9  |
   +------------+----------+-------------+-----+

   Table 14: Parameter sets of the NEWPRED agents.

   Fig.3 shows the packet loss rate vs. the mean waiting time.  The results are identified by a parameter ID of the form "[session bandwidth]-[frame rate]-[number of segments]-[link2 delay]".  E.g. 384k-15-9-100 denotes a session with 384 kbps session bandwidth, 15 frames per second, 9 segments per frame and 100 msec link delay.

   When the packet loss rate is 5% and the session bandwidth is 32 kbps, the waiting time is around 400 msec, which is just about acceptable for reasonable NEWPRED performance.

   When the packet loss rate is less than 1%, the waiting time is less than 200 msec.  In such a case, NEWPRED tolerates as much as 200 msec of additional link delay.

   When the packet loss rate is less than 5% and the session bandwidth is 64 kbps, the waiting time is also less than 200 msec.

   In the 128 kbps cases, the results show that even at a packet loss rate of 20% the waiting time is around 200 msec.  In the cases with a session bandwidth of 512 kbps or more, there is no significant delay.  This means that the waiting time due to the feedback limitation of RTCP is negligible for the NEWPRED performance.
   +-----------+------------------------------------------------+
   |           |              Packet Loss Rate =                |
   | Bandwidth | 0.005| 0.01 | 0.02 | 0.03 | 0.05 | 0.10 | 0.20 |
   |-----------+------+------+------+------+------+------+------|
   |    32k    | 130- | 200- | 230- | 280- | 350- | 470- | 560- |
   |           |  180 |  250 |  320 |  390 |  430 |  610 |  780 |
   |    64k    |  80- | 100- | 120- | 150- | 180- | 210- | 290- |
   |           |  130 |  150 |  180 |  190 |  210 |  300 |  400 |
   |   128k    |  60- |  70- |  90- | 110- | 130- | 170- | 190- |
   |           |   70 |   80 |  100 |  120 |  140 |  190 |  240 |
   |   384k    |  30- |  30- |  30- |  40- |  50- |  50- |  50- |
   |           |   50 |   50 |   50 |   50 |   60 |   70 |   90 |
   |   512k    | < 50 | < 50 | < 50 | < 50 | < 50 | < 50 | < 60 |
   |  1000k    | < 50 | < 50 | < 50 | < 50 | < 50 | < 50 | < 55 |
   |  2000k    | < 30 | < 30 | < 30 | < 30 | < 30 | < 35 | < 35 |
   +-----------+------+------+------+------+------+------+------+

   Fig.3: The results of Simulation A (mean waiting time ranges in msec).

8.2.2 Simulation B - Packet Loss due to Congestion

   The configurations of link1, link2 and link3 are the same as in Simulation A, except that link2 is now also error-free with respect to bit errors.  Instead, a number of FTP agents are deployed to overload link2.  See Fig.4 for the simulation topology.

   [Fig.4: Network topology of Simulation B - the same sender-router-router-receiver chain as in Fig.2, with a group of FTP agents attached to each of the two routers so that their traffic overloads link2.]

   The parameters are defined as for Simulation A, with the following values assigned:

      D [ms] = {10, 50, 100, 200, 500}
      32 FTP agents are deployed at each edge, for a total of 64 active FTP agents.
      The sets of session bandwidth, frame rate and number of segments are the same as in Simulation A (Table 14).

   We provide the results for the cases with 64 FTP agents, because these are the cases in which the packet losses were observed to be stable.  The results are similar to those of Simulation A, except for a constant additional offset of 50 to 100 ms.  This is due to the queuing delay incurred in the routers' buffers.

8.3 Summary of Application Simulations

   We have shown that the limitations of the RTP AVPF profile do not delay the feedback messages so much that the performance of NEWPRED is degraded, for sessions from 32 kbps to 2 Mbps.  We could see that the waiting time increases with a decreasing session bandwidth and/or an increasing packet loss rate.  The cause of the packet loss is not significant; congestion and constant packet loss rates behave similarly.  Still, we see that for reasonable conditions and parameters AVPF is well suited to support the feedback needed for NEWPRED.
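   The waiting times above follow directly from the relation given in Section 8.1.  A small helper (illustrative; it ignores serialization and queuing delays) and a worked example for the topology of Fig.2:

      def round_trip_delay_ms(link2_delay_ms):
          # Feedback and the repaired media data each traverse link1
          # (10 ms), link2 (x ms) and link3 (10 ms) once.
          return 2 * (10 + link2_delay_ms + 10)

      def waiting_time_ms(recovery_time_ms, link2_delay_ms):
          # (Waiting time) = (Recovery time) - (Round-trip delay)
          return recovery_time_ms - round_trip_delay_ms(link2_delay_ms)

      # Example: with a link2 delay of x = 100 ms the round-trip delay
      # is 2 * (10 + 100 + 10) = 240 ms; a measured recovery time of
      # 440 ms then corresponds to a waiting time of about 200 ms.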
9 Summary

   The new RTP profile AVPF was investigated regarding its performance and potential risks to the network stability.  Simulations were conducted using the network simulator, covering unicast and several differently sized multicast topologies.  The results were presented in this document.

   Regarding the network stability, it was important to show that the new profile does not lead to any feedback implosion, nor uses more bandwidth than is allowed.  Thus we measured the bandwidth that was used for RTCP in relation to the RTP session bandwidth.  We have shown that, more or less exactly, 5% of the session bandwidth is used for RTCP in all considered scenarios.  Other RTCP bandwidth values could be set using the RTCP bandwidth modifiers [10].  The scenarios included unicast with and without errors, and differently sized multicast groups with and without errors or congestion on the links.  Thus we can say that the new profile behaves in a network-friendly manner in the sense that it uses only the RTCP bandwidth allowed by RTP.

   Secondly, we have shown that receivers using the new profile experience a performance gain.  This was measured by capturing the delay that the sender sees for the received feedback.  Using the new profile, this delay can be decreased by orders of magnitude.

   Thirdly, we investigated the effect of the parameter "l" on the new algorithms.  We have shown that there is no single optimum value for it; only a trade-off can be achieved.  The influence of this parameter is highly environment-specific, and a trade-off between the performance of the feedback suppression algorithm and the experienced delay has to be made.  The recommended value of l = 0.5 given in the draft seems to be reasonable for most applications and environments.

References

   1  J. Ott, S. Wenger, N. Sato, C. Burmeister, and J. Rey, "Extended RTP Profile for RTCP-based Feedback", Internet Draft draft-ietf-avt-rtcp-feedback-05.txt, Work in Progress, February 2003.

   2  H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications", Internet Draft draft-ietf-avt-rtp-new-11.txt, Work in Progress, May 2002.

   3  H. Schulzrinne and S. Casner, "RTP Profile for Audio and Video Conferences with Minimal Control", Internet Draft draft-ietf-avt-profile-new-11.txt, Work in Progress, July 2001.

   4  Network Simulator Version 2 (ns-2), available from http://www.isi.edu/nsnam/ns.

   5  C. Burmeister and T. Klinner, "Low Delay Feedback RTCP - Timing Rules Simulation Results", Technical Report of the Panasonic European Laboratories, September 2001, available from http://www.informatik.uni-bremen.de/~jo/misc/SimulationResults-A.pdf.

   6  ISO/IEC 14496-2:1999/Amd.1:2000, "Information technology - Coding of audio-visual objects - Part 2: Visual", July 2000.

   7  ITU-T Recommendation H.263, "Video coding for low bit rate communication", 1998.

   8  S. Fukunaga, T. Nakai, and H. Inoue, "Error Resilient Video Coding by Dynamic Replacing of Reference Pictures", IEEE Global Telecommunications Conference (GLOBECOM), pp. 1503-1508, 1996.

   9  H. Kimata, Y. Tomita, H. Yamaguchi, S. Ichinose, and T. Ichikawa, "Receiver-Oriented Real-Time Error Resilient Video Communication System: Adaptive Recovery from Error Propagation in Accordance with Memory Size at Receiver", Electronics and Communications in Japan, Part 1, vol. 84, no. 2, pp. 8-17, 2001.

   10 S. Casner, "SDP Bandwidth Modifiers for RTCP Bandwidth", Internet Draft draft-ietf-avt-rtcp-bw-05.txt, Work in Progress, May 2002.
IPR Notices

   The IETF takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights.  Information on the IETF's procedures with respect to rights in standards-track and standards-related documentation can be found in BCP 11.  Copies of claims of rights made available for publication and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification, can be obtained from the IETF Secretariat.

   The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights which may cover technology that may be required to practice this standard.  Please address the information to the IETF Executive Director.

Authors' Addresses

   Carsten Burmeister
   Panasonic European Laboratories GmbH
   Monzastr. 4c, 63225 Langen, Germany
   mailto: burmeister@panasonic.de

   Rolf Hakenberg
   Panasonic European Laboratories GmbH
   Monzastr. 4c, 63225 Langen, Germany
   mailto: hakenberg@panasonic.de

   Akihiro Miyazaki
   Matsushita Electric Industrial Co., Ltd
   1006, Kadoma, Kadoma City, Osaka, Japan
   mailto: akihiro@isl.mei.co.jp

   Joerg Ott
   Universitaet Bremen TZI
   MZH 5180, Bibliothekstr. 1, 28359 Bremen, Germany
   {sip,mailto}: jo@tzi.uni-bremen.de

   Noriyuki Sato
   Oki Electric Industry Co., Ltd.
   1-2-27 Shiromi, Chuo-ku, Osaka 540-6025 Japan
   mailto: sato652@oki.co.jp

   Shigeru Fukunaga
   Oki Electric Industry Co., Ltd.
   1-2-27 Shiromi, Chuo-ku, Osaka 540-6025 Japan
   mailto: fukunaga444@oki.co.jp

Full Copyright Statement

   Copyright (C) The Internet Society (2003).  All Rights Reserved.

   This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works.  However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

   The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.
   This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.