Network Working Group                                            X. Zhu
Internet-Draft                                                   S. Mena
Intended status: Informational                             Cisco Systems
Expires: January 4, 2016                                       Z. Sarker
                                                             Ericsson AB
                                                            July 3, 2015

          Modeling Video Traffic Sources for RMCAT Evaluations
                 draft-zhu-rmcat-video-traffic-source-02

Abstract

   This document describes two reference video traffic source models
   for evaluating RMCAT candidate algorithms.  The first model
   statistically characterizes the behavior of a live video encoder in
   response to changing requests on target video rate.  The second
   model is trace-driven, and emulates the encoder output by scaling
   the pre-encoded video frame sizes from a widely used video test
   sequence.  Both models are designed to strike a balance between
   simplicity, repeatability, and authenticity in modeling the
   interactions between a video traffic source and the congestion
   control module.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 4, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Desired Behavior of A Synthetic Video Traffic Model
   4.  Interactions Between Synthetic Video Traffic Source and Other
       Components at the Sender
   5.  A Statistical Reference Model
     5.1.  Time-damped response to target rate update
     5.2.  Temporary burst/oscillation during transient
     5.3.  Output rate fluctuation at steady state
     5.4.  Rate range limit imposed by video content
   6.  A Trace-Driven Model
     6.1.  Choosing the video sequence and generating the traces
     6.2.  Using the traces in the synthetic codec
       6.2.1.  Main algorithm
       6.2.2.  Notes to the main algorithm
     6.3.  Varying frame rate and resolution
   7.  Comparing and Combining The Two Models
   8.  Implementation Status
   9.  IANA Considerations
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   When evaluating candidate congestion control algorithms designed for
   real-time interactive media, it is important to account for the
   characteristics of traffic patterns generated by a live video
   encoder.  Unlike synthetic traffic sources that can conform
   perfectly to the rate change requests from the congestion control
   module, a live video encoder can be sluggish in reacting to such
   changes.  The output rate of a live video encoder also typically
   deviates from the target rate due to uncertainties in the encoder
   rate control process.  Consequently, end-to-end delay and loss
   performance of a real-time media flow can be further impacted by
   rate variations introduced by the live encoder.

   On the other hand, evaluation results of a candidate RMCAT algorithm
   should mostly reflect the performance of the congestion control
   module, and remain somewhat decoupled from the peculiarities of any
   specific video codec.  It is also desirable that evaluation tests be
   repeatable and easily duplicated across different candidate
   algorithms.

   One way to strike a balance between the above considerations is to
   evaluate RMCAT algorithms using a synthetic video traffic source
   model that captures the key characteristics of the behavior of a
   live video encoder.  To this end, this draft presents two reference
   models.  The first is based on statistical modelling; the second is
   trace-driven.  The draft also discusses the pros and cons of each
   approach, as well as the possibility of combining both.
2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

3.  Desired Behavior of A Synthetic Video Traffic Model

   A live video encoder employs encoder rate control to meet a target
   rate by varying its encoding parameters, such as quantization step
   size, frame rate, and picture resolution, based on its estimate of
   the video content (e.g., motion and scene complexity).  In practice,
   however, several factors prevent the output video rate from
   perfectly conforming to the input target rate.

   Due to uncertainties in the captured video scene, the output rate
   typically deviates from the specified target.  In the presence of a
   significant change in target rate, it sometimes takes several frames
   before the encoder output rate converges to the new target.
   Finally, while most of the frames in a live session are encoded in
   predictive mode, the encoder can occasionally generate a large
   intra-coded frame (or a frame partially containing intra-coded
   blocks) in an attempt to recover from losses, to re-sync with the
   receiver, or during the transient period of responding to target
   rate or spatial resolution changes.

   Hence, a synthetic video source should have the following
   capabilities:

   o  To change bitrate.  This includes the ability to change frame
      rate and/or spatial resolution, or to skip frames when required.

   o  To fluctuate around the target bitrate specified by the
      congestion control module.

   o  To delay convergence to the target bitrate.

   o  To generate intra-coded or repair frames on demand.

   While there exist many different approaches to developing a
   synthetic video traffic model, it is desirable that the outcome
   follow a few common characteristics, as outlined below.

   o  Low computational complexity: The model should be computationally
      lightweight; otherwise it defeats the whole purpose of serving as
      a substitute for a live video encoder.

   o  Temporal pattern similarity: The individual traffic trace
      instances generated by the model should mimic the temporal
      pattern of those from a real video encoder.

   o  Statistical resemblance: The synthetic traffic should match the
      output of the real video encoder in terms of statistical
      characteristics, such as the mean, variance, peak, and
      autocorrelation coefficients of the bitrate.  It is also
      important that the statistical resemblance hold across different
      time scales, ranging from tens of milliseconds to sub-seconds.

   o  Wide range of coverage: The model should be easily configurable
      to cover a wide range of codec behaviors (e.g., with either fast
      or slow reaction time in live encoder rate control) and video
      content variations (e.g., ranging from high motion to low
      motion).

   These distinct behavior features can be characterized via simple
   statistical models, or via a trace-driven approach.  We present an
   example of each in Section 5 and Section 6.

4.  Interactions Between Synthetic Video Traffic Source and Other
    Components at the Sender

   Figure 1 depicts the interactions of the synthetic video encoder
   with other components at the sender, such as the application, the
   congestion control module, and the media packet transport module.
   Both reference models, as described later in Section 5 and
   Section 6, follow the same set of interactions.

   The synthetic video encoder takes in raw video frames captured by
   the camera and dynamically generates a sequence of encoded video
   frames with varying size and interval.  These encoded frames are
   processed by other modules in order to transmit the video stream
   over the network.  During the lifetime of a video transmission
   session, the synthetic video encoder will typically be required to
   adapt its encoding bitrate, and sometimes the spatial resolution and
   frame rate.

   In our model, the synthetic video encoder module has a group of
   incoming and outgoing interface calls that allow for interaction
   with other modules.  The following are some of the possible incoming
   interface calls --- marked as (a) in Figure 1 --- that the synthetic
   video encoder may accept.  The list is not exhaustive and can be
   complemented by other interface calls if deemed necessary.

   o  Target rate R_v(t): requested at time t, typically from the
      congestion control module.  Depending on the congestion control
      algorithm in use, the update requests can either be periodic
      (e.g., once per second) or on-demand (e.g., only when a drastic
      bandwidth change over the network is observed).

   o  Target frame rate FPS(t): the instantaneous frame rate measured
      in frames per second at time t.  This depends on the native
      camera capture frame rate as well as the target/preferred frame
      rate configured by the application or user.

   o  Frame resolution XY(t): the 2-dimensional vector indicating the
      preferred frame resolution in pixels at time t.  Several factors
      govern the resolution requested of the synthetic video encoder
      over time.  Examples of such factors are the capturing resolution
      of the native camera, or the current target rate R_v(t), since
      very small resolutions do not make sense with very high bitrates,
      and vice versa.

   o  Instant frame skipping: the request to skip the encoding of one
      or several captured video frames, for instance when a drastic
      decrease in available network bandwidth is detected.

   o  On-demand generation of intra (I) frame: the request to encode
      another I frame to avoid further error propagation at the
      receiver, if severe packet losses are observed.  This request
      typically comes from the error control module.

   An example of an outgoing interface call --- marked as (b) in
   Figure 1 --- is the rate range, that is, the dynamic range of the
   video encoder's output rate for the current video content:
   [R_min, R_max].  Here, R_min and R_max are meant to capture the
   dynamic rate range the encoder is capable of outputting.  This
   typically depends on the video content complexity and/or display
   type (e.g., higher R_max for video content with higher motion
   complexity, or for displays of higher resolution).  Therefore, these
   values will not change with R_v, but may change over time if the
   content is changing.

                           +-------------+
            raw video      |             |  encoded video
            frames         |  Synthetic  |  frames
           --------------> |    Video    | --------------->
                           |   Encoder   |
                           |             |
                           +--------+----+
                               /|\  |
                                |   |
           ---------------------+   +--------------------->
              interface from           interface to
              other modules (a)        other modules (b)

     Figure 1: Interaction between synthetic video encoder, congestion
     control, and packet transport module.
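   As an illustration, the incoming interface calls (a) and the
   outgoing rate-range call (b) above can be summarized by the
   following minimal, non-normative sketch in Python.  All class,
   method, and parameter names are hypothetical and merely mirror the
   list of calls described in this section.

      # Hypothetical sketch of the synthetic encoder interface; the
      # names below are illustrative and not part of this document.

      from dataclasses import dataclass
      from typing import Tuple

      @dataclass
      class EncodedFrame:
          size_bytes: int     # frame size chosen by the traffic model
          is_intra: bool      # True for I frames / repair frames
          timestamp: float    # capture time of the frame, in seconds

      class SyntheticVideoEncoder:
          # --- incoming interface calls, marked (a) in Figure 1 ---
          def set_target_rate(self, r_v_bps: float, t: float) -> None:
              """Target rate R_v(t), typically from congestion control."""

          def set_frame_rate(self, fps: float, t: float) -> None:
              """Target frame rate FPS(t)."""

          def set_resolution(self, width: int, height: int,
                             t: float) -> None:
              """Preferred frame resolution XY(t), in pixels."""

          def skip_frames(self, n_frames: int) -> None:
              """Instant frame skipping request."""

          def request_intra_frame(self) -> None:
              """On-demand I frame, e.g., from the error control module."""

          # --- outgoing interface call, marked (b) in Figure 1 ---
          def rate_range(self) -> Tuple[float, float]:
              """Current (R_min, R_max) for the video content."""
              return (150_000.0, 1_500_000.0)   # example values only

          # --- frame generation driven by the synthetic model ---
          def encode(self, t: float) -> EncodedFrame:
              """Produce the next synthetic frame."""
              raise NotImplementedError

   Either reference model (Section 5 or Section 6) can sit behind such
   an interface; only the logic inside encode() differs.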
5.  A Statistical Reference Model

   In this section, we describe a simple statistical model of the live
   video encoder traffic source.  Figure 2 summarizes the list of
   tunable parameters in this statistical model.  A more comprehensive
   survey of popular methods for modelling video traffic source
   behavior can be found in [Tanwir2013].

   +--------------+---------------------------------+----------------+
   | Notation     | Parameter Name                  | Example Value  |
   +--------------+---------------------------------+----------------+
   | R_v(t)       | Target rate request at time t   | 1 Mbps         |
   | R_o(t)       | Output rate at time t           | 1.2 Mbps       |
   | tau_v        | Encoder reaction latency        | 0.2 s          |
   | K_d          | Burst duration during transient | 5 frames       |
   | K_r          | Burst size during transient     | 5:1            |
   | R_e(t)       | Error in output rate at time t  | 0.2 Mbps       |
   | SIGMA        | Standard deviation of normally  | 0.1            |
   |              | distributed relative rate error |                |
   | DELTA        | Upper and lower bound (+/-) of  | 0.1            |
   |              | uniformly distributed relative  |                |
   |              | rate error                      |                |
   | R_min        | Minimum rate supported by video | 150 Kbps       |
   |              | encoder or content activity     |                |
   | R_max        | Maximum rate supported by video | 1.5 Mbps       |
   |              | encoder or content activity     |                |
   +--------------+---------------------------------+----------------+

     Figure 2: List of tunable parameters in a statistical video
     traffic source model.

5.1.  Time-damped response to target rate update

   While the congestion control module can update its target rate
   request R_v(t) at any time, our model dictates that the encoder will
   only react to such changes after tau_v seconds have elapsed since
   the previous rate transition.  In other words, when the encoder has
   reacted to a rate change request at time t, it will simply ignore
   all subsequent rate change requests until time t+tau_v.

5.2.  Temporary burst/oscillation during transient

   The output rate R_o during the period [t, t+tau_v] is considered to
   be in transient.  Based on observations of video encoder output
   data, we model the transient behavior of an encoder reacting to a
   new target rate request in the form of widely varying output frame
   sizes.  It is assumed that the overall average output rate R_o
   during this period matches the target rate R_v.  Consequently, the
   occasional burst of large frames is followed by smaller-than-average
   encoded frames.

   This temporary burst is characterized by two parameters:

   o  burst duration K_d: number of frames in the burst event; and

   o  burst size K_r: ratio between the size of a burst frame and the
      average frame size at steady state.

   It can be noted that these burst parameters can also be used to
   mimic the insertion of a large on-demand I frame in the presence of
   severe packet losses.  The values of K_d and K_r are fitted to
   reflect the typical ratio between I and P frames for a given video
   content.

5.3.  Output rate fluctuation at steady state

   We model the output rate R_o as randomly fluctuating around the
   target rate R_v after convergence.  There are two variants for
   modeling the random fluctuation R_e = R_o - R_v:

   o  As a normal distribution, with a mean of zero and a standard
      deviation SIGMA specified as a percentage of the target rate.  A
      typical value of SIGMA is 10 percent of the target rate.

   o  As a uniform distribution bounded between -DELTA and DELTA.  A
      typical value of DELTA is 10 percent of the target rate (see the
      sketch after this list).
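   Below is a minimal, non-normative sketch (in Python) of how the
   time-damped reaction of Section 5.1 and the steady-state fluctuation
   described above could be realized.  The class, method names, and
   default values are illustrative assumptions only; the transient
   burst of Section 5.2 is merely indicated in a comment.

      # Non-normative sketch of the statistical source model; names
      # and default values are illustrative assumptions only.

      import random

      class StatisticalSource:
          def __init__(self, r_min, r_max, tau_v=0.2,
                       dist="normal", sigma=0.1, delta=0.1):
              self.r_min, self.r_max = r_min, r_max  # rate range (Fig. 2)
              self.tau_v = tau_v                     # reaction latency
              self.dist, self.sigma, self.delta = dist, sigma, delta
              self.r_v = r_min                       # current target rate
              self.last_transition = float("-inf")   # time of last change

          def set_target_rate(self, r_v, t):
              # Time-damped response (Section 5.1): requests arriving
              # within tau_v seconds of the previous transition are
              # ignored.
              if t - self.last_transition >= self.tau_v:
                  self.r_v = r_v
                  self.last_transition = t

          def sample_rate_error(self):
              # Steady-state fluctuation R_e = R_o - R_v (Section 5.3),
              # drawn as a fraction of the target rate.
              if self.dist == "normal":
                  rel_err = random.gauss(0.0, self.sigma)
              else:
                  rel_err = random.uniform(-self.delta, self.delta)
              return rel_err * self.r_v

          def output_rate(self, t):
              # Steady-state output rate, clipped to [R_min, R_max] as
              # described in Section 5.4.  The transient burst of
              # Section 5.2 (K_d frames scaled by roughly K_r) is
              # omitted here for brevity.
              r_o = self.r_v + self.sample_rate_error()
              return min(max(r_o, self.r_min), self.r_max)

   With a fixed frame rate FPS, each synthetic frame would then be
   assigned roughly output_rate(t) / (8 * FPS) bytes.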
   The distribution type (normal or uniform) and the model parameters
   (SIGMA or DELTA) can be learned from data samples gathered from the
   output of a live encoder.

5.4.  Rate range limit imposed by video content

   The output rate R_o is further clipped within the dynamic range
   [R_min, R_max], which in reality is dictated by the scene and motion
   complexity of the captured video content.  In our model, these
   parameters are specified by the application.

6.  A Trace-Driven Model

   We now present the second approach to modeling a video traffic
   source.  This approach is based on running an actual live video
   encoder offline on a set of chosen raw video sequences and using the
   encoder's output traces to construct a synthetic live encoder.  With
   this approach, the recorded video traces naturally exhibit temporal
   fluctuations around a given target rate request R_v(t) from the
   congestion control module.

   The following list summarizes the main steps of this approach:

   1)  Choose one or more representative raw video sequences.

   2)  Using an actual live video encoder, encode the sequences at
       various bitrates.  Keep just the sequences of frame sizes for
       each bitrate.

   3)  Construct a data structure that contains the output of the
       previous step.  The data structure should allow for easy bitrate
       lookup.

   4)  Upon a target bitrate request R_v(t) from the controller, look
       up the closest bitrates among those previously stored.  Use the
       frame size sequences stored for those bitrates to approximate
       the frame sizes to output.

   5)  The output of the synthetic encoder contains "encoded" frames
       with random contents but with realistic sizes.

   Section 6.1 explains steps 1), 2), and 3); Section 6.2 elaborates on
   steps 4) and 5).  Finally, Section 6.3 briefly discusses the
   possibility of extending the model to support variable frame rate
   and/or variable frame resolution.

6.1.  Choosing the video sequence and generating the traces

   The first step is a careful choice of a set of video sequences that
   are representative of the use cases we want to model.  Our use case
   here is video conferencing, so we must choose a low-motion sequence
   that resembles a "talking head", for instance a news broadcast or a
   video capture of an actual conference call.

   The length of the chosen video sequence is a tradeoff.  If it is too
   long, it will be difficult to manage the data structures containing
   the traces we will produce in the next steps.  If it is too short,
   there will be an obvious periodic pattern in the output frame sizes,
   leading to biased results when evaluating congestion controller
   performance.  In our experience, a one-minute-long sequence is a
   fair tradeoff.

   Once we have chosen the raw video sequence, denoted S, we use a live
   encoder, e.g., [H264] or [HEVC], to produce a set of encoded
   sequences.  As discussed in Section 3, a live encoder's output
   bitrate can be tuned by varying three input parameters, namely
   quantization step size, frame rate, and picture resolution.  In
   order to simplify the choice of these parameters for a given target
   rate, we assume a fixed frame rate (e.g., 25 fps) and a fixed
   resolution (e.g., 480p).  See Section 6.3 for a discussion on how to
   relax these assumptions.
   Following these simplifications, we run the chosen encoder by
   setting a constant target bitrate at the beginning, and then letting
   the encoder vary the quantization step size internally while
   encoding the input video sequence.  In addition, we assume that the
   first frame is encoded as an I-frame and the rest are P-frames.  We
   further assume that the encoder algorithm does not use knowledge of
   future frames in order to encode a given frame.

   We define R_min and R_max as the minimum and maximum bitrates at
   which the synthetic codec is to operate.  We divide the bitrate
   range between R_min and R_max into n_s + 1 bitrate levels separated
   by steps of length l = (R_max - R_min) / n_s.  We then use the
   following simple algorithm to encode the raw video sequence:

      r = R_min
      while r <= R_max do
         Traces[r] = encode_sequence(S, r, e)
         r = r + l

   where function encode_sequence takes as parameters, respectively, a
   raw video sequence, a constant target rate, and an encoder
   algorithm; it returns a vector with the sizes of frames in the order
   they were encoded.  The output vector is stored in a map structure
   called Traces, whose keys are bitrates and whose values are frame
   size vectors.

   The choice of a value for n_s is important, as it determines the
   number of frame size vectors stored in map Traces.  The minimum
   value one can choose for n_s is 1, and its maximum value depends on
   the amount of memory available for holding map Traces.  A reasonable
   value for n_s is one that makes the step length l equal to 200 kbps.
   We will further discuss the step length l in the next section.

6.2.  Using the traces in the synthetic codec

   The main idea behind the trace-based synthetic codec is that it
   mimics the rate adaptation of a real live codec when the congestion
   controller updates the target rate R_v(t).  It does so by switching
   to a different frame size vector stored in the map Traces when
   needed.

6.2.1.  Main algorithm

   We maintain two variables, r_current and t_current:

   *  r_current points to one of the keys of the map Traces.  Upon a
      change in the value of R_v(t), typically because the congestion
      controller detects that the network conditions have changed,
      r_current is updated to the greatest key in Traces that is less
      than or equal to the new value of R_v(t).  For the moment, we
      assume the value of R_v(t) to be clipped within the range
      [R_min, R_max]:

         r_current = r
            such that
            ( r in keys(Traces) and
              r <= R_v(t) and
              (not (exists) r' in keys(Traces) such that
                 r < r' <= R_v(t)) )

   *  t_current is an index into the frame size vector stored in
      Traces[r_current].  It is updated every time a new frame is due.
      We assume all vectors stored in Traces to have the same size,
      denoted size_traces.  The following equation governs the update
      of t_current:

         if t_current < SkipFrames then
            t_current = t_current + 1
         else
            t_current = ((t_current + 1 - SkipFrames)
                          % (size_traces - SkipFrames)) + SkipFrames

      where operator % denotes modulo, and SkipFrames is a predefined
      constant that denotes the number of frames to be skipped at the
      beginning of the frame size vectors after t_current has wrapped
      around.  The point of the constant SkipFrames is to avoid the
      effect of periodically sending a (big) I-frame followed by
      several smaller-than-normal P-frames.
      We typically set SkipFrames to 20, although it could be set to 0
      if we are interested in studying the effect of sending I-frames
      periodically.

   We initialize r_current to R_min, and t_current to 0.

   When a new frame is due, we need to calculate its size.  There are
   three cases:

   a) R_min <= R_v(t) < R_max: In this case we use linear interpolation
      of the frame sizes appearing in Traces[r_current] and
      Traces[r_current + l].  The interpolation is done as follows:

         size_lo = Traces[r_current][t_current]
         size_hi = Traces[r_current + l][t_current]
         distance_lo = ( R_v(t) - r_current ) / l
         framesize = size_hi * distance_lo
                     + size_lo * (1 - distance_lo)

   b) R_v(t) < R_min: In this case, we scale the trace sequence with
      the lowest bitrate, in the following way:

         factor = R_v(t) / R_min
         framesize = max(1, factor * Traces[R_min][t_current])

   c) R_v(t) >= R_max: We also use scaling for this case.  We use the
      trace sequence with the greatest bitrate:

         factor = R_v(t) / R_max
         framesize = factor * Traces[R_max][t_current]

   In case b), we set the minimum frame size to 1 byte, since the value
   of factor can be arbitrarily close to 0.  A consolidated sketch
   combining these steps is given at the end of Section 6.2.

6.2.2.  Notes to the main algorithm

   *  Reacting to changes in target bitrate.  Similarly to the
      statistical model presented in Section 5, the trace-based
      synthetic codec has a time bound, tau_v, on reacting to target
      bitrate changes.  If the codec has reacted to an update in R_v(t)
      at time t, it will delay any further update of R_v(t) to time
      t + tau_v.  Note that, in any case, the value of tau_v cannot be
      chosen shorter than the time between frames, i.e., the inverse of
      the frame rate.

   *  I-frames on demand.  The synthetic codec could be extended to
      simulate the sending of I-frames on demand, e.g., as a reaction
      to losses.  To implement this extension, the codec's API is
      augmented with a new function to request a new I-frame.  Upon
      calling such a function, t_current is reset to 0.

   *  Variable length l of steps defined between R_min and R_max.  In
      the main algorithm's description, the step length l is fixed.
      However, if the range [R_min, R_max] is very wide, it is also
      possible to define a set of steps with non-constant lengths.  The
      idea behind this modification is that a difference in bitrate
      between 400 kbps and 600 kbps is much more significant than a
      difference between 4400 kbps and 4600 kbps.  For example, one
      could define steps of length 200 kbps under 1 Mbps, then length
      300 kbps between 1 Mbps and 2 Mbps, 400 kbps between 2 Mbps and
      3 Mbps, and so on.
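   To tie the pieces of Sections 6.2.1 and 6.2.2 together, the
   following is a minimal, non-normative sketch of the trace-driven
   codec in Python.  The class and variable names are illustrative; the
   tau_v damping and the on-demand I-frame extension described above
   are omitted for brevity, and a fixed step length l is assumed.

      # Non-normative sketch of the trace-driven synthetic codec; names
      # are illustrative and this is not a reference implementation.

      class TraceDrivenCodec:
          def __init__(self, traces, r_min, r_max, step, skip_frames=20):
              # traces: dict mapping bitrate -> list of frame sizes, as
              # produced by the offline encoding step of Section 6.1.
              # Keys are assumed to be exactly r_min, r_min+step, ...,
              # r_max.
              self.traces = traces
              self.r_min, self.r_max, self.step = r_min, r_max, step
              self.skip_frames = skip_frames
              self.size_traces = len(next(iter(traces.values())))
              self.r_v = r_min        # latest target rate request R_v(t)
              self.r_current = r_min  # greatest key in traces <= R_v(t)
              self.t_current = 0      # index into the frame size vector

          def set_target_rate(self, r_v):
              # Update r_current to the greatest stored bitrate that is
              # less than or equal to R_v(t) clipped to [R_min, R_max].
              self.r_v = r_v
              clipped = min(max(r_v, self.r_min), self.r_max)
              self.r_current = max(k for k in self.traces if k <= clipped)

          def _advance_index(self):
              # Wrap around while skipping the first skip_frames entries,
              # so the initial (big) I-frame is not replayed periodically.
              if self.t_current < self.skip_frames:
                  self.t_current += 1
              else:
                  self.t_current = ((self.t_current + 1 - self.skip_frames)
                                    % (self.size_traces - self.skip_frames)
                                    ) + self.skip_frames

          def next_frame_size(self):
              t, r = self.t_current, self.r_current
              if self.r_v < self.r_min:                      # case b)
                  factor = self.r_v / self.r_min
                  size = max(1, factor * self.traces[self.r_min][t])
              elif self.r_v >= self.r_max:                   # case c)
                  factor = self.r_v / self.r_max
                  size = factor * self.traces[self.r_max][t]
              else:                                          # case a)
                  size_lo = self.traces[r][t]
                  r_hi = min(r + self.step, self.r_max)
                  size_hi = self.traces[r_hi][t]
                  d = (self.r_v - r) / self.step
                  size = size_hi * d + size_lo * (1 - d)
              self._advance_index()
              return size

   A congestion controller would call set_target_rate() whenever R_v(t)
   changes (subject to the tau_v bound above) and next_frame_size()
   once per frame interval.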
6.3.  Varying frame rate and resolution

   The trace-based synthetic codec model explained in this section is
   relatively simple because we have fixed the frame rate and the frame
   resolution.  The model could be extended to support variable frame
   rate, variable spatial resolution, or both.

   When the encoded picture quality at a given bitrate is low, one can
   potentially decrease the frame rate (if the video sequence is
   currently in low motion) or the spatial resolution in order to
   improve the quality of experience (QoE) of the overall encoded
   video.  On the other hand, if the target bitrate increases to a
   point where there is no longer a perceptible improvement in the
   picture quality of individual frames, then one might afford to
   increase the spatial resolution or the frame rate (useful if the
   video is currently in high motion).

   Many techniques have been proposed to choose, over time, the best
   combination of encoder quantization step size, frame rate, and
   spatial resolution in order to maximize the quality of live video
   codecs [Ozer2011][Hu2010].  Future work may consider extending the
   trace-based codec to accommodate variable frame rate and/or
   resolution.

   From the perspective of congestion control, varying the spatial
   resolution typically requires a new intra-coded frame to be
   generated, thereby incurring a temporary burst in the output traffic
   pattern.  The impact of a frame rate change tends to be more subtle:
   reducing the frame rate from high to low leads to sparsely spaced
   larger encoded packets instead of many densely spaced smaller
   packets.  Such a difference in traffic profiles may still affect the
   performance of congestion control, especially when outgoing packets
   are not paced at the transport module.  We leave the investigation
   of varying frame rate to future work.

7.  Comparing and Combining The Two Models

   It is worth noting that the statistical and trace-based models each
   have their own advantages and drawbacks.  Both models are fairly
   simple to implement.  However, it takes significantly more effort to
   fit the parameters of a statistical model to actual encoder output
   data, whereas the trace-based model does not require such fitting.
   On the other hand, once validated, the statistical model is more
   flexible in mimicking a wide range of encoder/content behaviors by
   simply varying the corresponding parameters in the model.  In
   contrast, a trace-driven model relies, by definition, on additional
   data collection efforts for accommodating new codecs or video
   contents.

   In general, the trace-based model is more realistic for mimicking
   the ongoing, steady-state behavior of a video traffic source,
   whereas the statistical model is more versatile for simulating
   transient events (e.g., when the target rate changes from A to B,
   with temporary bursts during the transition).  Therefore, it may be
   desirable to combine both approaches into a hybrid model, using
   traces for steady state and the statistical model for transients.

8.  Implementation Status

   The statistical model has been implemented as a traffic generator
   module within the [ns-2] network simulation platform.  The trace-
   driven model has been implemented as a stand-alone traffic source
   module which can easily be integrated into the [ns-3] network
   simulation platform.

   The authors of this draft are currently in the process of providing
   open source access to both implementations.

9.  IANA Considerations

   There are no IANA impacts in this memo.

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [H264]     ITU-T Recommendation H.264, "Advanced video coding for
              generic audiovisual services".

   [HEVC]     ITU-T Recommendation H.265, "High efficiency video
              coding".

10.2.  Informative References

   [Hu2010]   Hu, H., Ma, Z., and Y. Wang, "Optimization of Spatial,
              Temporal and Amplitude Resolution for Rate-Constrained
              Video Coding and Scalable Video Adaptation", in Proc.
              19th IEEE International Conference on Image Processing
              (ICIP'12), September 2012.

   [Ozer2011] Ozer, J., "Video Compression for Flash, Apple Devices and
              HTML5", ISBN-13: 978-0976259503, 2011.
   [Tanwir2013]
              Tanwir, S. and H. Perros, "A Survey of VBR Video Traffic
              Models", IEEE Communications Surveys and Tutorials,
              vol. 15, no. 5, pp. 1778-1802, October 2013.

   [ns-2]     "The Network Simulator - ns-2".

   [ns-3]     "The Network Simulator - ns-3".

Authors' Addresses

   Xiaoqing Zhu
   Cisco Systems
   12515 Research Blvd., Building 4
   Austin, TX  78759
   USA

   Email: xiaoqzhu@cisco.com

   Sergio Mena de la Cruz
   Cisco Systems
   EPFL, Quartier de l'Innovation, Batiment E
   Ecublens, Vaud  1015
   Switzerland

   Email: semena@cisco.com

   Zaheduzzaman Sarker
   Ericsson AB
   Luleae, SE 977 53
   Sweden

   Phone: +46 10 717 37 43
   Email: zaheduzzaman.sarker@ericsson.com