Network Working Group                                             X. Zhu
Internet-Draft                                                   S. Mena
Intended status: Informational                             Cisco Systems
Expires: January 9, 2017                                       Z. Sarker
                                                             Ericsson AB
                                                            July 8, 2016

          Modeling Video Traffic Sources for RMCAT Evaluations
                 draft-ietf-rmcat-video-traffic-model-01

Abstract

   This document describes two reference video traffic source models
   for evaluating RMCAT candidate algorithms.  The first model
   statistically characterizes the behavior of a live video encoder in
   response to changing target rate requests.  The second model is
   trace-driven and emulates the encoder output by scaling the pre-
   encoded video frame sizes from a widely used video test sequence.
   Both models are designed to strike a balance between simplicity,
   repeatability, and authenticity in modeling the interactions between
   a video traffic source and the congestion control module.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 9, 2017.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
   2.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .   3
   3.  Desired Behavior of A Synthetic Video Traffic Model . . . . .   3
   4.  Interactions Between Synthetic Video Traffic Source and
       Other Components at the Sender  . . . . . . . . . . . . . . .   4
   5.  A Statistical Reference Model . . . . . . . . . . . . . . . .   6
     5.1.  Time-damped response to target rate update  . . . . . . .   7
     5.2.  Temporary burst/oscillation during transient  . . . . . .   7
     5.3.  Output rate fluctuation at steady state . . . . . . . . .   8
     5.4.  Rate range limit imposed by video content . . . . . . . .   8
   6.  A Trace-Driven Model  . . . . . . . . . . . . . . . . . . . .   8
     6.1.  Choosing the video sequence and generating the traces . .   9
     6.2.  Using the traces in the synthetic codec . . . . . . . . .  10
       6.2.1.  Main algorithm  . . . . . . . . . . . . . . . . . . .  10
       6.2.2.  Notes to the main algorithm . . . . . . . . . . . . .  12
     6.3.  Varying frame rate and resolution . . . . . . . . . . . .  12
   7.  Combining The Two Models  . . . . . . . . . . . . . . . . . .  13
   8.  Implementation Status . . . . . . . . . . . . . . . . . . . .  14
   9.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  14
   10. References  . . . . . . . . . . . . . . . . . . . . . . . . .  15
     10.1.  Normative References . . . . . . . . . . . . . . . . . .  15
     10.2.  Informative References . . . . . . . . . . . . . . . . .  15
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  15

1.  Introduction

   When evaluating candidate congestion control algorithms designed for
   real-time interactive media, it is important to account for the
   characteristics of traffic patterns generated by a live video
   encoder.  Unlike synthetic traffic sources that can conform
   perfectly to the rate change requests from the congestion control
   module, a live video encoder can be sluggish in reacting to such
   changes.  The output rate of a live video encoder also typically
   deviates from the target rate due to uncertainties in the encoder
   rate control process.  Consequently, end-to-end delay and loss
   performance of a real-time media flow can be further impacted by
   rate variations introduced by the live encoder.

   On the other hand, evaluation results of a candidate RMCAT algorithm
   should mostly reflect the performance of the congestion control
   module and be largely decoupled from the peculiarities of any
   specific video codec.  It is also desirable that evaluation tests be
   repeatable and easily duplicated across different candidate
   algorithms.

   One way to strike a balance between the above considerations is to
   evaluate RMCAT algorithms using a synthetic video traffic source
   model that captures the key characteristics of the behavior of a
   live video encoder.  To this end, this draft presents two reference
   models.  The first is based on statistical modelling; the second is
   trace-driven.  The draft also discusses the pros and cons of each
   approach, as well as how the two approaches can be combined.
2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

3.  Desired Behavior of A Synthetic Video Traffic Model

   A live video encoder employs encoder rate control to meet a target
   rate by varying its encoding parameters, such as quantization step
   size, frame rate, and picture resolution, based on its estimate of
   the video content (e.g., motion and scene complexity).  In practice,
   however, several factors prevent the output video rate from
   perfectly conforming to the input target rate.

   Due to uncertainties in the captured video scene, the output rate
   typically deviates from the specified target.  In the presence of a
   significant change in target rate, it sometimes takes several frames
   before the encoder output rate converges to the new target.
   Finally, while most of the frames in a live session are encoded in
   predictive mode, the encoder can occasionally generate a large
   intra-coded frame (or a frame partially containing intra-coded
   blocks) in an attempt to recover from losses, to re-sync with the
   receiver, or during the transient period of responding to target
   rate or spatial resolution changes.

   Hence, a synthetic video source should have the following
   capabilities:

   o  To change bitrate.  This includes the ability to change the
      frame rate and/or spatial resolution, or to skip frames when
      required.

   o  To fluctuate around the target bitrate specified by the
      congestion control module.

   o  To show a delay in convergence to the target bitrate.

   o  To generate intra-coded or repair frames on demand.

   While there exist many different approaches to developing a
   synthetic video traffic model, it is desirable that the outcome
   follows a few common characteristics, as outlined below.

   o  Low computational complexity: The model should be computationally
      lightweight; otherwise, it defeats the whole purpose of serving
      as a substitute for a live video encoder.

   o  Temporal pattern similarity: The individual traffic trace
      instances generated by the model should mimic the temporal
      pattern of those from a real video encoder.

   o  Statistical resemblance: The synthetic traffic should match the
      output of the real video encoder in terms of statistical
      characteristics, such as the mean, variance, peak, and
      autocorrelation coefficients of the bitrate.  It is also
      important that the statistical resemblance should hold across
      different time scales, ranging from tens of milliseconds to sub-
      seconds.

   o  Wide range of coverage: The model should be easily configurable
      to cover a wide range of codec behaviors (e.g., with either fast
      or slow reaction time in live encoder rate control) and video
      content variations (e.g., ranging from high motion to low
      motion).

   These distinct behavior features can be characterized via simple
   statistical models or via a trace-driven approach.  We present an
   example of each in Section 5 and Section 6, respectively.

4.  Interactions Between Synthetic Video Traffic Source and Other
    Components at the Sender

   Figure 1 depicts the interactions of the synthetic video encoder
   with other components at the sender, such as the application, the
   congestion control module, the media packet transport module, etc.
   Both reference models, as described later in Section 5 and
   Section 6, follow the same set of interactions.

   The synthetic video encoder takes in raw video frames captured by
   the camera and dynamically generates a sequence of encoded video
   frames with varying size and interval.  These encoded frames are
   processed by other modules in order to transmit the video stream
   over the network.  During the lifetime of a video transmission
   session, the synthetic video encoder will typically be required to
   adapt its encoding bitrate, and sometimes also the spatial
   resolution and frame rate.

   In our model, the synthetic video encoder module has a group of
   incoming and outgoing interface calls that allow for interaction
   with other modules.  The following are some of the possible incoming
   interface calls --- marked as (a) in Figure 1 --- that the synthetic
   video encoder may accept.  The list is not exhaustive and can be
   complemented by other interface calls if deemed necessary.

   o  Target rate R_v(t): requested at time t, typically from the
      congestion control module.  Depending on the congestion control
      algorithm in use, the update requests can either be periodic
      (e.g., once per second) or on-demand (e.g., only when a drastic
      bandwidth change over the network is observed).

   o  Target frame rate FPS(t): the instantaneous frame rate measured
      in frames per second at time t.  This depends on the native
      camera capture frame rate as well as the target/preferred frame
      rate configured by the application or user.

   o  Frame resolution XY(t): the 2-dimensional vector indicating the
      preferred frame resolution in pixels at time t.  Several factors
      govern the resolution requested to the synthetic video encoder
      over time.  Examples of such factors are the capture resolution
      of the native camera or the current target rate R_v(t), since
      very small resolutions do not make sense with very high bitrates,
      and vice versa.

   o  Instant frame skipping: the request to skip the encoding of one
      or several captured video frames, for instance when a drastic
      decrease in available network bandwidth is detected.

   o  On-demand generation of an intra (I) frame: the request to encode
      another I frame to avoid further error propagation at the
      receiver, if severe packet losses are observed.  This request
      typically comes from the error control module.

   An example of an outgoing interface call --- marked as (b) in
   Figure 1 --- is the rate range, that is, the dynamic range of the
   video encoder's output rate for the current video content:
   [R_min, R_max].  Here, R_min and R_max are meant to capture the
   dynamic rate range the encoder is capable of outputting.  This
   typically depends on the video content complexity and/or display
   type (e.g., a higher R_max for video content with higher motion
   complexity, or for displays of higher resolution).  Therefore, these
   values will not change with R_v, but may change over time if the
   content is changing.

                          +-------------+
           raw video      |             |    encoded video
            frames        |  Synthetic  |        frames
        ----------------> |    Video    | ----------------->
                          |   Encoder   |
                          |             |
                          +--------+----+
                              /|\     |
                               |      |
        -----------------------+      +--------------------->
           interface from                interface to
           other modules (a)             other modules (b)

      Figure 1: Interaction between synthetic video encoder and other
                          modules at the sender
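   As a concrete illustration of the interface calls listed above, the
   following is a minimal sketch of a synthetic encoder interface,
   written here in Python.  The class and method names are hypothetical
   and chosen only for illustration; an actual implementation (e.g.,
   the one in [Syncodecs]) may structure these calls differently.

      # Illustrative sketch only: names and units are assumptions,
      # not part of this specification.
      class SyntheticVideoEncoder:
          def __init__(self, r_min_bps, r_max_bps):
              # Rate range for the current content; reported via the
              # outgoing interface (b), independent of R_v(t).
              self.r_min_bps = r_min_bps
              self.r_max_bps = r_max_bps

          # (a) incoming interface calls
          def set_target_rate(self, r_v_bps, t):       # R_v(t)
              pass

          def set_frame_rate(self, fps, t):            # FPS(t)
              pass

          def set_resolution(self, width, height, t):  # XY(t)
              pass

          def skip_frames(self, n_frames):             # instant frame skipping
              pass

          def request_i_frame(self):                   # on-demand I frame
              pass

          # (b) outgoing interface call
          def rate_range(self):
              return (self.r_min_bps, self.r_max_bps)

          # Invoked for each captured frame; returns the size (in
          # bytes) of the corresponding synthetic encoded frame.
          def next_frame_size(self, t):
              raise NotImplementedError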
5.  A Statistical Reference Model

   In this section, we describe one simple statistical model of the
   live video encoder traffic source.  Figure 2 summarizes the list of
   tunable parameters in this statistical model.  A more comprehensive
   survey of popular methods for modelling video traffic source
   behavior can be found in [Tanwir2013].

   +------------+----------------------------------+----------------+
   |  Notation  | Parameter Name                   | Example Value  |
   +------------+----------------------------------+----------------+
   |   R_v(t)   | Target rate request at time t    | 1 Mbps         |
   |   R_o(t)   | Output rate at time t            | 1.2 Mbps       |
   |   tau_v    | Encoder reaction latency         | 0.2 s          |
   |   K_d      | Burst duration during transient  | 5 frames       |
   |   K_r      | Burst size during transient      | 5:1            |
   |   R_e(t)   | Error in output rate at time t   | 0.2 Mbps       |
   |   SIGMA    | Standard deviation of normally   | 0.1            |
   |            | distributed relative rate error  |                |
   |   DELTA    | Upper and lower bound (+/-) of   | 0.1            |
   |            | uniformly distributed relative   |                |
   |            | rate error                       |                |
   |   R_min    | Minimum rate supported by video  | 150 kbps       |
   |            | encoder or content activity      |                |
   |   R_max    | Maximum rate supported by video  | 1.5 Mbps       |
   |            | encoder or content activity      |                |
   +------------+----------------------------------+----------------+

       Figure 2: List of tunable parameters in a statistical video
                          traffic source model

5.1.  Time-damped response to target rate update

   While the congestion control module can update its target rate
   request R_v(t) at any time, our model dictates that the encoder only
   reacts to such changes tau_v seconds after a previous rate
   transition.  In other words, when the encoder has reacted to a rate
   change request at time t, it will simply ignore all subsequent rate
   change requests until time t+tau_v.

5.2.  Temporary burst/oscillation during transient

   The output rate R_o during the period [t, t+tau_v] is considered to
   be in transient.  Based on observations from video encoder output
   data, we model the transient behavior of an encoder upon reacting to
   a new target rate request in the form of widely varying output frame
   sizes.  It is assumed that the overall average output rate R_o
   during this period matches the target rate R_v.  Consequently, the
   occasional bursts of large frames are followed by smaller-than-
   average encoded frames.

   This temporary burst is characterized by two parameters:

   o  burst duration K_d: number of frames in the burst event; and

   o  burst size K_r: ratio between the size of a burst frame and the
      average frame size at steady state.

   It can be noted that these burst parameters can also be used to
   mimic the insertion of a large on-demand I frame in the presence of
   severe packet losses.  The values of K_d and K_r are fitted to
   reflect the typical ratio between I and P frames for a given video
   content.

5.3.  Output rate fluctuation at steady state

   We model the output rate R_o as randomly fluctuating around the
   target rate R_v after convergence.  There are two variants in
   modeling the random fluctuation R_e = R_o - R_v:

   o  As a normal distribution: with a mean of zero and a standard
      deviation SIGMA specified in terms of percentage of the target
      rate.  A typical value of SIGMA is 10 percent of the target rate.

   o  As a uniform distribution bounded between -DELTA and DELTA.  A
      typical value of DELTA is 10 percent of the target rate.

   The distribution type (normal or uniform) and model parameters
   (SIGMA or DELTA) can be learned from data samples gathered from a
   live encoder's output.

5.4.  Rate range limit imposed by video content

   The output rate R_o is further clipped within the dynamic range
   [R_min, R_max], which in reality is dictated by the scene and motion
   complexity of the captured video content.  In our model, these
   parameters are specified by the application.
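   The following sketch, in Python, illustrates how the behaviors of
   Sections 5.1-5.4 can be combined into a per-frame size generator.
   It is only one possible interpretation of the model: the parameter
   names follow Figure 2, whereas the code structure and the exact
   bookkeeping of the transient burst are implementation choices rather
   than part of this specification.

      import random

      class StatisticalEncoderModel:
          def __init__(self, r_min, r_max, fps=25.0, tau_v=0.2,
                       k_d=5, k_r=5.0, sigma=0.1, use_normal=True):
              self.r_min, self.r_max = r_min, r_max   # rates in bps
              self.fps, self.tau_v = fps, tau_v
              self.k_d, self.k_r = k_d, k_r
              self.sigma = sigma                      # SIGMA (or DELTA)
              self.use_normal = use_normal
              self.r_v = r_min                        # current target rate
              self.last_change = -tau_v               # time of last reaction
              self.burst_left = 0                     # transient frames left

          def set_target_rate(self, r_v, t):
              # Section 5.1: ignore updates within tau_v of the last one.
              if t - self.last_change < self.tau_v:
                  return
              self.last_change = t
              # Section 5.4: clip to the content-imposed rate range.
              self.r_v = min(max(r_v, self.r_min), self.r_max)
              self.burst_left = self.k_d              # Section 5.2

          def next_frame_size(self):
              avg = self.r_v / (8.0 * self.fps)       # average size, bytes
              if self.burst_left == self.k_d:
                  size = self.k_r * avg               # one large burst frame
              elif self.burst_left > 0:
                  # Remaining K_d - 1 frames shrink so that the average
                  # over the K_d transient frames still matches R_v.
                  size = avg * (self.k_d - self.k_r) / (self.k_d - 1)
              else:
                  # Section 5.3: steady-state fluctuation around R_v.
                  err = (random.gauss(0.0, self.sigma) if self.use_normal
                         else random.uniform(-self.sigma, self.sigma))
                  size = avg * (1.0 + err)
              self.burst_left = max(0, self.burst_left - 1)
              return max(1, int(size))                # floor at 1 byte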
6.  A Trace-Driven Model

   We now present the second approach to modeling a video traffic
   source.  This approach is based on running an actual live video
   encoder offline on a set of chosen raw video sequences and using the
   encoder's output traces for constructing a synthetic live encoder.
   With this approach, the recorded video traces naturally exhibit
   temporal fluctuations around a given target rate request R_v(t) from
   the congestion control module.

   The following list summarizes this approach's main steps:

   1)  Choose one or more representative raw video sequences.

   2)  Using an actual live video encoder, encode the sequences at
       various bitrates.  Keep just the sequences of frame sizes for
       each bitrate.

   3)  Construct a data structure that contains the output of the
       previous step.  The data structure should allow for easy bitrate
       lookup.

   4)  Upon a target bitrate request R_v(t) from the controller, look
       up the closest bitrates among those previously stored.  Use the
       frame size sequences stored for those bitrates to approximate
       the frame sizes to output.

   5)  The output of the synthetic encoder contains "encoded" frames
       with zeros as contents but with realistic sizes.

   Section 6.1 explains steps 1), 2), and 3); Section 6.2 elaborates on
   steps 4) and 5).  Finally, Section 6.3 briefly discusses the
   possibility of extending the model to support variable frame rate
   and/or variable frame resolution.

6.1.  Choosing the video sequence and generating the traces

   The first step we need to perform is a careful choice of a set of
   video sequences that are representative of the use cases we want to
   model.  Our use case here is video conferencing, so we must choose a
   low-motion sequence that resembles a "talking head", for instance a
   news broadcast or a video capture of an actual conference call.

   The length of the chosen video sequence is a tradeoff.  If it is too
   long, it will be difficult to manage the data structures containing
   the traces.  If it is too short, there will be an obvious periodic
   pattern in the output frame sizes, leading to biased results when
   evaluating congestion controller performance.  In our experience, a
   one-minute-long sequence is a fair tradeoff.

   Once we have chosen the raw video sequence, denoted S, we use a live
   encoder, e.g., [H264] or [HEVC], to produce a set of encoded
   sequences.  As discussed in Section 3, a live encoder's output
   bitrate can be tuned by varying three input parameters, namely,
   quantization step size, frame rate, and picture resolution.  In
   order to simplify the choice of these parameters for a given target
   rate, we assume a fixed frame rate (e.g., 25 fps) and a fixed
   resolution (e.g., 480p).  See Section 6.3 for a discussion on how to
   relax these assumptions.

   Following these simplifications, we run the chosen encoder by
   setting a constant target bitrate at the beginning, then letting the
   encoder vary the quantization step size internally while encoding
   the input video sequence.  In addition, we assume that the first
   frame is encoded as an I-frame and the rest are P-frames.  We
   further assume that the encoder algorithm does not use knowledge of
   future frames to encode a given frame.

   We define R_min and R_max as the minimum and maximum bitrates at
   which the synthetic codec is to operate.  We divide the bitrate
   range between R_min and R_max into n_s + 1 evenly spaced bitrate
   levels, with step length l = (R_max - R_min) / n_s between
   consecutive levels.  We then use the following simple algorithm to
   encode the raw video sequence:

      r = R_min
      while r <= R_max do
          Traces[r] = encode_sequence(S, r, e)
          r = r + l

   where function encode_sequence takes as parameters, respectively, a
   raw video sequence, a constant target rate, and an encoder
   algorithm; it returns a vector with the sizes of frames in the order
   they were encoded.  The output vector is stored in a map structure
   called Traces, whose keys are bitrates and values are frame size
   vectors.

   The choice of a value for n_s is important, as it determines the
   number of frame size vectors stored in the map Traces.  The minimum
   value one can choose for n_s is 1, and its maximum value depends on
   the amount of memory available for holding the map Traces.  A
   reasonable value for n_s is one that makes the step length l equal
   to 200 kbps.  We will further discuss the step length l in the next
   section.
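   For concreteness, the following Python sketch shows one possible
   shape for this trace-generation step and the resulting Traces map.
   The encode_sequence() helper is hypothetical: it stands for any
   offline invocation of a real encoder (e.g., an H.264 or HEVC
   implementation) that returns the per-frame sizes in bytes; how that
   encoder is driven is outside the scope of this model.  Iterating
   over an integer index, rather than adding l repeatedly, avoids
   floating-point drift at the last level.

      def encode_sequence(raw_sequence, target_rate_bps, encoder):
          # Hypothetical wrapper: run the given encoder offline on
          # raw_sequence at a constant target bitrate and return the
          # list of encoded frame sizes (in bytes), in encoding order.
          raise NotImplementedError

      def generate_traces(raw_sequence, encoder, r_min, r_max, n_s):
          # Build the map from bitrate (key, in bps) to frame size
          # vector (value), at n_s + 1 evenly spaced bitrate levels.
          step = (r_max - r_min) / n_s
          return {int(r_min + i * step):
                      encode_sequence(raw_sequence, r_min + i * step,
                                      encoder)
                  for i in range(n_s + 1)}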
6.2.  Using the traces in the synthetic codec

   The main idea behind the trace-driven synthetic codec is that it
   mimics the rate adaptation of a real live codec when the congestion
   controller updates the target rate R_v(t).  It does so by switching
   to a different frame size vector stored in the map Traces when
   needed.

6.2.1.  Main algorithm

   We maintain two variables, r_current and t_current:

   *  r_current points to one of the keys of the map Traces.  Upon a
      change in the value of R_v(t), typically because the congestion
      controller detects that the network conditions have changed,
      r_current is updated to the greatest key in Traces that is less
      than or equal to the new value of R_v(t).  For the moment, we
      assume the value of R_v(t) to be clipped within the range
      [R_min, R_max]:

         r_current = r
           such that
             ( r in keys(Traces) and
               r <= R_v(t) and
               (not exists r' in keys(Traces)
                such that r < r' <= R_v(t)) )

   *  t_current is an index into the frame size vector stored in
      Traces[r_current].  It is updated every time a new frame is due.
      We assume all vectors stored in Traces to have the same size,
      denoted size_traces.  The following equation governs the update
      of t_current:

         if t_current < SkipFrames then
             t_current = t_current + 1
         else
             t_current = ((t_current + 1 - SkipFrames) %
                          (size_traces - SkipFrames)) + SkipFrames

      where the operator % denotes modulo, and SkipFrames is a
      predefined constant that denotes the number of frames to be
      skipped at the beginning of the frame size vectors after
      t_current has wrapped around.  The point of the constant
      SkipFrames is to avoid the effect of periodically sending a (big)
      I-frame followed by several smaller-than-normal P-frames.  We
      typically set SkipFrames to 20, although it could be set to 0 if
      we are interested in studying the effect of sending I-frames
      periodically.

   We initialize r_current to R_min, and t_current to 0.

   When a new frame is due, we need to calculate its size.  There are
   three cases:

   a)  R_min <= R_v(t) < R_max: In this case we use linear
       interpolation between the frame sizes appearing in
       Traces[r_current] and Traces[r_current + l].  The interpolation
       is done as follows:

          size_lo = Traces[r_current][t_current]
          size_hi = Traces[r_current + l][t_current]
          distance_lo = ( R_v(t) - r_current ) / l
          framesize = size_hi * distance_lo + size_lo * (1 - distance_lo)

   b)  R_v(t) < R_min: In this case, we scale the trace sequence with
       the lowest bitrate, in the following way:

          factor = R_v(t) / R_min
          framesize = max(1, factor * Traces[R_min][t_current])

   c)  R_v(t) >= R_max: We also use scaling for this case, based on the
       trace sequence with the greatest bitrate:

          factor = R_v(t) / R_max
          framesize = factor * Traces[R_max][t_current]

   In case b), we set the minimum frame size to 1 byte, since the value
   of factor can be arbitrarily close to 0.
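   Putting the pieces of the main algorithm together, the following
   Python sketch computes the size of the next synthetic frame from a
   Traces map such as the one built in Section 6.1.  It is illustrative
   only: the class and method names are assumptions, and an actual
   implementation such as [Syncodecs] may differ in detail.  Using the
   next stored key instead of r_current + l for interpolation also
   accommodates the variable step lengths discussed in Section 6.2.2.

      SKIP_FRAMES = 20   # typical value; 0 re-enables periodic I-frames

      class TraceDrivenCodec:
          def __init__(self, traces, r_min, r_max,
                       skip_frames=SKIP_FRAMES):
              self.traces = traces                  # {bitrate: [sizes]}
              self.keys = sorted(traces.keys())     # R_min ... R_max
              self.r_min, self.r_max = r_min, r_max
              self.skip = skip_frames
              self.r_v = r_min                      # last requested rate
              self.r_current = self.keys[0]         # initialized to R_min
              self.t_current = 0

          def set_target_rate(self, r_v):
              # r_current: greatest key <= R_v(t); keep the lowest key
              # when R_v(t) falls below R_min.
              self.r_v = r_v
              eligible = [k for k in self.keys if k <= r_v]
              self.r_current = eligible[-1] if eligible else self.keys[0]

          def _advance(self):
              # t_current update with wraparound that skips the leading
              # I-frame region of the trace.
              size_traces = len(self.traces[self.r_current])
              if self.t_current < self.skip:
                  self.t_current += 1
              else:
                  self.t_current = ((self.t_current + 1 - self.skip) %
                                    (size_traces - self.skip)) + self.skip

          def next_frame_size(self):
              t, r = self.t_current, self.r_current
              if self.r_v < self.r_min:                      # case b)
                  factor = self.r_v / float(self.r_min)
                  size = max(1, factor * self.traces[self.keys[0]][t])
              elif self.r_v >= self.r_max:                   # case c)
                  factor = self.r_v / float(self.r_max)
                  size = factor * self.traces[self.keys[-1]][t]
              else:                                          # case a)
                  hi = self.keys[self.keys.index(r) + 1]
                  size_lo = self.traces[r][t]
                  size_hi = self.traces[hi][t]
                  d_lo = (self.r_v - r) / float(hi - r)
                  size = size_hi * d_lo + size_lo * (1 - d_lo)
              self._advance()
              return int(size)

   For example, a map returned by the hypothetical generate_traces()
   sketch above can be passed directly as the traces argument.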
6.2.2.  Notes to the main algorithm

   *  Reacting to changes in target bitrate.  Similarly to the
      statistical model presented in Section 5, the trace-driven
      synthetic codec can apply a time bound, tau_v, to its reaction to
      target bitrate changes.  If the codec has reacted to an update in
      R_v(t) at time t, it will delay any further update of R_v(t)
      until time t + tau_v.  Note that, in any case, the value of tau_v
      cannot be chosen shorter than the time between frames, i.e., the
      inverse of the frame rate.

   *  I-frames on demand.  The synthetic codec could be extended to
      simulate the sending of I-frames on demand, e.g., as a reaction
      to losses.  To implement this extension, the codec's API is
      augmented with a new function to request a new I-frame.  Upon
      calling such a function, t_current is reset to 0.

   *  Variable length l of the steps defined between R_min and R_max.
      In the main algorithm's description, the step length l is fixed.
      However, if the range [R_min, R_max] is very wide, it is also
      possible to define a set of steps with non-constant lengths.  The
      idea behind this modification is that a bitrate difference
      between 400 kbps and 600 kbps is much more significant than one
      between 4400 kbps and 4600 kbps.  For example, one could define
      steps of length 200 kbps below 1 Mbps, 300 kbps between 1 Mbps
      and 2 Mbps, 400 kbps between 2 Mbps and 3 Mbps, and so on.

6.3.  Varying frame rate and resolution

   The trace-driven synthetic codec model explained in this section is
   relatively simple because we have fixed the frame rate and the frame
   resolution.  The model could be extended to support variable frame
   rate, variable spatial resolution, or both.

   When the encoded picture quality at a given bitrate is low, one can
   potentially decrease the frame rate (if the video sequence is
   currently in low motion) or the spatial resolution in order to
   improve the quality of experience (QoE) of the overall encoded
   video.  On the other hand, if the target bitrate increases to a
   point where there is no longer a perceptible improvement in the
   picture quality of individual frames, then one might afford to
   increase the spatial resolution or the frame rate (useful if the
   video is currently in high motion).
   Many techniques have been proposed to choose over time the best
   combination of encoder quantization step size, frame rate, and
   spatial resolution in order to maximize the quality of live video
   codecs [Ozer2011][Hu2010].  Future work may consider extending the
   trace-driven codec to accommodate variable frame rate and/or
   resolution.

   From the perspective of congestion control, varying the spatial
   resolution typically requires a new intra-coded frame to be
   generated, thereby incurring a temporary burst in the output traffic
   pattern.  The impact of a frame rate change tends to be more subtle:
   reducing the frame rate from high to low leads to sparsely spaced
   larger encoded packets instead of many densely spaced smaller
   packets.  Such a difference in traffic profiles may still affect the
   performance of congestion control, especially when outgoing packets
   are not paced at the transport module.  We leave the investigation
   of varying frame rate to future work.

7.  Combining The Two Models

   It is worth noting that the statistical and trace-driven models each
   have their own advantages and drawbacks.  While both models are
   fairly simple to implement, it takes significantly greater effort to
   fit the parameters of a statistical model to actual encoder output
   data, whereas it is straightforward for a trace-driven model to
   obtain encoded frame size data.  On the other hand, once validated,
   the statistical model is more flexible in mimicking a wide range of
   encoder/content behaviors by simply varying the corresponding
   parameters in the model.  In this regard, a trace-driven model
   relies -- by definition -- on additional data collection efforts for
   accommodating new codecs or video contents.

   In general, the trace-driven model is more realistic for mimicking
   the ongoing, steady-state behavior of a video traffic source,
   whereas the statistical model is more versatile for simulating
   transient events (e.g., when the target rate changes from A to B,
   with temporary bursts during the transition).  It is also possible
   to combine both models into a hybrid approach, using traces during
   steady state and the statistical model during transients.

                                          +---------------+
                        transient         | Generate next |
                             +----------->| K_d transient |
          +-------------+   /             | frames        |
   R_v(t) |   Compare   |  /              +---------------+
   ------>|   against   | /
          |   previous  | \
          | target rate |  \
          +-------------+   \             +---------------+
                             \            | Generate next |
                              +---------->| frame from    |
                        steady-state      | trace         |
                                          +---------------+

          Figure 3: Hybrid approach for modeling video traffic

   As shown in Figure 3, the video traffic model operates in transient
   state if the requested target rate R_v(t) is substantially higher
   than the previous target; otherwise, it operates in steady state.
   During the transient state, a total of K_d frames are generated by
   the statistical model, resulting in one big burst frame (on average
   K_r times larger than the average frame size at the target rate)
   followed by K_d-1 small frames.  When operating in steady state, the
   video traffic model simply generates a frame according to the trace-
   driven model given the target rate.  One example criterion for
   determining whether the traffic model should operate in transient
   state is whether the rate increase exceeds 20% of the previous
   target rate.
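   A minimal sketch of such a hybrid source is shown below in Python.
   It reuses the illustrative StatisticalEncoderModel and
   TraceDrivenCodec classes from the earlier sketches; all names are
   hypothetical and not part of this specification, and the 20%
   threshold is only the example criterion mentioned above.

      class HybridVideoSource:
          def __init__(self, stat_model, trace_codec, threshold=0.20):
              self.stat = stat_model           # used during transients
              self.trace = trace_codec         # used during steady state
              self.threshold = threshold
              self.prev_target = None
              self.transient_left = 0

          def set_target_rate(self, r_v, t):
              # Enter transient state on a substantial rate increase.
              if (self.prev_target is not None and
                      r_v > (1.0 + self.threshold) * self.prev_target):
                  self.stat.set_target_rate(r_v, t)
                  self.transient_left = self.stat.k_d
              self.trace.set_target_rate(r_v)
              self.prev_target = r_v

          def next_frame_size(self):
              if self.transient_left > 0:
                  self.transient_left -= 1
                  return self.stat.next_frame_size()  # K_d burst frames
              return self.trace.next_frame_size()     # steady state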
8.  Implementation Status

   The statistical model has been implemented as a traffic generator
   module within the [ns-2] network simulation platform.

   More recently, both the statistical and trace-driven models have
   been implemented as a stand-alone traffic source module.  This can
   be easily integrated into network simulation platforms such as
   [ns-2] and [ns-3], as well as into testbeds using a real network.
   The stand-alone traffic source module is available as an open source
   implementation at [Syncodecs].

9.  IANA Considerations

   There are no IANA impacts in this memo.

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <http://www.rfc-editor.org/info/rfc2119>.

   [H264]     ITU-T Recommendation H.264, "Advanced video coding for
              generic audiovisual services", 2003.

   [HEVC]     ITU-T Recommendation H.265, "High efficiency video
              coding", 2015.

10.2.  Informative References

   [Hu2010]   Hu, H., Ma, Z., and Y. Wang, "Optimization of Spatial,
              Temporal and Amplitude Resolution for Rate-Constrained
              Video Coding and Scalable Video Adaptation", in Proc.
              19th IEEE International Conference on Image Processing
              (ICIP'12), September 2012.

   [Ozer2011] Ozer, J., "Video Compression for Flash, Apple Devices and
              HTML5", ISBN 13:978-0976259503, 2011.

   [Tanwir2013]
              Tanwir, S. and H. Perros, "A Survey of VBR Video Traffic
              Models", IEEE Communications Surveys and Tutorials, vol.
              15, no. 5, pp. 1778-1802, October 2013.

   [ns-2]     "The Network Simulator - ns-2",
              <http://www.isi.edu/nsnam/ns/>.

   [ns-3]     "The Network Simulator - ns-3", <https://www.nsnam.org/>.

   [Syncodecs]
              Mena, S., D'Aronco, S., and X. Zhu, "Syncodecs: Synthetic
              codecs for evaluation of RMCAT work",
              <https://github.com/cisco/syncodecs>.

Authors' Addresses

   Xiaoqing Zhu
   Cisco Systems
   12515 Research Blvd., Building 4
   Austin, TX  78759
   USA

   Email: xiaoqzhu@cisco.com

   Sergio Mena de la Cruz
   Cisco Systems
   EPFL, Quartier de l'Innovation, Batiment E
   Ecublens, Vaud  1015
   Switzerland

   Email: semena@cisco.com

   Zaheduzzaman Sarker
   Ericsson AB
   Lulea, SE  977 53
   Sweden

   Phone: +46 10 717 37 43
   Email: zaheduzzaman.sarker@ericsson.com