Network Working Group                                          T. Daede
Internet-Draft                                               J. Moffitt
Intended status: Informational                                  Mozilla
Expires: April 21, 2016                                October 19, 2015

             Video Codec Testing and Quality Measurement
                      draft-daede-netvc-testing-02

Abstract

   This document describes guidelines and procedures for evaluating an
   internet video codec specified at the IETF.  This covers subjective
   and objective tests, test conditions, and materials used for the
   test.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 21, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Subjective Metrics
     2.1.  Still Image Pair Comparison
   3.  Objective Metrics
     3.1.  PSNR
     3.2.  PSNR-HVS-M
     3.3.  SSIM
     3.4.  Fast Multi-Scale SSIM
   4.  Comparing and Interpreting Results
     4.1.  Graphing
     4.2.  Bjontegaard
     4.3.  Ranges
   5.  Test Sequences
     5.1.  Sources
     5.2.  Test Sets
     5.3.  Operating Points
       5.3.1.  High Latency
       5.3.2.  Unconstrained Low Latency
       5.3.3.  Constrained Low Latency
   6.  Automation
   7.  Informative References
   Authors' Addresses
1.  Introduction

   When developing an internet video codec, changes and additions to
   the codec need to be decided based on their performance tradeoffs.
   In addition, measurements are needed to determine when the codec has
   met its performance goals.  This document specifies how the tests
   are to be carried out to ensure valid comparisons and good
   decisions.

2.  Subjective Metrics

   Subjective testing is the preferable method of testing video codecs.

   Because the IETF does not have testing resources of its own, it has
   to rely on the resources of its participants.  For this reason, even
   if the group agrees that a particular test is important, if no one
   volunteers to do it, or if volunteers do not complete it in a timely
   fashion, then that test should be discarded.  This ensures that only
   important tests are done; in particular, the tests that are
   important to participants.

2.1.  Still Image Pair Comparison

   A simple way to determine whether one compressed image is superior
   to another is to visually compare the two compressed images and have
   the viewer judge which one has higher quality.  This is mainly used
   for rapid comparisons during development.  For this test, the two
   compressed images should have similar compressed file sizes, with
   one image being no more than 5% larger than the other.  In addition,
   at least 5 different images should be compared.

3.  Objective Metrics

   Objective metrics are used in place of subjective metrics for easy
   and repeatable experiments.  Most objective metrics have been
   designed to correlate with subjective scores.

   The following descriptions give an overview of the operation of each
   of the metrics.  Because implementation details can sometimes vary,
   the exact implementation is specified in C in the Daala tools
   repository [DAALA-GIT].

   All of the metrics described in this document are to be applied to
   the luma plane only.  In addition, they are single frame metrics.
   When applied to video, the scores of each frame are averaged to
   create the final score.

   Codecs are allowed to internally use downsampling, but must include
   a normative upsampler, so that the metrics run at the same
   resolution as the source video.  In addition, some metrics, such as
   PSNR and FASTSSIM, behave poorly on downsampled images, so it must
   be noted in the test results if downsampling is in effect.

3.1.  PSNR

   PSNR is a traditional signal quality metric, measured in decibels.
   It is directly derived from the mean squared error (MSE) or its
   square root (RMSE).  The formula used is:

      20 * log10 ( MAX / RMSE )

   or, equivalently:

      10 * log10 ( MAX^2 / MSE )

   which is the method used in the dump_psnr.c reference
   implementation.
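   The following is a minimal sketch of this calculation in C, assuming
   8-bit input (MAX = 255).  The function and parameter names are
   illustrative; this is not the dump_psnr.c implementation referenced
   above.  It also applies the per-frame averaging convention described
   at the start of this section.

      #include <math.h>
      #include <stddef.h>

      /* Per-frame PSNR over an 8-bit luma plane.  Identical planes
       * (an MSE of zero) yield an infinite PSNR and are not handled
       * here. */
      static double luma_psnr_8bit(const unsigned char *ref,
                                   const unsigned char *rec,
                                   size_t w, size_t h) {
        double mse = 0;
        size_t i;
        for (i = 0; i < w*h; i++) {
          double d = (double)ref[i] - (double)rec[i];
          mse += d*d;
        }
        mse /= (double)(w*h);
        return 10*log10(255.0*255.0/mse);
      }

      /* The per-frame scores are averaged to create the final score. */
      double average_luma_psnr(const unsigned char *const *ref_frames,
                               const unsigned char *const *rec_frames,
                               size_t nframes, size_t w, size_t h) {
        double sum = 0;
        size_t i;
        for (i = 0; i < nframes; i++) {
          sum += luma_psnr_8bit(ref_frames[i], rec_frames[i], w, h);
        }
        return sum/(double)nframes;
      }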
3.2.  PSNR-HVS-M

   The PSNR-HVS metric performs a DCT transform of 8x8 blocks of the
   image, weights the coefficients, and then calculates the PSNR of
   those coefficients.  Several different sets of weights have been
   considered [PSNRHVS].  The weights used by the dump_psnrhvs.c tool
   in the Daala repository have been found to be the best match to real
   MOS scores.

3.3.  SSIM

   SSIM (Structural Similarity Image Metric) is a still image quality
   metric introduced in 2004 [SSIM].  It computes a score for each
   individual pixel, using a window of neighboring pixels.  These
   scores can then be averaged to produce a global score for the entire
   image.  The original paper produces scores ranging between 0 and 1.

   For the metric to appear more linear on BD-rate curves, the score is
   converted into a nonlinear decibel scale:

      -10 * log10 ( 1 - SSIM )

3.4.  Fast Multi-Scale SSIM

   Multi-Scale SSIM is SSIM extended to multiple window sizes [MSSSIM].
   The fast implementation does this by downscaling the image a number
   of times, computing SSIM at each scale, and then averaging the SSIM
   scores together [FASTSSIM].  The final score is converted to
   decibels in the same manner as SSIM.

4.  Comparing and Interpreting Results

4.1.  Graphing

   When displayed on a graph, bitrate is shown on the X axis, and the
   quality metric is on the Y axis.  For clarity, the X axis bitrate is
   always graphed in the log domain.  The Y axis metric should also be
   chosen so that the graph is approximately linear.  For metrics such
   as PSNR and PSNR-HVS, the metric result is already in the log domain
   and is left as-is.  SSIM and FASTSSIM, on the other hand, return a
   result between 0 and 1.  To create more linear graphs, this result
   is converted to a value in decibels:

      -10 * log10 ( 1 - SSIM )

4.2.  Bjontegaard

   The Bjontegaard rate difference, also known as BD-rate, allows the
   comparison of two different codecs based on a metric.  This is
   commonly done by fitting a curve to each set of data points on the
   plot of bitrate versus metric score, and then computing the
   difference in area between the curves.  A cubic polynomial fit is
   common, but will be overconstrained with more than four samples.
   For higher accuracy, at least 10 samples and a piecewise linear fit
   should be used.  In addition, if using a truncated BD-rate curve,
   there should be at least 4 samples within the range of interest.
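   The following is a minimal sketch of such a piecewise linear BD-rate
   calculation in C.  The interface, the sampling of the overlapping
   metric range at 100 points, and the handling of scores outside the
   measured range are illustrative assumptions and do not describe the
   behavior of any existing tool.

      #include <math.h>

      /* Linearly interpolate log10(bitrate) at metric score s, given
       * samples sorted by increasing score. */
      static double interp_log_rate(const double *rate,
                                    const double *score, int n,
                                    double s) {
        int i;
        for (i = 0; i < n - 1; i++) {
          if (s >= score[i] && s <= score[i+1]) {
            double t = (s - score[i])/(score[i+1] - score[i]);
            return (1 - t)*log10(rate[i]) + t*log10(rate[i+1]);
          }
        }
        /* Outside the measured range: clamp to the nearest endpoint. */
        return s < score[0] ? log10(rate[0]) : log10(rate[n-1]);
      }

      /* Average rate change (in percent) of codec B relative to codec
       * A over the overlapping range of metric scores, using the
       * trapezoidal rule on the difference in log rate.  Assumes the
       * two score ranges overlap. */
      double bd_rate_percent(const double *rate_a, const double *score_a,
                             int na, const double *rate_b,
                             const double *score_b, int nb) {
        double lo = fmax(score_a[0], score_b[0]);
        double hi = fmin(score_a[na-1], score_b[nb-1]);
        int steps = 100;
        double sum = 0;
        int i;
        for (i = 0; i <= steps; i++) {
          double s = lo + (hi - lo)*i/steps;
          double d = interp_log_rate(rate_b, score_b, nb, s)
                   - interp_log_rate(rate_a, score_a, na, s);
          /* Half weight on the interval endpoints. */
          sum += (i == 0 || i == steps) ? 0.5*d : d;
        }
        return (pow(10, sum/steps) - 1)*100;
      }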
4.3.  Ranges

   The curve is split into three regions, for low, medium, and high
   bitrate.  The ranges are defined as follows:

   o  Low bitrate: 0.005 - 0.02 bpp

   o  Medium bitrate: 0.02 - 0.06 bpp

   o  High bitrate: 0.06 - 0.2 bpp

   Bitrate can be calculated from bits per pixel (bpp) as follows:

      bitrate = bpp * width * height * framerate

   For example, 0.06 bpp at 1920x1080 and 30 frames per second
   corresponds to approximately 3.7 Mbit/s.

5.  Test Sequences

5.1.  Sources

   Lossless test clips are preferred for most tests, because the
   structure of compression artifacts in already-compressed clips may
   introduce extra noise in the test results.  However, a large amount
   of content on the internet needs to be recompressed at least once,
   so some sources of this nature are useful.  The encoder should run
   at the same bit depth as the original source.  In addition, metrics
   need to support operation at high bit depth.  If one or more codecs
   in a comparison do not support high bit depth, sources need to be
   converted once before entering the encoder.

   The JCT-VC standards organization includes a set of standard test
   clips for video codec testing, and parameters to run the clips with
   [L1100].  These clips are not publicly available, but are very
   useful for comparing to published results.

   Xiph publishes a variety of test clips collected from various
   sources.

   The Blender Open Movie projects provide a large test base of
   lossless cinematic test material.  The lossless sources are
   available, hosted on Xiph.

5.2.  Test Sets

   Sources are divided into several categories to test different
   scenarios the codec will be required to operate in.  For easier
   comparison, all videos in each set should have the same color
   subsampling, the same resolution, and the same number of frames.  In
   addition, all test videos must be publicly available for testing
   use, to allow any results to be reproduced.

   o  Still images are useful when comparing intra coding performance.
      Xiph.Org provides four sets of lossless, one megapixel images
      that have been converted into YUV 4:2:0 format:

      *  subset1 (50 images)

      *  subset2 (50 images)

      *  subset3 (1000 images)

      *  subset4 (1000 images)

   o  video-hd-2, a set that consists of the following 1920x1080 clips
      from [DERFVIDEO], cropped to 50 frames (and converted to 4:2:0 if
      necessary):

      *  aspen

      *  blue_sky

      *  crowd_run

      *  ducks_take_off

      *  factory

      *  life

      *  old_town_cross

      *  park_joy

      *  pedestrian_area

      *  red_kayak

      *  riverbed

      *  rush_hour

      *  station2

   o  A video conferencing test set, with 1280x720 content at 60 frames
      per second.  Unlike the other sets, the videos in this set are 10
      seconds long.

      *  TBD

   o  Game streaming content: 1920x1080, 60 frames per second, 4:2:0
      chroma subsampling.  1080p is chosen as it is currently the most
      common gaming monitor resolution [STEAM].  All clips should be
      two seconds long.

      *  TBD

   o  Screensharing content is low framerate, high resolution content
      typical of a computer desktop.

      *  screenshots - desktop screenshots of various resolutions, with
         4:2:0 subsampling

      *  Video sets TBD

5.3.  Operating Points

   Two operating modes are defined.  High latency is intended for on
   demand streaming, one-to-many live streaming, and stored video.  Low
   latency is intended for videoconferencing and remote access.

5.3.1.  High Latency

   The encoder should be run in the mode that provides the best quality
   per bitrate (VBR or constant quality mode).  Lookahead and/or
   two-pass are allowed, if supported.  One parameter is provided to
   adjust bitrate, but the units are arbitrary.  Example configurations
   follow:

   o  x264: -crf=x

   o  x265: -crf=x

   o  daala: -v=x

   o  libvpx: -codec=vp9 -end-usage=q -cq-level=x -lag-in-frames=25
      -auto-alt-ref=1

5.3.2.  Unconstrained Low Latency

   The encoder should be run in the mode that provides the best quality
   per bitrate (VBR or constant quality mode), but no frame delay,
   buffering, or lookahead is allowed.  One parameter is provided to
   adjust bitrate, but the units are arbitrary.  Example configurations
   follow:

   o  x264: -crf=x -tune zerolatency

   o  x265: -crf=x -tune zerolatency

   o  daala: -v=x

   o  libvpx: -codec=vp9 -end-usage=q -cq-level=x -lag-in-frames=0
      -auto-alt-ref=0

5.3.3.  Constrained Low Latency

   The encoder is given one parameter, which is the absolute bitrate.
   No frame delay, buffering, or lookahead is allowed.  The maximum
   deviation of the achieved bitrate from the supplied parameter is
   determined by a buffer model:

   o  The buffer starts out empty.

   o  After each frame is encoded, the buffer is filled by the number
      of bits spent for the frame.

   o  The buffer is then emptied by (bitrate * frame duration) bits.

   o  The buffer fill level is checked.  If it is over the limit, the
      test is considered a failure.

   The buffer size limit is defined as the bitrate target * 0.3
   seconds.
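   The following is a minimal sketch of this buffer model in C.  The
   function and parameter names are illustrative assumptions, and the
   steps above are applied literally (in particular, the fill level is
   not clamped at zero between frames).

      #include <stddef.h>

      /* frame_bits[i] is the number of bits spent on frame i,
       * target_bps is the supplied bitrate parameter in bits per
       * second, and fps is the frame rate.  Returns 1 if the encode
       * stays within the buffer limit, 0 if the test fails. */
      int check_buffer_model(const double *frame_bits, size_t nframes,
                             double target_bps, double fps) {
        double buffer = 0;              /* the buffer starts out empty */
        double limit = target_bps*0.3;  /* bitrate target * 0.3 seconds */
        size_t i;
        for (i = 0; i < nframes; i++) {
          buffer += frame_bits[i];      /* fill by the bits spent */
          buffer -= target_bps/fps;     /* empty by bitrate * duration */
          if (buffer > limit) return 0; /* over the limit: failure */
        }
        return 1;
      }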
6.  Automation

   Frequent objective comparisons are extremely beneficial while
   developing a new codec.  Several tools exist to automate the process
   of objective comparisons.  The Compare-Codecs tool allows BD-rate
   curves to be generated for a wide variety of codecs [COMPARECODECS].
   The Daala source repository contains a set of scripts that can be
   used to automate the various metrics used.  In addition, these
   scripts can be run automatically utilizing distributed computers for
   fast results [AWCY].

7.  Informative References

   [AWCY]     Xiph.Org, "Are We Compressed Yet?", 2015.

   [COMPARECODECS]
              Alvestrand, H., "Compare Codecs", 2015.

   [DAALA-GIT]
              Xiph.Org, "Daala Git Repository", 2015.

   [DERFVIDEO]
              Terriberry, T., "Xiph.org Video Test Media", n.d.

   [FASTSSIM] Chen, M. and A. Bovik, "Fast structural similarity index
              algorithm", 2010.

   [L1100]    Bossen, F., "Common test conditions and software
              reference configurations", JCTVC-L1100, 2013.

   [MSSSIM]   Wang, Z., Simoncelli, E., and A. Bovik, "Multi-Scale
              Structural Similarity for Image Quality Assessment",
              n.d.

   [PSNRHVS]  Egiazarian, K., Astola, J., Ponomarenko, N., Lukin, V.,
              Battisti, F., and M. Carli, "A New Full-Reference Quality
              Metrics Based on HVS", 2002.

   [SSIM]     Wang, Z., Bovik, A., Sheikh, H., and E. Simoncelli,
              "Image Quality Assessment: From Error Visibility to
              Structural Similarity", 2004.

   [STEAM]    Valve Corporation, "Steam Hardware & Software Survey:
              June 2015", June 2015.

Authors' Addresses

   Thomas Daede
   Mozilla

   Email: tdaede@mozilla.com

   Jack Moffitt
   Mozilla

   Email: jack@metajack.im