Network Working Group                                           T. Daede
Internet-Draft                                                    Mozilla
Intended status: Informational                                  A. Norkin
Expires: September 01, 2016                                       Netflix
                                                        February 29, 2016


              Video Codec Testing and Quality Measurement
                       draft-ietf-netvc-testing-01
Abstract

   This document describes guidelines and procedures for evaluating a
   video codec specified at the IETF.  This covers subjective and
   objective tests, test conditions, and materials used for the test.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 01, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.
Table of Contents

   1.  Introduction
   2.  Subjective quality tests
     2.1.  Still Image Pair Comparison
     2.2.  Subjective viewing test
     2.3.  Expert viewing
   3.  Objective Metrics
     3.1.  Overall PSNR
     3.2.  Frame-averaged PSNR
     3.3.  PSNR-HVS-M
     3.4.  SSIM
     3.5.  Multi-Scale SSIM
     3.6.  Fast Multi-Scale SSIM
     3.7.  CIEDE2000
     3.8.  VMAF
   4.  Comparing and Interpreting Results
     4.1.  Graphing
     4.2.  Bjontegaard
     4.3.  Ranges
   5.  Test Sequences
     5.1.  Sources
     5.2.  Test Sets
     5.3.  Operating Points
       5.3.1.  Common settings
       5.3.2.  High Latency
       5.3.3.  Unconstrained Low Latency
   6.  Automation
     6.1.  Regression tests
     6.2.  Objective performance tests
     6.3.  Periodic tests
   7.  Informative References
   Authors' Addresses
1.  Introduction

   When developing a video codec, changes and additions to the codec
   need to be decided based on their performance tradeoffs.  In
   addition, measurements are needed to determine when the codec has met
   its performance goals.  This document specifies how the tests are to
   be carried out to ensure valid comparisons and good decisions.
2.  Subjective quality tests

   Subjective testing is the preferred method of testing video codecs.

   Because the IETF does not have testing resources of its own, it has
   to rely on the resources of its participants.  For this reason, even
   if the group agrees that a particular test is important, if no one
   volunteers to do it, or if volunteers do not complete it in a timely
   fashion, then that test should be discarded.  This ensures that only
   important tests are done; in particular, the tests that are important
   to participants.
2.1.  Still Image Pair Comparison

   A simple way to determine the superiority of one compressed image
   over another is to visually compare the two compressed images and
   have the viewer judge which one has higher quality.  This is mainly
   used for rapid comparisons during development.  For this test, the
   two compressed images should have similar compressed file sizes, with
   one image being no more than 5% larger than the other.  In addition,
   at least 5 different images should be compared.
2.2.  Subjective viewing test

   A subjective viewing test is the preferred method of evaluating
   quality.  The subjective test should be performed either by showing
   the video sequences consecutively on one screen or by showing them on
   two screens located side-by-side.  The testing procedure should
   normally follow the rules described in [BT500] and be performed with
   non-expert test subjects.  The result of the test could be (depending
   on the test procedure) mean opinion scores (MOS) or differential mean
   opinion scores (DMOS).  Normally, confidence intervals are also
   calculated to judge whether the difference between two encodings is
   statistically significant.
2.3.  Expert viewing

   An expert viewing test can be performed when an answer to a
   particular question is sought.  An example of such a test is one in
   which video coding experts evaluate a particular problem, for example
   comparing the results of two de-ringing filters.  Depending on what
   information is sought, the appropriate test procedure can be chosen.
3.  Objective Metrics

   Objective metrics are used in place of subjective metrics for easy
   and repeatable experiments.  Most objective metrics have been
   designed to correlate with subjective scores.

   The following descriptions give an overview of the operation of each
   of the metrics.  Because implementation details can sometimes vary,
   the exact implementation is specified in C in the Daala tools
   repository [DAALA-GIT].

   All of the metrics described in this document are to be applied to
   the luma plane only.  In addition, they are single-frame metrics.
   When applied to video, the scores of the individual frames are
   averaged to create the final score.

   Codecs are allowed to use downsampling internally, but must include a
   normative upsampler, so that the metrics run at the same resolution
   as the source video.  In addition, some metrics, such as PSNR and
   FASTSSIM, behave poorly on downsampled images, so it must be noted in
   the test results if downsampling is in effect.
3.1.  Overall PSNR

   PSNR is a traditional signal quality metric, measured in decibels.
   It is directly derived from the mean squared error (MSE), or its
   square root (RMSE).  The formula used is:

      20 * log10 ( MAX / RMSE )

   or, equivalently:

      10 * log10 ( MAX^2 / MSE )

   where the error is computed over all the pixels in the video, which
   is the method used in the dump_psnr.c reference implementation.

   This metric may be applied to both the luma and chroma planes, with
   all planes reported separately.
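
   As an informal illustration only (the normative implementation is the
   C code in [DAALA-GIT]), the overall PSNR computation can be sketched
   in Python/NumPy as follows, assuming 8-bit planes so that MAX is 255;
   the function name and argument layout are illustrative, not taken
   from any existing tool:

   import numpy as np

   def overall_psnr(ref_frames, enc_frames, max_value=255.0):
       # Accumulate squared error over every pixel of the whole video,
       # then convert the single MSE value to decibels.
       total_sq_err = 0.0
       total_pixels = 0
       for ref, enc in zip(ref_frames, enc_frames):
           diff = ref.astype(np.float64) - enc.astype(np.float64)
           total_sq_err += np.sum(diff * diff)
           total_pixels += diff.size
       mse = total_sq_err / total_pixels
       return 10.0 * np.log10(max_value ** 2 / mse)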
3.2.  Frame-averaged PSNR

   PSNR can also be calculated per frame, and the values then averaged
   together.  This is reported in the same way as overall PSNR.
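
   Continuing the informal sketch above (again assuming 8-bit input and
   illustrative names), the per-frame variant converts each frame's MSE
   to decibels before averaging:

   def frame_averaged_psnr(ref_frames, enc_frames, max_value=255.0):
       # Average of the per-frame PSNR values, in dB.
       scores = []
       for ref, enc in zip(ref_frames, enc_frames):
           diff = ref.astype(np.float64) - enc.astype(np.float64)
           mse = np.mean(diff * diff)
           scores.append(10.0 * np.log10(max_value ** 2 / mse))
       return sum(scores) / len(scores)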
3.3.  PSNR-HVS-M

   The PSNR-HVS metric performs a DCT transform of 8x8 blocks of the
   image, weights the coefficients, and then calculates the PSNR of
   those coefficients.  Several different sets of weights have been
   considered [PSNRHVS].  The weights used by the dump_psnrhvs.c tool in
   the Daala repository have been found to be the best match to real MOS
   scores.
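
   The following Python sketch shows only the general structure of such
   a metric (blockwise DCT, coefficient weighting, PSNR over the
   weighted coefficients).  The uniform weight table and the omission of
   the contrast-masking step of PSNR-HVS-M are simplifications made for
   illustration; the weights and masking actually used are those in
   dump_psnrhvs.c [DAALA-GIT]:

   import numpy as np
   from scipy.fftpack import dct

   WEIGHTS = np.ones((8, 8))  # placeholder; the real CSF weights differ

   def dct2(block):
       tmp = dct(block, axis=0, norm='ortho')
       return dct(tmp, axis=1, norm='ortho')

   def psnr_hvs_sketch(ref, enc, max_value=255.0):
       # Weighted PSNR over 8x8 DCT coefficient differences, one frame.
       h, w = ref.shape
       sq_err, count = 0.0, 0
       for y in range(0, h - 7, 8):
           for x in range(0, w - 7, 8):
               d = dct2(ref[y:y+8, x:x+8].astype(np.float64)) \
                   - dct2(enc[y:y+8, x:x+8].astype(np.float64))
               sq_err += np.sum((d * WEIGHTS) ** 2)
               count += 64
       return 10.0 * np.log10(max_value ** 2 / (sq_err / count))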
3.4.  SSIM

   SSIM (Structural Similarity Image Metric) is a still image quality
   metric introduced in 2004 [SSIM].  It computes a score for each
   individual pixel, using a window of neighboring pixels.  These scores
   can then be averaged to produce a global score for the entire image.
   The original paper produces scores ranging between 0 and 1.

   For the metric to appear more linear on BD-rate curves, the score is
   converted into a nonlinear decibel scale:

      -10 * log10 (1 - SSIM)
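
   As a rough illustration of the conversion, the sketch below uses
   scikit-image's structural_similarity function as a stand-in SSIM
   implementation; this is an assumption made for the example, and its
   window and constant choices may differ from the normative dump_ssim.c
   in [DAALA-GIT]:

   import numpy as np
   from skimage.metrics import structural_similarity

   def ssim_db(ref, enc, max_value=255.0):
       # Mean SSIM of one frame, mapped to the decibel scale above.
       score = structural_similarity(ref, enc, data_range=max_value)
       return -10.0 * np.log10(1.0 - score)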
3.5.  Multi-Scale SSIM

   Multi-Scale SSIM is SSIM extended to multiple window sizes [MSSSIM].
3.6.  Fast Multi-Scale SSIM

   Fast MS-SSIM is a modified implementation of MS-SSIM that operates on
   a limited number of scales and with modified weights [FASTSSIM].  The
   final score is converted to decibels in the same manner as SSIM.
3.7.  CIEDE2000

   CIEDE2000 is a metric based on CIEDE color distances [CIEDE2000].  It
   generates a single score taking into account all three color planes.
   It does not take into consideration any structural similarity or
   other psychovisual effects.
3.8.  VMAF

   Video Multi-method Assessment Fusion (VMAF) is a full-reference
   perceptual video quality metric that aims to approximate human
   perception of video quality [VMAF].  This metric is focused on
   quality degradation due to compression and rescaling.  VMAF estimates
   the perceived quality score by computing scores from multiple quality
   assessment algorithms and fusing them using a support vector machine
   (SVM).  Currently, three image fidelity metrics and one temporal
   signal have been chosen as features for the SVM, namely Anti-noise
   SNR (ANSNR), Detail Loss Measure (DLM), Visual Information Fidelity
   (VIF), and the mean co-located pixel difference of a frame with
   respect to the previous frame.
4.  Comparing and Interpreting Results

4.1.  Graphing

   When displayed on a graph, bitrate is shown on the X axis, and the
   quality metric is shown on the Y axis.  For publication, the X axis
   should be linear.  The Y axis metric should be plotted in decibels.
   If the quality metric does not natively report quality in decibels,
   it should be converted as described in the previous section.
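
   A minimal plotting sketch, assuming matplotlib and purely
   hypothetical (bitrate, quality) measurements, could look like the
   following; the codec names and numbers are placeholders, not real
   results:

   import matplotlib.pyplot as plt

   # Hypothetical measurements: (bitrate in kbps, quality in dB).
   curves = {
       "codec A": [(200, 38.1), (400, 40.3), (800, 42.6), (1600, 44.9)],
       "codec B": [(200, 38.9), (400, 41.0), (800, 43.2), (1600, 45.4)],
   }

   for name, points in curves.items():
       rates, scores = zip(*sorted(points))
       plt.plot(rates, scores, marker="o", label=name)

   plt.xlabel("Bitrate (kbps)")       # linear X axis for publication
   plt.ylabel("Quality metric (dB)")  # metric converted to decibels
   plt.legend()
   plt.savefig("rd_curves.png")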
4.2.  Bjontegaard

   The Bjontegaard rate difference, also known as BD-rate, allows the
   comparison of two different codecs based on a metric.  This is
   commonly done by fitting a curve to each set of data points on the
   plot of bitrate versus metric score, and then computing the
   difference in area between the curves.  A cubic polynomial fit is
   common, but will be overconstrained with more than four samples.  For
   higher accuracy, at least 10 samples and a cubic spline fit should be
   used.  In addition, if using a truncated BD-rate curve, there should
   be at least 4 samples within the range of interest.
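
   The sketch below illustrates one way to compute such a rate
   difference in Python/SciPy, fitting a monotone spline of log-rate
   against metric score and averaging the log-rate difference over the
   overlapping quality range.  The interpolation choice and the
   integration grid are assumptions of this example, not a normative
   definition:

   import numpy as np
   from scipy.interpolate import PchipInterpolator

   def bd_rate(rates_a, scores_a, rates_b, scores_b):
       # Inputs are sorted by increasing metric score.  Returns the
       # average bitrate change (%) of codec B relative to codec A.
       fit_a = PchipInterpolator(scores_a, np.log(rates_a))
       fit_b = PchipInterpolator(scores_b, np.log(rates_b))
       lo = max(min(scores_a), min(scores_b))
       hi = min(max(scores_a), max(scores_b))
       grid = np.linspace(lo, hi, 100)
       avg_log_diff = np.mean(fit_b(grid) - fit_a(grid))
       return (np.exp(avg_log_diff) - 1.0) * 100.0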
4.3.  Ranges

   The curve is split into three regions, for low, medium, and high
   bitrate.  The ranges are defined as follows:

   o  Low bitrate: 0.005 - 0.02 bpp

   o  Medium bitrate: 0.02 - 0.06 bpp

   o  High bitrate: 0.06 - 0.2 bpp

   Bitrate can be calculated from bits per pixel (bpp) as follows:

      bitrate = bpp * width * height * framerate
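
   For example, at 1920x1080 and 30 frames per second, the top of the
   medium range (0.06 bpp) corresponds to 0.06 * 1920 * 1080 * 30, or
   roughly 3.7 Mbit/s.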
5.  Test Sequences

5.1.  Sources

   Lossless test clips are preferred for most tests, because the
   structure of compression artifacts in already-compressed clips may
   introduce extra noise in the test results.  However, a large amount
   of content on the internet needs to be recompressed at least once, so
   some sources of this nature are useful.  The encoder should run at
   the same bit depth as the original source.  In addition, metrics need
   to support operation at high bit depth.  If one or more codecs in a
   comparison do not support high bit depth, sources need to be
   converted once before entering the encoder.
5.2.  Test Sets

   Sources are divided into several categories to test different
   scenarios the codec will be required to operate in.  For easier
   comparison, all videos in each set should have the same color
   subsampling, the same resolution, and the same number of frames.  In
   addition, all test videos must be publicly available for testing use,
   to allow for reproducibility of results.  All current test sets are
   available for download [TESTSEQUENCES].

   o  Still images are useful when comparing intra coding performance.
      Xiph.org provides four sets of lossless, one-megapixel images that
      have been converted into YUV 4:2:0 format:

      *  subset1 (50 images)

      *  subset2 (50 images)

      *  subset3 (1000 images)

      *  subset4 (1000 images)

   o  video-hd-3, a set that consists of 1920x1080 clips from
      [DERFVIDEO] (1500 frames total)

   o  vc-360p-1, a low quality video conferencing set (2700 frames
      total)

   o  vc-720p-1, a high quality video conferencing set (2750 frames
      total)

   o  netflix-4k-1, a cinematic 4K video test set (2280 frames total)

   o  netflix-2k-1, a 2K scaled version of netflix-4k-1 (2280 frames
      total)

   o  twitch-1, a game sequence set (2280 frames total)
5.3.  Operating Points

   Two operating modes are defined.  High latency is intended for on-
   demand streaming, one-to-many live streaming, and stored video.  Low
   latency is intended for videoconferencing and remote access.
5.3.1.  Common settings

   Encoders should be configured to their best settings when being
   compared against each other:

   o  vp10: --codec=vp10 --ivf --frame-parallel=0 --tile-columns=0
      --cpu-used=0 --threads=1
5.3.2.  High Latency

   The encoder should be run in the best quality mode available, using
   the mode that will provide the best quality per bitrate (VBR or
   constant quality mode).  Lookahead and/or two-pass encoding are
   allowed, if supported.  One parameter is provided to adjust bitrate,
   but the units are arbitrary.  Example configurations follow:

   o  x264: --crf=x

   o  x265: --crf=x

   o  daala: -v=x -b 2

   o  vp10: --end-usage=q --cq-level=x --lag-in-frames=25
      --auto-alt-ref=2
5.3.3.  Unconstrained Low Latency

   The encoder should be run in the best quality mode available, using
   the mode that will provide the best quality per bitrate (VBR or
   constant quality mode), but no frame delay, buffering, or lookahead
   is allowed.  One parameter is provided to adjust bitrate, but the
   units are arbitrary.  Example configurations follow:

   o  x264: --crf=x --tune zerolatency

   o  x265: --crf=x --tune zerolatency

   o  daala: -v=x

   o  vp10: --end-usage=q --cq-level=x --lag-in-frames=0
6.  Automation

   Frequent objective comparisons are extremely beneficial while
   developing a new codec.  Several tools exist to automate the process
   of objective comparisons.  The Compare-Codecs tool allows BD-rate
   curves to be generated for a wide variety of codecs [COMPARECODECS].
   The Daala source repository contains a set of scripts that can be
   used to automate the computation of the various metrics.  In
   addition, these scripts can be run automatically on distributed
   computers for fast results, using the AreWeCompressedYet tool [AWCY].
   Because of computational constraints, several levels of testing are
   specified.
6.1.  Regression tests

   Regression tests are run on a small number of short sequences.  The
   regression tests should include a variety of test conditions.  The
   purpose of regression tests is to ensure that bug fixes (and similar
   patches) do not negatively affect performance.
6.2.  Objective performance tests

   Changes that are expected to affect the quality of the encoding or
   the bitstream should be evaluated with an objective performance test.
   The performance tests should be run on a wider set of sequences.  If
   the option for the objective performance test is chosen, wide-range
   and full-length simulations are run on the site and the results
   (including all the objective metrics) are generated.
6.3.  Periodic tests

   Periodic tests are run on a wide range of bitrates in order to gauge
   progress over time, as well as detect potential regressions missed by
   other tests.
7.  Informative References

   [AWCY]     Xiph.Org, "Are We Compressed Yet?", 2015.

   [BT500]    ITU-R, "Recommendation ITU-R BT.500-13", 2012.

   [CIEDE2000]
              Yang, Y., Ming, J., and N. Yu, "Color Image Quality
              Assessment Based on CIEDE2000", 2012.

   [COMPARECODECS]
              Alvestrand, H., "Compare Codecs", 2015.

   [DAALA-GIT]
              Xiph.Org, "Daala Git Repository", 2015.

   [DERFVIDEO]
              Terriberry, T., "Xiph.org Video Test Media", n.d.

   [FASTSSIM]
              Chen, M. and A. Bovik, "Fast structural similarity index
              algorithm", 2010.

   [L1100]    Bossen, F., "Common test conditions and software reference
              configurations", JCTVC L1100, 2013.

   [MSSSIM]   Wang, Z., Simoncelli, E., and A. Bovik, "Multi-Scale
              Structural Similarity for Image Quality Assessment", n.d.

   [PSNRHVS]  Egiazarian, K., Astola, J., Ponomarenko, N., Lukin, V.,
              Battisti, F., and M. Carli, "A New Full-Reference Quality
              Metrics Based on HVS", 2002.

   [SSIM]     Wang, Z., Bovik, A., Sheikh, H., and E. Simoncelli, "Image
              Quality Assessment: From Error Visibility to Structural
              Similarity", 2004.

   [STEAM]    Valve Corporation, "Steam Hardware & Software Survey: June
              2015", June 2015.

   [TESTSEQUENCES]
              Daede, T., "Test Sets", n.d.

   [VMAF]     Aaron, A., Li, Z., Manohara, M., Lin, J., Wu, E., and C.
              Kuo, "Challenges in cloud based ingest and encoding for
              high quality streaming media", 2015.
Authors' Addresses

   Thomas Daede
   Mozilla

   Email: tdaede@mozilla.com


   Andrey Norkin
   Netflix

   Email: anorkin@netflix.com