The dataset is designed to identify algorithms that produce the most visually pleasing images and generalize well to a broad range of content. It consists of 30 clips: 15 2D-animated segments losslessly recorded from various video games and 15 camera-shot segments from high-bitrate YUV444 sources. The clips vary significantly in complexity in terms of spatial and temporal indexes. Multiple passes of bicubic downscaling mixed with sharpening are used to simulate complex real-world camera degradation, and the authors applied slight compression and YUV420 conversion to mimic a practical use case. The 1920×1080 sources were downscaled to 480×270 inputs.
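As a rough illustration of that degradation process, the sketch below applies repeated bicubic downscaling with sharpening, light compression, and YUV420 chroma subsampling to a 1080p frame. This is not the authors' exact pipeline; the intermediate resolution, sharpening strength, and JPEG quality used here are assumptions.

```python
# Hypothetical degradation sketch: bicubic downscaling + sharpening,
# slight compression, YUV420 conversion (parameters are assumptions).
import cv2
import numpy as np

def degrade(frame_bgr: np.ndarray) -> np.ndarray:
    """Downscale a 1920x1080 BGR frame to a degraded 480x270 input."""
    img = frame_bgr.astype(np.float32)

    # Two-step bicubic downscale (1080p -> 540p -> 270p) with unsharp masking
    # in between to approximate mixed downscaling/sharpening degradation.
    img = cv2.resize(img, (960, 540), interpolation=cv2.INTER_CUBIC)
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=1.0)
    img = cv2.addWeighted(img, 1.5, blur, -0.5, 0)
    img = cv2.resize(img, (480, 270), interpolation=cv2.INTER_CUBIC)
    img = np.clip(img, 0, 255).astype(np.uint8)

    # Slight compression (JPEG used here as a stand-in for a video codec).
    _, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 90])
    img = cv2.imdecode(buf, cv2.IMREAD_COLOR)

    # Round-trip through YUV420, discarding half of the chroma resolution.
    yuv420 = cv2.cvtColor(img, cv2.COLOR_BGR2YUV_I420)
    return cv2.cvtColor(yuv420, cv2.COLOR_YUV2BGR_I420)
```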
Paper | Code | Results | Date | Stars |
---|---|---|---|---|