Neural Side-by-Side: Predicting Human Preferences for No-Reference Super-Resolution Evaluation
Super-resolution based on deep convolutional networks is currently attracting much attention from both academia and industry. However, the lack of proper evaluation measures makes it difficult to compare approaches, hampering progress in the field. Traditional measures, such as PSNR or SSIM, are known to correlate poorly with human perception of image quality. Therefore, it is common practice in existing works to also report the Mean Opinion Score (MOS), i.e., the results of human evaluation of super-resolved images. Unfortunately, MOS values from different papers are not directly comparable due to the varying number of raters, their subjectivity, etc. In this paper, we introduce Neural Side-by-Side, a new measure that allows super-resolution models to be compared automatically by effectively approximating human preferences. Specifically, we collect a large dataset of aligned image pairs produced by different super-resolution models. Each pair is then annotated by several raters, who are instructed to choose the more visually appealing image. Given the dataset and the labels, we train a CNN that takes a pair of images and, for each image, predicts the probability that it is preferred over its counterpart. We show that Neural Side-by-Side generalizes to both new models and new data. Hence, it can serve as a natural approximation of human preferences and can be used to compare models or tune hyperparameters without raters' assistance. We open-source the dataset and the pretrained model, which we expect to become a handy tool for researchers and practitioners.
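To make the described setup concrete, below is a minimal PyTorch sketch of a pairwise preference model of the kind the abstract outlines: a shared (Siamese) CNN assigns a scalar quality score to each image of a pair, and a softmax over the two scores gives the probability that each image is preferred. The backbone (ResNet-18), input size, and all names are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SideBySideModel(nn.Module):
    """Hypothetical pairwise preference scorer (not the paper's exact model)."""

    def __init__(self):
        super().__init__()
        # Shared backbone; ResNet-18 is an assumption for illustration only.
        backbone = models.resnet18(weights=None)
        # Replace the classification head with a scalar quality score.
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.scorer = backbone

    def forward(self, img_a, img_b):
        # Score both images with the same weights (Siamese setup).
        score_a = self.scorer(img_a)                    # (B, 1)
        score_b = self.scorer(img_b)                    # (B, 1)
        logits = torch.cat([score_a, score_b], dim=1)   # (B, 2)
        # Probability that each image is preferred over its counterpart.
        return torch.softmax(logits, dim=1)

# Usage: compare two super-resolved versions of the same scene.
model = SideBySideModel().eval()
a = torch.randn(1, 3, 224, 224)  # output of super-resolution model A
b = torch.randn(1, 3, 224, 224)  # output of super-resolution model B
with torch.no_grad():
    prefs = model(a, b)  # prefs[0, 0] = P(A preferred), prefs[0, 1] = P(B preferred)
```

Under this sketch, training would amount to minimizing cross-entropy between the predicted preference probabilities and the raters' majority votes over annotated pairs.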