Hallucinating Very Low-Resolution Unaligned and Noisy Face Images by Transformative Discriminative Autoencoders

CVPR 2017 · Xin Yu, Fatih Porikli

Most of the conventional face hallucination methods assume the input image is sufficiently large and aligned, and all require the input image to be noise-free. Their performance degrades drastically if the input image is tiny, unaligned, and contaminated by noise. In this paper, we introduce a novel transformative discriminative autoencoder to 8× super-resolve unaligned, noisy, and tiny (16×16) low-resolution face images. In contrast to encoder-decoder based autoencoders, our method uses decoder-encoder-decoder networks. We first employ a transformative discriminative decoder network to upsample and denoise simultaneously. Then we use a transformative encoder network to project the intermediate HR faces to aligned and noise-free LR faces. Finally, we use the second decoder to generate hallucinated HR images. Our extensive evaluations on a very large face dataset show that our method achieves superior hallucination results and outperforms the state-of-the-art by a large margin of 1.82 dB PSNR.
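The decoder-encoder-decoder layout described in the abstract can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's implementation: the class names (UpsampleDecoder, TransformativeEncoder, TDAE), layer counts, channel widths, and plain convolutional blocks are assumptions, and the spatial-transformer layers and discriminative (adversarial) training objective used in the paper are omitted. The intent it illustrates is that the first decoder upsamples and denoises, the encoder re-projects the intermediate HR face to an aligned, noise-free LR face, and the second decoder produces the final hallucinated HR face.

```python
# Minimal sketch of a decoder-encoder-decoder face hallucination pipeline.
# Architecture details here (channels, depths, plain conv blocks) are
# illustrative assumptions, not the configuration reported in the paper.
import torch
import torch.nn as nn


class UpsampleDecoder(nn.Module):
    """Upsamples an LR face by `scale` (e.g. 16x16 -> 128x128 for scale=8)."""
    def __init__(self, scale=8, channels=64):
        super().__init__()
        blocks, in_ch = [], 3
        for _ in range(int(scale).bit_length() - 1):  # log2(scale) 2x steps
            blocks += [
                nn.ConvTranspose2d(in_ch, channels, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
            ]
            in_ch = channels
        blocks.append(nn.Conv2d(channels, 3, 3, padding=1))
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return self.net(x)


class TransformativeEncoder(nn.Module):
    """Projects an intermediate HR face back down to an aligned LR face."""
    def __init__(self, scale=8, channels=64):
        super().__init__()
        blocks, in_ch = [], 3
        for _ in range(int(scale).bit_length() - 1):  # log2(scale) 2x strides
            blocks += [
                nn.Conv2d(in_ch, channels, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
            ]
            in_ch = channels
        blocks.append(nn.Conv2d(channels, 3, 3, padding=1))
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return self.net(x)


class TDAE(nn.Module):
    """Decoder -> encoder -> decoder, rather than a standard encoder-decoder."""
    def __init__(self, scale=8):
        super().__init__()
        self.decoder1 = UpsampleDecoder(scale)        # upsample + denoise
        self.encoder = TransformativeEncoder(scale)   # re-project to aligned LR
        self.decoder2 = UpsampleDecoder(scale)        # hallucinate final HR face

    def forward(self, lr_face):
        intermediate_hr = self.decoder1(lr_face)
        aligned_lr = self.encoder(intermediate_hr)
        return self.decoder2(aligned_lr)


if __name__ == "__main__":
    x = torch.randn(1, 3, 16, 16)      # tiny, noisy, unaligned LR input
    print(TDAE(scale=8)(x).shape)      # torch.Size([1, 3, 128, 128])
```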


Results from Other Papers


Task                    Dataset                    Model  Metric  Value  Rank
Image Super-Resolution  VggFace2 - 8x upscaling    TDAE   PSNR    20.19  #7
Image Super-Resolution  WebFace - 8x upscaling     TDAE   PSNR    20.24  #7

Methods


No methods listed for this paper.