Face Super-resolution Guided by Facial Component Heatmaps

State-of-the-art face super-resolution methods use deep convolutional neural networks to learn a mapping between low-resolution (LR) facial patterns and their corresponding high-resolution (HR) counterparts by exploiting local information. However, most of them do not account for face structure and suffer from degradations caused by large pose variations and misaligned faces. Our method incorporates structural information of faces explicitly into face super-resolution through a multi-task convolutional neural network (CNN). Our CNN has two branches: one for super-resolving face images and the other for predicting salient regions of a face, coined facial component heatmaps. These heatmaps guide the up-sampling stream to generate better super-resolved faces with high-quality details. Our method uses not only low-level information (i.e., intensity similarity) but also middle-level information (i.e., face structure) to further exploit spatial constraints of facial components in LR input images. Therefore, we are able to super-resolve very small unaligned face images (16$\times$16 pixels) with a large upscaling factor of 8$\times$ while preserving face structure. Extensive experiments demonstrate that our network achieves superior face hallucination results and outperforms the state-of-the-art.
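The two-branch idea above can be sketched in terms of tensor shapes: a 16$\times$16 LR face is upsampled 8$\times$ to 128$\times$128, a second branch predicts per-component heatmaps at the same resolution, and the heatmaps are concatenated channel-wise to guide the remaining upsampling layers. The sketch below is a minimal shape-level illustration in numpy, not the paper's implementation: the nearest-neighbor upsampling stands in for learned deconvolution layers, and the component centers and Gaussian heatmaps are hypothetical placeholders for the heatmap branch's predictions.

```python
import numpy as np

def upsample_8x(img):
    # Nearest-neighbor 8x upsampling; a stand-in for the learned
    # upsampling branch (shapes only, not the actual network).
    return img.repeat(8, axis=0).repeat(8, axis=1)

def component_heatmap(size, center, sigma=8.0):
    # A Gaussian blob marking one facial component (e.g. an eye);
    # in the paper these would be predicted by the heatmap branch.
    ys, xs = np.mgrid[0:size, 0:size]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

lr = np.random.rand(16, 16)        # 16x16 LR face (single channel)
sr = upsample_8x(lr)               # 128x128 intermediate estimate

# Hypothetical centers for two eyes, nose, and mouth.
centers = [(48, 44), (48, 84), (72, 64), (96, 64)]
heatmaps = np.stack([component_heatmap(128, c) for c in centers])

# Concatenate heatmaps with the image channel so later layers
# can condition on face structure: 1 + 4 = 5 channels.
guided = np.concatenate([sr[None], heatmaps], axis=0)
```

The key design point visible here is that the structural guidance enters as extra feature channels at the super-resolved spatial resolution, so the spatial constraints of each facial component align pixel-wise with the upsampled image features.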
