
Sparsely Grouped Multi-task Generative Adversarial Networks for Facial Attribute Manipulation

Recent Image-to-Image Translation algorithms have achieved significant progress in neural style transfer and image attribute manipulation tasks. However, existing approaches require exhaustively labelled training data, which is labor-intensive, difficult to scale up, and hard to migrate to new domains. To overcome this key limitation, we propose Sparsely Grouped Generative Adversarial Networks (SG-GAN), a novel approach that can translate images on sparsely grouped datasets where only a few training samples are labelled. Using a novel one-input multi-output architecture, SG-GAN is well suited to sparsely grouped learning and multi-task learning, and can translate images among multiple groups using only a single commonly trained model. To experimentally validate the advantages of the new model, we apply it to a series of attribute manipulation tasks for facial images. Experimental results demonstrate that SG-GAN generates image translation results of quality comparable to baseline methods on adequately labelled datasets and of superior quality on sparsely grouped datasets. The official implementation is publicly available at https://github.com/zhangqianhui/Sparsely-Grouped-GAN.
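To make the "one-input multi-output" idea concrete, below is a minimal PyTorch sketch of a generator with a shared encoder and one decoder head per attribute group. The layer sizes, class name `MultiOutputGenerator`, and overall structure are illustrative assumptions, not the paper's exact architecture; the authors' official implementation at the repository linked above should be treated as authoritative.

```python
import torch
import torch.nn as nn


class MultiOutputGenerator(nn.Module):
    """Illustrative one-input multi-output generator: a shared encoder feeds
    one decoder head per attribute group (hypothetical layer configuration)."""

    def __init__(self, num_groups=2, base_channels=64):
        super().__init__()
        # Shared encoder: downsample the input image into a common feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, base_channels, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base_channels * 2),
            nn.ReLU(inplace=True),
        )
        # One decoder head per target group, each producing a translated image.
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
                nn.InstanceNorm2d(base_channels),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(base_channels, 3, 4, stride=2, padding=1),
                nn.Tanh(),
            )
            for _ in range(num_groups)
        ])

    def forward(self, x):
        features = self.encoder(x)
        # A single input yields one translation per group (multi-output).
        return [decoder(features) for decoder in self.decoders]


if __name__ == "__main__":
    generator = MultiOutputGenerator(num_groups=2)
    image = torch.randn(1, 3, 128, 128)       # dummy input batch
    outputs = generator(image)
    print([o.shape for o in outputs])          # two 1x3x128x128 translations
```

Because the encoder is shared across all groups, the heads can be trained jointly in a multi-task fashion, which is one plausible way a single commonly trained model covers multiple attribute translations.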
