Style-Restricted GAN: Multi-Modal Translation with Style Restriction Using Generative Adversarial Networks

17 May 2021 · Sho Inoue, Tad Gonsalves

Unpaired image-to-image translation using Generative Adversarial Networks (GANs) has been successful in converting images among multiple domains. Moreover, recent studies have shown a way to diversify the outputs of the generator. However, since there are no restrictions on how the generator diversifies the results, it is likely to translate some unexpected features. In this paper, we propose Style-Restricted GAN (SRGAN) to demonstrate the importance of controlling the encoded features used in the style-diversifying process. More specifically, instead of the KL divergence loss, we adopt three new losses to restrict the distribution of the encoded features: a batch KL divergence loss, a correlation loss, and a histogram imitation loss. Furthermore, the encoder is pre-trained on classification tasks before being used in the translation process. The study reports quantitative as well as qualitative results evaluated with Precision, Recall, Density, and Coverage. The three proposed losses lead to a higher level of diversity than the conventional KL loss. In particular, SRGAN is found to translate with higher diversity without changing class-unrelated features on the CelebA face dataset. In conclusion, two experiments demonstrate the importance of keeping the encoded features well-regulated. Our implementation is available at https://github.com/shinshoji01/Style-Restricted_GAN.
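The exact loss formulations are defined in the paper; as a rough, illustrative sketch only, a batch-level KL divergence penalty and a decorrelation (correlation) penalty on the encoded style features could be implemented along the following lines in PyTorch. The function names, the epsilon constants, and the hypothetical encoder in the usage note are assumptions rather than the authors' implementation, and the histogram imitation loss is omitted here.

    import torch

    def batch_kl_loss(z):
        # Fit a diagonal Gaussian to the batch of encoded features and
        # compute its KL divergence to a standard normal, per dimension.
        mu = z.mean(dim=0)
        var = z.var(dim=0, unbiased=False) + 1e-8
        kl = 0.5 * (var + mu.pow(2) - 1.0 - var.log())
        return kl.mean()

    def correlation_loss(z):
        # Penalize off-diagonal entries of the correlation matrix of the
        # encoded features so that the style dimensions stay decorrelated.
        zc = z - z.mean(dim=0, keepdim=True)
        cov = zc.t() @ zc / (z.size(0) - 1)
        std = cov.diag().clamp_min(1e-8).sqrt()
        corr = cov / (std.unsqueeze(0) * std.unsqueeze(1))
        off_diag = corr - torch.diag(corr.diag())
        return off_diag.pow(2).mean()

    # Usage sketch (encoder is a hypothetical style encoder returning a
    # (batch_size, style_dim) tensor of encoded features):
    # z = encoder(images)
    # style_restriction_loss = batch_kl_loss(z) + correlation_loss(z)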
