Learning Unsupervised Cross-domain Image-to-Image Translation Using a Shared Discriminator

9 Feb 2021 · Rajiv Kumar, Rishabh Dabral, G. Sivakumar

Unsupervised image-to-image translation transforms images from a source domain into images in a target domain without using paired source-target images. Promising results have been obtained for this problem in an adversarial setting using two independent GANs and attention mechanisms. We propose a new method that uses a single discriminator shared between the two GANs, which improves the overall efficacy. We assess qualitative and quantitative results on image transfiguration, a cross-domain translation task, in a setting where the target domain shares similar semantics with the source domain. Our results indicate that even without attention mechanisms, our method performs on par with attention-based methods and generates images of comparable quality.
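
Since the central idea is a single discriminator serving both translation directions, a minimal PyTorch sketch of such a setup is given below. The network architectures, loss, and update steps are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins; the paper's generator/discriminator designs are not
# specified in this abstract.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-level real/fake scores
        )

    def forward(self, x):
        return self.net(x)

G_ab, G_ba = Generator(), Generator()  # A->B and B->A translators
D = Discriminator()                    # one discriminator shared by both directions
adv_loss = nn.MSELoss()                # least-squares GAN loss, as an example

def discriminator_step(real_a, real_b):
    # The shared D scores real images from both domains as real and
    # translated images from both generators as fake.
    fake_b, fake_a = G_ab(real_a).detach(), G_ba(real_b).detach()
    real_score = torch.cat([D(real_a), D(real_b)])
    fake_score = torch.cat([D(fake_a), D(fake_b)])
    return adv_loss(real_score, torch.ones_like(real_score)) + \
           adv_loss(fake_score, torch.zeros_like(fake_score))

def generator_step(real_a, real_b):
    # Both generators try to fool the same shared discriminator.
    fake_b, fake_a = G_ab(real_a), G_ba(real_b)
    fake_score = torch.cat([D(fake_a), D(fake_b)])
    return adv_loss(fake_score, torch.ones_like(fake_score))

# Usage with dummy batches:
a, b = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
print(discriminator_step(a, b).item(), generator_step(a, b).item())
```

Sharing one discriminator across both domains roughly halves the discriminator parameters relative to two independent GANs; in practice one would also add cycle-consistency or reconstruction terms to the generator objective.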


Datasets


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Image-to-Image Translation | Apples and Oranges | Shared discriminator GAN | Kernel Inception Distance | 4.4 | #1 |
| Image-to-Image Translation | Zebra and Horses | Shared discriminator GAN | Kernel Inception Distance | 5.8 | #1 |
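
The table reports Kernel Inception Distance (KID), where lower is better. For reference, below is a minimal NumPy sketch of the unbiased polynomial-kernel MMD² estimator that KID is based on, applied to Inception-style feature vectors; it is not the evaluation code used to produce these numbers, and the feature dimensions shown are only placeholders.

```python
import numpy as np

def kid_unbiased_mmd2(x, y, degree=3, coef0=1.0):
    """Unbiased MMD^2 with the polynomial kernel k(a, b) = (a.b/d + coef0)^degree.

    x, y: (n, d) and (m, d) arrays of features (e.g., Inception activations)
    for real and translated images. KID is usually reported as the mean of
    this estimate over several random subsets of the features.
    """
    d = x.shape[1]
    gamma = 1.0 / d
    k_xx = (gamma * x @ x.T + coef0) ** degree
    k_yy = (gamma * y @ y.T + coef0) ** degree
    k_xy = (gamma * x @ y.T + coef0) ** degree
    n, m = len(x), len(y)
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (n * (n - 1))  # drop diagonal terms
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * k_xy.mean()

# Example with random features standing in for Inception activations.
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(100, 2048))
fake_feats = rng.normal(size=(100, 2048))
print(kid_unbiased_mmd2(real_feats, fake_feats))
```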

Methods


No methods listed for this paper.