Unsupervised Image-To-Image Translation

69 papers with code • 2 benchmarks • 2 datasets

Unsupervised image-to-image translation is the task of performing image-to-image translation without ground-truth image pairs linking the two domains.

(Image credit: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)
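The credited paper is built around cycle consistency, the standard device that makes unpaired training possible: an image translated to the other domain and back should reconstruct the original. Below is a minimal sketch of that loss, assuming placeholder generators G_ab and G_ba rather than any particular paper's architecture.

```python
# Minimal sketch of the cycle-consistency idea behind unpaired translation
# (as popularized by CycleGAN). G_ab and G_ba are placeholder generators;
# any image-to-image network with matching input/output shapes would do.
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_ab, G_ba, real_a, real_b, lam=10.0):
    """Reconstruct each image after a round trip through both generators.

    Without paired ground truth, the only supervision is that
    A -> B -> A (and B -> A -> B) should return the original image.
    """
    rec_a = G_ba(G_ab(real_a))  # A -> B -> A
    rec_b = G_ab(G_ba(real_b))  # B -> A -> B
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))
```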


Latest papers with no code

Disentangled Unsupervised Image Translation via Restricted Information Flow

no code yet • 26 Nov 2021

Unsupervised image-to-image translation methods aim to map images from one domain into plausible examples from another domain while preserving structures shared across the two domains.

Federated Contrastive Learning for Privacy-Preserving Unpaired Image-to-Image Translation

no code yet • 29 Sep 2021

In addition, by combining it with a pre-trained VGG network, the learnable part of the discriminator can be further reduced without impairing image quality, resulting in a two-order-of-magnitude reduction in communication cost.
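The snippet above says a frozen, pre-trained VGG backbone lets the trainable part of the discriminator shrink, so less needs to be communicated between clients. A hedged sketch of that idea, assuming a patch-style head whose size and VGG layer choice are illustrative rather than taken from the paper:

```python
# Illustrative sketch (not the paper's exact architecture): a discriminator
# whose backbone is a frozen, pre-trained VGG, so only the small trainable
# head would need to be exchanged in a federated setting.
import torch.nn as nn
from torchvision.models import vgg16

class VGGFeatureDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = vgg16(pretrained=True).features  # frozen, shared by all clients
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Only this head is trainable/communicated; its size is an assumption.
        self.head = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),  # patch-level real/fake scores
        )

    def forward(self, x):
        return self.head(self.backbone(x))
```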

Mutual-GAN: Towards Unsupervised Cross-Weather Adaptation with Mutual Information Constraint

no code yet • 30 Jun 2021

In practical applications, outdoor weather and illumination are changeable, e.g., cloudy and nighttime conditions, which results in a significant drop in the semantic segmentation accuracy of a CNN trained only on daytime data.

Federated CycleGAN for Privacy-Preserving Image-to-Image Translation

no code yet • 17 Jun 2021

Although recent federated learning (FL) allows a neural network to be trained without data exchange, the basic assumption of FL is that all clients have training data from a similar domain. This differs from our image-to-image translation scenario, in which each client holds images from its own unique domain and the goal is to learn translation between domains without accessing target-domain data.
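The constraint described above is that each client keeps its single-domain images local while a cross-domain translator is learned. Below is a hedged structural sketch of one FedAvg-style communication round under that constraint; `train_locally` and the averaging scheme are placeholders, not the paper's actual protocol.

```python
# Hedged structural sketch of a FedAvg-style round in which each client
# holds images from only one domain. `train_locally` and the models are
# hypothetical placeholders, not the paper's actual protocol.
import copy
import torch

def fedavg(state_dicts):
    """Average model parameters from all clients (plain FedAvg)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
    return avg

def federated_round(global_model, clients):
    """One communication round: local training, then parameter averaging.

    Each client trains on its own domain's images only; raw images never
    leave the client, only model weights are exchanged.
    """
    local_states = []
    for client in clients:
        local_model = copy.deepcopy(global_model)
        client.train_locally(local_model)          # hypothetical local update
        local_states.append(local_model.state_dict())
    global_model.load_state_dict(fedavg(local_states))
    return global_model
```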

Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation

no code yet • CVPR 2021

In this paper, we propose a new training protocol based on three specific losses that help a translation network learn a smooth and disentangled latent style space in which: 1) both intra- and inter-domain interpolations correspond to gradual changes in the generated images, and 2) the content of the source image is better preserved during translation.
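As a concrete illustration of point 1), here is a minimal inference-time sketch of style interpolation, assuming a hypothetical style-based generator G(content, style) with content and style encoders; the paper's three training losses are not reproduced here.

```python
# Minimal inference-time sketch of latent style interpolation, assuming a
# hypothetical style-based generator G(content_code, style_code) and
# content/style encoders. The paper's training losses are not shown.
import torch

def interpolate_styles(G, E_content, E_style, img, style_src, style_tgt, steps=8):
    """Generate images whose style moves gradually from style_src's style
    to style_tgt's style (intra- or inter-domain)."""
    content = E_content(img)
    s0, s1 = E_style(style_src), E_style(style_tgt)
    outputs = []
    for t in torch.linspace(0.0, 1.0, steps):
        style = (1 - t) * s0 + t * s1   # linear interpolation in style space
        outputs.append(G(content, style))
    return outputs
```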

Few-Shot Unsupervised Image-to-Image Translation on complex scenes

no code yet • 7 Jun 2021

Moreover, we propose a way to adapt the FUNIT framework to leverage object detection, as used in other methods.

Contrastive Learning for Unsupervised Image-to-Image Translation

no code yet • 7 May 2021

Image-to-image translation aims to learn a mapping between different groups of visually distinguishable images.
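The title pairs contrastive learning with unsupervised translation; the snippet does not spell out the paper's loss, so below is a generic InfoNCE sketch of the kind commonly used to pull matched features of the input and translated images together.

```python
# Generic InfoNCE (contrastive) loss sketch, as commonly used to tie
# corresponding features of input and translated images together in
# contrastive unsupervised translation methods. Illustrative only; not
# this paper's exact formulation.
import torch
import torch.nn.functional as F

def info_nce(query, positive, negatives, temperature=0.07):
    """query/positive: (N, D) matched feature pairs; negatives: (N, K, D)."""
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    l_pos = (query * positive).sum(dim=-1, keepdim=True)      # (N, 1)
    l_neg = torch.einsum("nd,nkd->nk", query, negatives)      # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)                    # positives at index 0
```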

Memory-guided Unsupervised Image-to-image Translation

no code yet • CVPR 2021

We present a novel unsupervised framework for instance-level image-to-image translation.

Learning Cycle-Consistent Cooperative Networks via Alternating MCMC Teaching for Unsupervised Cross-Domain Translation

no code yet • 7 Mar 2021

This paper studies the unsupervised cross-domain translation problem by proposing a generative framework, in which the probability distribution of each domain is represented by a generative cooperative network that consists of an energy-based model and a latent variable model.
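A hedged sketch of the core of such a cooperative pair: the latent variable model proposes samples, short-run Langevin dynamics on the energy-based model revises them, and the generator is taught to match the revised samples (MCMC teaching). The networks and step sizes below are placeholders, not the paper's exact algorithm.

```python
# Hedged sketch of the Langevin "revision" step at the heart of cooperative
# (EBM + generator) training. Networks and hyperparameters are placeholders.
import torch

def langevin_revise(energy_net, x_init, steps=20, step_size=0.01):
    """Refine generator proposals by noisy gradient descent on the energy."""
    x = x_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        energy = energy_net(x).sum()
        grad, = torch.autograd.grad(energy, x)
        noise = torch.randn_like(x)
        x = (x - 0.5 * step_size ** 2 * grad + step_size * noise)
        x = x.detach().requires_grad_(True)
    return x.detach()

def mcmc_teaching_step(generator, energy_net, z, gen_opt):
    """Teach the generator to land where the MCMC revision ended up."""
    x_proposed = generator(z)                        # generator's initial proposal
    x_revised = langevin_revise(energy_net, x_proposed)
    loss = ((generator(z) - x_revised) ** 2).mean()
    gen_opt.zero_grad()
    loss.backward()
    gen_opt.step()
    return loss.item()
```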

Six-channel Image Representation for Cross-domain Object Detection

no code yet • 3 Jan 2021

If we train the detector using data from one domain, it cannot perform well on data from another domain due to domain shift, which is one of the major challenges for most object detection models.
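The title refers to a six-channel image representation; the snippet does not describe how it is built, so the sketch below only illustrates one plausible reading, namely that an original image and a translated counterpart are stacked channel-wise before detection.

```python
# Hedged sketch based on the title alone: form a six-channel input by stacking
# an original 3-channel image with its translated (e.g., target-style) version.
# How the paper actually builds or consumes these channels is an assumption.
import torch
import torch.nn as nn

def six_channel_input(original, translated):
    """original, translated: (N, 3, H, W) tensors -> (N, 6, H, W)."""
    return torch.cat([original, translated], dim=1)

# A detector backbone would then accept 6 input channels, e.g.:
first_conv = nn.Conv2d(in_channels=6, out_channels=64, kernel_size=7, stride=2, padding=3)
```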