Unsupervised Image-To-Image Translation
59 papers with code • 2 benchmarks • 2 datasets
Unsupervised image-to-image translation is the task of performing image-to-image translation without ground-truth image-to-image pairings.
Image-to-image translation is a class of vision and graphics problems whose goal is to learn the mapping between an input image and an output image; in the supervised setting this mapping is learned from a training set of aligned image pairs, which the unsupervised variant must do without.
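Since no aligned pairs are available, there is no pixel-wise reconstruction target to train against. One widely used recipe (popularized by CycleGAN, named here as an illustrative technique rather than any of the methods below) replaces that supervision with an adversarial loss in the target domain plus a cycle-consistency constraint. A minimal PyTorch sketch with toy stand-in networks:

```python
import torch
import torch.nn as nn

def tiny_generator():
    # Toy stand-in for a real translation network (output in [-1, 1]).
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

def tiny_discriminator():
    # Scores how realistic an image looks for domain Y.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.LazyLinear(1))

G_xy, G_yx = tiny_generator(), tiny_generator()  # X -> Y and Y -> X
D_y = tiny_discriminator()
bce = nn.BCEWithLogitsLoss()

x = torch.randn(4, 3, 64, 64)  # batch from domain X
y = torch.randn(4, 3, 64, 64)  # batch from domain Y, NOT paired with x

fake_y = G_xy(x)
# Generator objective: translations should fool D_y, and translating back
# should recover the input (cycle consistency stands in for the missing
# pixel-wise supervision).
g_loss = bce(D_y(fake_y), torch.ones(4, 1)) + 10.0 * (G_yx(fake_y) - x).abs().mean()
# Discriminator objective: real Y scores high, translations score low.
d_loss = bce(D_y(y), torch.ones(4, 1)) + bce(D_y(fake_y.detach()), torch.zeros(4, 1))
# (optimizer steps omitted)
```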
U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner.
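The learnable normalization function in U-GAT-IT is AdaLIN (adaptive layer-instance normalization): a learnable ratio rho blends instance-norm and layer-norm statistics, and the affine parameters gamma and beta are produced elsewhere in the network (in the paper, from a small MLP over attention features). A minimal sketch of that formulation:

```python
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    def __init__(self, num_channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        # Learnable blend ratio; the initial value here is an illustrative choice.
        self.rho = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.9))

    def forward(self, x, gamma, beta):
        # Instance-norm statistics: per sample, per channel, over H and W.
        mu_in = x.mean(dim=(2, 3), keepdim=True)
        var_in = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_in = (x - mu_in) / torch.sqrt(var_in + self.eps)
        # Layer-norm statistics: per sample, over C, H, and W.
        mu_ln = x.mean(dim=(1, 2, 3), keepdim=True)
        var_ln = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_ln = (x - mu_ln) / torch.sqrt(var_ln + self.eps)
        # Blend the two normalizations with rho kept in [0, 1], then apply
        # the externally supplied affine transform.
        rho = self.rho.clamp(0.0, 1.0)
        out = rho * x_in + (1.0 - rho) * x_ln
        return gamma.view(-1, gamma.size(1), 1, 1) * out + beta.view(-1, beta.size(1), 1, 1)

# Usage: gamma and beta come from the network, e.g. an MLP over attention features.
norm = AdaLIN(64)
y = norm(torch.randn(2, 64, 32, 32), torch.ones(2, 64), torch.zeros(2, 64))
```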
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary).
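This abstract describes domain-adversarial training on labeled source plus unlabeled target data. The standard device for implementing it is a gradient reversal layer: the identity on the forward pass, but a sign-flipped (and scaled) gradient on the backward pass, so the feature extractor learns to confuse a domain classifier that is itself trained normally. A minimal sketch; the usage lines with stand-in features are hypothetical:

```python
import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)  # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip and scale the gradient flowing back into the feature extractor.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Hypothetical usage: features from both domains feed a domain classifier
# through the reversal layer; the label loss uses source features only.
feats = torch.randn(8, 128, requires_grad=True)  # stand-in for CNN features
domain_clf = nn.Linear(128, 1)
domain_logits = domain_clf(grad_reverse(feats, lam=0.5))
```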
To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain.
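In this content/style decomposition, the content code is meant to be domain-invariant while style codes live in a domain-specific space with a simple prior, so sampling a random style code yields a random target-domain rendering of the same content. The sketch below shows only the recombination step; the encoder and decoder are hypothetical stand-ins, and real models typically inject style through AdaIN parameters rather than the channel concatenation used here:

```python
import torch
import torch.nn as nn

style_dim = 8
# Hypothetical stand-in for a content encoder.
content_enc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.to_img = nn.Conv2d(32 + style_dim, 3, 3, padding=1)

    def forward(self, content, style):
        # Broadcast the style vector over the spatial grid and decode jointly.
        b, _, h, w = content.shape
        s = style.view(b, style_dim, 1, 1).expand(b, style_dim, h, w)
        return torch.tanh(self.to_img(torch.cat([content, s], dim=1)))

dec_b = Decoder()                  # decoder for the target domain B
x_a = torch.randn(1, 3, 64, 64)    # image from source domain A
c_a = content_enc(x_a)             # domain-invariant content code
s_b = torch.randn(1, style_dim)    # random style code drawn from a Gaussian prior
x_ab = dec_b(c_a, s_b)             # A's content rendered in a random B style
```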
To address this limitation, in this paper we propose a novel Attention-Guided Generative Adversarial Network (AGGAN), which can detect the most discriminative semantic object and minimize changes to unwanted parts in semantic manipulation problems, without using extra data or models.
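The attention-guided idea can be sketched as a generator with two heads: one predicts translated content, the other a foreground attention mask, and the output keeps the input pixels wherever the mask is low, so unattended regions are left unchanged. The architecture and layer sizes below are toy stand-ins, not AGGAN's:

```python
import torch
import torch.nn as nn

class AttentionGuidedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.content_head = nn.Conv2d(32, 3, 3, padding=1)  # translated pixels
        self.attn_head = nn.Conv2d(32, 1, 3, padding=1)     # where to translate

    def forward(self, x):
        h = self.backbone(x)
        content = torch.tanh(self.content_head(h))
        attn = torch.sigmoid(self.attn_head(h))  # in [0, 1]; 1 = translate here
        # Blend: translated content on attended regions, the original image
        # elsewhere, which minimizes changes to unrelated parts of the scene.
        return attn * content + (1.0 - attn) * x

g = AttentionGuidedGenerator()
out = g(torch.randn(1, 3, 64, 64))
```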
Style transfer usually refers to the task of applying color and texture information from a specific style image to a given content image while preserving the structure of the latter.
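One classic way to make "transfer texture, keep structure" concrete is the optimization-based formulation of Gatys et al., which matches Gram matrices of deep features for style while matching the features themselves for content. A minimal sketch of the Gram-matrix style loss, with random tensors standing in for pretrained-CNN activations:

```python
import torch

def gram_matrix(feat):
    # feat: (B, C, H, W) feature maps. The Gram matrix captures channel-wise
    # feature correlations, a texture statistic that discards spatial layout.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(feat_generated, feat_style):
    return ((gram_matrix(feat_generated) - gram_matrix(feat_style)) ** 2).mean()

# Random tensors standing in for activations of one layer of a pretrained CNN.
f_gen = torch.randn(1, 64, 32, 32)
f_sty = torch.randn(1, 64, 32, 32)
loss = style_loss(f_gen, f_sty)
```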