Unsupervised Image-To-Image Translation
69 papers with code • 2 benchmarks • 2 datasets
Unsupervised image-to-image translation is the task of doing image-to-image translation without ground truth image-to-image pairings.
(Image credit: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)
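Since the page credits the cycle-consistent (CycleGAN-style) approach, here is a minimal numeric sketch of the cycle-consistency loss that substitutes for paired supervision. The toy "generators" G and F below are hypothetical placeholder functions, not real networks; actual models use convolutional generators trained jointly with adversarial losses.

```python
import numpy as np

def G(x):
    # Toy mapping from domain A to domain B (placeholder for a learned generator).
    return x + 1.0

def F(y):
    # Toy mapping from domain B back to domain A (placeholder for the inverse generator).
    return y - 1.0

def cycle_consistency_loss(x, y):
    # L_cyc = E[ |F(G(x)) - x|_1 ] + E[ |G(F(y)) - y|_1 ]
    # Penalizes translations that cannot be reversed, which is what lets
    # training proceed without ground-truth image pairs.
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

x = np.random.rand(4, 8, 8, 3)  # batch of images from domain A
y = np.random.rand(4, 8, 8, 3)  # batch of images from domain B
loss = cycle_consistency_loss(x, y)
```

Because the toy G and F are exact inverses, the loss here is near zero; for real generators it is a training signal minimized alongside the adversarial objectives.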
Libraries
Use these libraries to find Unsupervised Image-To-Image Translation models and implementations.
Latest papers
GP-UNIT: Generative Prior for Versatile Unsupervised Image-to-Image Translation
In this paper, we introduce a novel versatile framework, Generative Prior-guided UNsupervised Image-to-image Translation (GP-UNIT), that improves the quality, applicability and controllability of the existing translation models.
Wavelet-based Unsupervised Label-to-Image Translation
Semantic Image Synthesis (SIS) is a subclass of image-to-image translation where a semantic layout is used to generate a photorealistic image.
Domain-knowledge Inspired Pseudo Supervision (DIPS) for Unsupervised Image-to-Image Translation Models to Support Cross-Domain Classification
Cross-domain classification frameworks were developed to handle this data domain shift problem by utilizing unsupervised image-to-image translation models to translate an input image from the unlabeled domain to the labeled domain.
Augmenting Ego-Vehicle for Traffic Near-Miss and Accident Classification Dataset using Manipulating Conditional Style Translation
To develop advanced self-driving systems, many researchers are focusing on detecting all possible traffic risk cases from closed-circuit television (CCTV) and dashboard-mounted cameras.
DGFont++: Robust Deformable Generative Networks for Unsupervised Font Generation
Moreover, we introduce contrastive self-supervised learning to learn a robust style representation for fonts by understanding the similarities and dissimilarities of fonts.
LANIT: Language-Driven Image-to-Image Translation for Unlabeled Data
Existing techniques for image-to-image translation commonly suffer from two critical problems: heavy reliance on per-sample domain annotation and/or inability to handle multiple attributes per image.
Learning to Incorporate Texture Saliency Adaptive Attention to Image Cartoonization
Image cartoonization has recently been dominated by generative adversarial networks (GANs) from the perspective of unsupervised image-to-image translation, in which an inherent challenge is to precisely capture and sufficiently transfer characteristic cartoon styles (e.g., clear edges, smooth color shading, abstract fine structures).
Unsupervised Image-to-Image Translation with Generative Prior
In this work, we present a novel framework, Generative Prior-guided UNsupervised Image-to-image Translation (GP-UNIT), to improve the overall quality and applicability of the translation algorithm.
A Style-aware Discriminator for Controllable Image Translation
Current image-to-image translation methods do not control the output domain beyond the classes seen during training, nor do they interpolate well between different domains, leading to implausible results.
Learning to generate line drawings that convey geometry and semantics
We introduce a geometry loss which predicts depth information from the image features of a line drawing, and a semantic loss which matches the CLIP features of a line drawing with its corresponding photograph.