Multimodal unsupervised image-to-image translation is the task of producing multiple diverse translations in a target domain from a single image in a source domain.
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.
To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain.
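The recombination step described above can be sketched in a toy form: a content code is extracted from the source image, a style code is sampled from the target domain's prior, and a decoder combines the two. All function names below are illustrative stand-ins, not from any specific codebase, and the "encoder" and "decoder" are deliberately trivial; the point is only the structure of the sampling-and-recombination loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_content(image):
    # Toy content encoder: a downsampled copy stands in for the content code.
    return image[::2, ::2]

def sample_style(style_dim=8):
    # Style code drawn from the target domain's style prior, assumed N(0, I).
    return rng.standard_normal(style_dim)

def decode(content, style):
    # Toy decoder: modulates the content code with a scalar from the style code.
    return content * (1.0 + 0.1 * style.mean())

image = rng.standard_normal((32, 32))
content = encode_content(image)

# Sampling several style codes yields several distinct translations
# of the same content -- the multimodal aspect of the task.
translations = [decode(content, sample_style()) for _ in range(3)]
```

In an actual model the encoder, decoder, and style prior are learned jointly; here they are fixed placeholders so the control flow is visible.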
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.
Thus, in the above example, for every person without glasses we can create a version wearing the glasses observed in any other face image.