245 papers with code • 30 benchmarks • 21 datasets
Image-to-image translation is the task of taking images from one domain and transforming them so they have the style (or characteristics) of images from another domain.
For generating high-resolution solar images we use the Pix2PixHD and Pix2Pix algorithms.
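Pix2Pix trains the generator with a conditional adversarial loss plus an L1 reconstruction term weighted by a coefficient lambda. A minimal NumPy sketch of that combined generator objective is below; the function name, the toy 4x4 "images", and the constant discriminator scores are illustrative assumptions, not code from any of the papers listed here.

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake, target, lam=100.0):
    """Toy Pix2Pix-style generator objective (assumed form):
    adversarial term (push discriminator scores on fakes toward 'real')
    plus a lambda-weighted L1 term pulling the output toward the target."""
    eps = 1e-8
    adv = -np.mean(np.log(d_fake + eps))   # fool the discriminator
    l1 = np.mean(np.abs(fake - target))    # stay close to the paired target image
    return adv + lam * l1

# Hypothetical 4x4 single-channel "images" standing in for a translated pair.
rng = np.random.default_rng(0)
target = rng.random((4, 4))
fake = target + 0.01 * rng.standard_normal((4, 4))
d_fake = np.full((4, 4), 0.9)  # per-patch discriminator scores (PatchGAN-style grid)

loss = pix2pix_generator_loss(d_fake, fake, target)
```

If the generator output matched the target exactly and the discriminator were fully fooled (scores of 1.0), both terms would vanish and the loss would be zero; the lambda weight controls how strongly the L1 term dominates early training.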
Moreover, we propose a way to adapt the FUNIT framework to leverage the object-detection capabilities seen in other methods.
We model the conditional distribution of the latent encodings by modeling the auto-regressive distributions with an efficient multi-scale normalizing flow, where each conditioning factor affects image synthesis at its respective resolution scale.
Specifically, we extend self-supervised learning from traditional representation learning, which works on images from a single domain, to domain invariant representation learning, which works on images from two different domains by utilizing an image-to-image translation network.
While attention-based transformer networks achieve unparalleled success in nearly all language tasks, the large number of tokens coupled with the quadratic activation memory usage makes them prohibitive for visual tasks.
This work is the first to employ and adapt the image-to-image translation concept based on conditional generative adversarial networks (cGAN) towards learning a forward and an inverse solution operator of partial differential equations (PDEs).
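Framing a PDE solution operator as image-to-image translation means the training pairs are themselves "images": an input field and the field the PDE evolves it into. As a hedged illustration of what such a paired dataset could look like, the sketch below builds a toy forward operator for periodic heat diffusion with an explicit finite-difference step; the function, grid size, and parameters are hypothetical and not taken from the cited work.

```python
import numpy as np

def heat_forward(u0, alpha=0.1, steps=10):
    """Toy forward operator: explicit finite-difference diffusion on a
    2-D periodic grid. Pairs (u0, heat_forward(u0)) are the kind of
    input/output 'images' a cGAN translator could be trained on."""
    u = u0.copy()
    for _ in range(steps):
        # Discrete Laplacian with periodic boundaries via np.roll.
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + alpha * lap
    return u

rng = np.random.default_rng(1)
u0 = rng.random((16, 16))   # hypothetical initial temperature field
u1 = heat_forward(u0)       # the diffused field the translator should predict
```

Diffusion with periodic boundaries conserves the mean of the field while smoothing it out, which gives a quick sanity check on any learned surrogate: its outputs should preserve the input mean and reduce spatial variance.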
We consider an approach to depth map enhancement based on learning from unpaired data.
Generation of maps from satellite images is conventionally done by a range of tools.
In this paper, we propose a new transfer-learning method for I2I translation (TransferI2I).