19 papers with code • 0 benchmarks • 0 datasets
Finding the latent code in the GAN latent space corresponding to a natural image.
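The definition above can be made concrete with a minimal sketch of optimization-based inversion: starting from a random latent code, gradient descent minimizes the reconstruction error between the generator's output and the target image. The linear "generator" below is purely an illustrative assumption (real GAN generators are deep nonlinear networks and the loss is non-convex), but it lets the gradient be written analytically:

```python
import numpy as np

# Toy linear "generator": maps a 4-dim latent code z to a 16-dim "image".
# This stands in for a real GAN generator, which is a deep network.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4))

def generate(z):
    return W @ z

# Target image produced by a latent code we will try to recover.
z_true = rng.standard_normal(4)
x_target = generate(z_true)

# Optimization-based inversion: gradient descent on the reconstruction
# loss L(z) = ||G(z) - x||^2, starting from a random latent code.
z = rng.standard_normal(4)
lr = 0.5 / np.linalg.norm(W, 2) ** 2  # safe step size for this quadratic loss
for _ in range(2000):
    residual = generate(z) - x_target
    grad = 2.0 * W.T @ residual  # analytic gradient of the squared error
    z -= lr * grad

print(np.allclose(generate(z), x_target, atol=1e-4))  # prints True
```

With a deep generator, the gradient would instead come from automatic differentiation, and methods differ in their loss (pixel, perceptual) and in how they regularize `z` to stay in-domain.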
For successful semantic editing of real images, it is critical for a GAN inversion method to find an in-domain latent code that aligns with the domain of a pre-trained GAN model.
Seamlessly blending features from multiple images is extremely challenging because complex interactions among lighting, geometry, and partial occlusion couple different parts of the image.
In contrast to previous fully supervised approaches, in this paper we present ShapeInversion, which introduces Generative Adversarial Network (GAN) inversion to shape completion for the first time.
Existing image outpainting methods pose the problem as a conditional image-to-image translation task, often generating repetitive structures and textures by replicating the content available in the input image.
An important research topic in image generative models is disentangling spatial content from style so that each can be controlled separately.
In this work, we investigate regression into the latent space as a probe to understand the compositional properties of GANs.