Image generation (synthesis) is the task of generating new images that resemble the distribution of an existing dataset.
In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation, and other types of image generation, refer to the subtasks.
(Image credit: StyleGAN)
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.
Ranked #2 on Image Generation on Stanford Dogs
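InfoGAN's key idea is to split the generator input into incompressible noise z and a structured code c, then maximize a variational lower bound on the mutual information I(c; G(z, c)) via an auxiliary network Q that tries to recover c from the generated image. A minimal, dependency-free sketch of that bound (the function name and the toy numbers are illustrative, not from the paper's code):

```python
import math

def infogan_mi_lower_bound(q_probs, c_index, prior_entropy):
    """Variational lower bound L_I <= I(c; G(z, c)) used by InfoGAN.

    q_probs: categorical probabilities output by the auxiliary network Q
             for the latent code, given a generated image.
    c_index: index of the code c that was actually fed to the generator.
    prior_entropy: entropy H(c) of the fixed code prior (a constant).
    """
    # L_I = E[log Q(c | G(z, c))] + H(c)
    return math.log(q_probs[c_index]) + prior_entropy

# Example: a uniform 4-way categorical code has entropy log 4.
h_c = math.log(4)
# If Q confidently recovers the injected code, the bound approaches H(c).
bound = infogan_mi_lower_bound([0.97, 0.01, 0.01, 0.01], 0, h_c)
```

The bound is tight exactly when Q matches the true posterior, which is why training Q jointly with the generator encourages the code dimensions to become disentangled, recoverable factors.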
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
Ranked #11 on Conditional Image Generation on CIFAR-10
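One of the training procedures proposed in that work is feature matching: instead of maximizing the discriminator's output directly, the generator matches the mean activations of an intermediate discriminator layer between real and generated batches. A minimal sketch with plain lists standing in for discriminator features (an illustrative simplification, not the paper's implementation):

```python
def feature_matching_loss(real_feats, fake_feats):
    """Squared L2 distance between the mean discriminator features of a
    real batch and a generated batch (the feature-matching objective)."""
    n, m = len(real_feats), len(fake_feats)
    dim = len(real_feats[0])
    real_mean = [sum(f[i] for f in real_feats) / n for i in range(dim)]
    fake_mean = [sum(f[i] for f in fake_feats) / m for i in range(dim)]
    return sum((r - f) ** 2 for r, f in zip(real_mean, fake_mean))
```

Matching batch statistics rather than fooling the discriminator on individual samples was proposed to stabilize training and reduce mode collapse.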
In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks.
Ranked #10 on Conditional Image Generation on ImageNet 128x128
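The long-range dependency modeling comes from a self-attention layer over all spatial locations of a feature map: each position attends to every other position, and the result is added back through a learned residual scale gamma (initialized to 0 in the paper). A simplified sketch; in SAGAN the queries, keys, and values come from learned 1x1 convolutions, whereas here the raw features play all three roles to keep the example dependency-free:

```python
import math

def self_attention(feats, gamma=0.1):
    """Simplified SAGAN-style self-attention over N spatial locations.

    feats: list of N feature vectors (one per spatial position).
    gamma: learned residual scale; the paper initializes it to 0 so the
           network starts out relying on local features only.
    """
    n = len(feats)
    out = []
    for i in range(n):
        # dot-product similarity between position i and every position j
        scores = [sum(a * b for a, b in zip(feats[i], feats[j])) for j in range(n)]
        mx = max(scores)
        exps = [math.exp(s - mx) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]  # attention map: any j can influence i
        attended = [sum(w * feats[j][d] for j, w in enumerate(weights))
                    for d in range(len(feats[i]))]
        # residual connection scaled by gamma
        out.append([x + gamma * a for x, a in zip(feats[i], attended)])
    return out
```

Because every position can attend to every other, distant image regions (e.g. two legs of a dog) can be generated consistently, which plain convolutions with small receptive fields struggle with.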
Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible.
Ranked #1 on Image Generation on LSUN Bedroom 64 x 64
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature.
Ranked #1 on Image Generation on LSUN Bedroom 256 x 256
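The borrowing from the style transfer literature is adaptive instance normalization (AdaIN): at each generator layer, feature channels are normalized to zero mean and unit variance, then scaled and shifted by per-channel style parameters produced from the latent code. A minimal single-channel sketch (illustrative, not the StyleGAN implementation):

```python
def adain(channel, style_scale, style_bias, eps=1e-8):
    """Adaptive instance normalization on one feature channel: normalize
    the channel's activations, then apply a style-derived scale and bias."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((x - mean) ** 2 for x in channel) / n
    return [style_scale * (x - mean) / (var + eps) ** 0.5 + style_bias
            for x in channel]
```

Because normalization wipes out the previous layer's statistics before each new style is applied, styles injected at different resolutions control different scales of the image, which is what enables StyleGAN's coarse-to-fine style mixing.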
It this paper we revisit the fast stylization method introduced in Ulyanov et.
We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation.
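The core operation in a VQ-VAE is the quantization step: each encoder output vector is snapped to its nearest entry in a learned codebook, and the decoder only ever sees codebook embeddings. A minimal sketch of that nearest-neighbor lookup (the function name is illustrative; a real model also needs the straight-through gradient and the codebook/commitment losses):

```python
def quantize(vec, codebook):
    """VQ-VAE quantization: replace an encoder output vector with its
    nearest codebook embedding; returns (index, embedding)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda k: sqdist(vec, codebook[k]))
    return idx, codebook[idx]
```

Because each spatial position collapses to a discrete index, a large image becomes a grid of symbols, over which a powerful autoregressive prior can be trained for large-scale generation.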
We show that this property can be induced by using a relativistic discriminator, which estimates the probability that given real data is more realistic than randomly sampled fake data.
Ranked #1 on Image Generation on CAT 256x256
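In the relativistic standard GAN formulation, the discriminator scores a real/fake pair and applies a sigmoid to the difference of raw critic outputs, so it directly models P(real is more realistic than fake). A minimal sketch of the paired losses (scalar critic outputs stand in for full network passes):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rsgan_d_loss(c_real, c_fake):
    """Relativistic standard GAN discriminator loss: the critic output on
    real data should exceed that on fake data, since
    D(x_r, x_f) = sigmoid(C(x_r) - C(x_f))."""
    return -math.log(sigmoid(c_real - c_fake))

def rsgan_g_loss(c_real, c_fake):
    """Generator loss: symmetrically push fake critic scores above real ones."""
    return -math.log(sigmoid(c_fake - c_real))
```

Unlike the standard GAN loss, the generator's gradient here depends on real samples too, which is argued to stabilize training.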