Image Generation

1979 papers with code • 85 benchmarks • 67 datasets

Image Generation (synthesis) is the task of generating new images that resemble the distribution of an existing dataset.

  • Unconditional generation refers to sampling images from the learned data distribution without any conditioning signal, i.e. drawing $y \sim p(y)$.
  • Conditional image generation (subtask) refers to generating samples conditioned on side information such as a class label $x$, i.e. drawing $y \sim p(y|x)$; a minimal sketch of the difference follows this list.
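The practical difference is whether the generator receives a conditioning input alongside the noise vector. Below is a minimal PyTorch-style sketch; the `Generator` class, its layer sizes, and the one-hot label encoding are illustrative assumptions rather than any particular published model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy generator: maps a noise vector (plus an optional class label) to a flat image."""
    def __init__(self, z_dim=128, num_classes=0, img_dim=3 * 32 * 32):
        super().__init__()
        self.num_classes = num_classes
        in_dim = z_dim + num_classes  # conditional models also consume a label encoding
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, img_dim), nn.Tanh(),
        )

    def forward(self, z, y=None):
        if self.num_classes > 0:
            # conditional: sample from p(image | label) by appending a one-hot label
            z = torch.cat([z, F.one_hot(y, self.num_classes).float()], dim=1)
        return self.net(z)

# Unconditional sampling: images ~ p(y)
g_uncond = Generator(num_classes=0)
samples = g_uncond(torch.randn(16, 128))

# Conditional sampling: images ~ p(y | x) for a batch of class labels x
g_cond = Generator(num_classes=10)
samples = g_cond(torch.randn(16, 128), y=torch.randint(0, 10, (16,)))
```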

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.

(Image credit: StyleGAN)

Most implemented papers

Generative Adversarial Text to Image Synthesis

reedscot/icml2016 17 May 2016

Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal.

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

eriklindernoren/PyTorch-GAN NeurIPS 2016

This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.
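Concretely, InfoGAN feeds a structured latent code $c$ to the generator alongside the noise and maximizes a variational lower bound on the mutual information between $c$ and the generated image via an auxiliary head $Q$. For a categorical code that bound reduces to a cross-entropy term; the sketch below is illustrative, with stand-in tensors in place of real generator and $Q$ networks.

```python
import torch
import torch.nn.functional as F

def info_loss(q_logits, c_true):
    """Variational lower bound on I(c; G(z, c)) for a categorical code:
    cross-entropy between Q's prediction and the code fed to the generator."""
    return F.cross_entropy(q_logits, c_true)

batch, num_codes = 32, 10
c_true = torch.randint(0, num_codes, (batch,))                 # code sampled for the generator
q_logits = torch.randn(batch, num_codes, requires_grad=True)   # stand-in for Q(G(z, c))
loss = info_loss(q_logits, c_true)  # added (with a weight) to both the G and Q objectives
loss.backward()
```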

Spectral Normalization for Generative Adversarial Networks

pfnet-research/sngan_projection ICLR 2018

One of the challenges in the study of generative adversarial networks is the instability of its training.
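The paper's remedy is spectral normalization: each discriminator weight matrix is rescaled by an estimate of its largest singular value, bounding the layer's Lipschitz constant. PyTorch ships this as `torch.nn.utils.spectral_norm`; the toy discriminator below is an illustrative assumption, not the architecture from the paper.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Toy discriminator with spectral normalization on every learnable layer,
# following the paper's recipe of bounding the discriminator's Lipschitz
# constant to stabilize training (the architecture itself is an assumption).
discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(128 * 8 * 8, 1)),  # real/fake score
)

scores = discriminator(torch.randn(4, 3, 32, 32))  # shape: (4, 1)
```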

Conditional Image Synthesis With Auxiliary Classifier GANs

eriklindernoren/PyTorch-GAN ICML 2017

We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models.
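In an AC-GAN the discriminator outputs both a real/fake score and auxiliary class logits, and a classification loss on real and generated images is added to the usual adversarial objective. A minimal sketch, with dimensions chosen purely for illustration:

```python
import torch
import torch.nn as nn

class ACDiscriminator(nn.Module):
    """Illustrative AC-GAN discriminator: one head scores real vs. fake,
    an auxiliary head predicts the class label."""
    def __init__(self, feat_dim=256, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU()
        )
        self.adv_head = nn.Linear(feat_dim, 1)            # real vs. fake score
        self.cls_head = nn.Linear(feat_dim, num_classes)  # auxiliary class logits

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.cls_head(h)

d = ACDiscriminator()
adv, cls = d(torch.randn(8, 3, 32, 32))
# Training adds a cross-entropy loss on `cls` (for real and generated images)
# to the usual adversarial loss on `adv`.
```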

Density estimation using Real NVP

tensorflow/models 27 May 2016

Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning.
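Real NVP keeps the likelihood exact by composing affine coupling layers: half of the variables pass through unchanged and parameterize a scale-and-shift of the other half, so the Jacobian log-determinant is just the sum of the predicted log-scales. A simplified sketch on flat vectors; the paper's checkerboard/channel masks and multi-scale architecture are omitted.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Simplified Real NVP-style affine coupling on a flat vector: the first
    half passes through unchanged and parameterizes an affine transform of
    the second half (network sizes are assumptions)."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 128), nn.ReLU(),
            nn.Linear(128, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)       # keep scales numerically well-behaved
        y2 = x2 * torch.exp(log_s) + t
        log_det = log_s.sum(dim=1)      # log |det J| of this layer
        return torch.cat([x1, y2], dim=1), log_det

layer = AffineCoupling(dim=6)
y, log_det = layer(torch.randn(4, 6))
# Exact log-likelihood: log p(x) = log p_base(y) + sum of log_det over all layers.
```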

Large Scale GAN Training for High Fidelity Natural Image Synthesis

ajbrock/BigGAN-PyTorch ICLR 2019

Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal.
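One sampling-time idea from BigGAN is the truncation trick: latent entries whose magnitude exceeds a threshold are redrawn, trading sample diversity for fidelity. A rough sketch (the threshold value here is an arbitrary assumption):

```python
import torch

def truncated_noise(batch, z_dim, truncation=0.5):
    """Redraw latent entries whose magnitude exceeds `truncation`, so samples
    come from a truncated normal (threshold chosen here for illustration)."""
    z = torch.randn(batch, z_dim)
    while True:
        mask = z.abs() > truncation
        if not mask.any():
            return z
        z[mask] = torch.randn_like(z[mask])  # resample only out-of-range entries

z = truncated_noise(batch=16, z_dim=128)  # lower truncation -> higher fidelity, less diversity
```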

High-Resolution Image Synthesis with Latent Diffusion Models

compvis/stable-diffusion CVPR 2022

By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond.
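These latent diffusion models run the denoising process in the latent space of a pretrained autoencoder, which is what makes high-resolution synthesis affordable. A common way to sample the released checkpoints is through the Hugging Face `diffusers` library rather than the compvis/stable-diffusion repo itself; the checkpoint id and prompt below are illustrative, and a CUDA GPU plus downloaded weights are assumed.

```python
# Sampling sketch using Hugging Face `diffusers` (assumed installed, with the
# model weights available locally or downloadable) on a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # illustrative checkpoint id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```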

Training Generative Adversarial Networks with Limited Data

NVlabs/stylegan2-ada NeurIPS 2020

We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
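The mechanism behind this result is adaptive discriminator augmentation (ADA): the probability of augmenting discriminator inputs is tuned on the fly from a heuristic that measures discriminator overfitting. The controller below is only a schematic in that spirit; the target value and step size are assumptions, not the exact NVlabs logic.

```python
def update_augment_p(p, d_real_sign_mean, target=0.6, step=0.01):
    """Schematic ADA controller. `d_real_sign_mean` approximates
    r_t = E[sign(D(real))]; if the discriminator is too confident on real
    images (overfitting), raise the augmentation probability, else lower it."""
    if d_real_sign_mean > target:
        return min(p + step, 1.0)
    return max(p - step, 0.0)

# Example: called once per training step with the running estimate of r_t.
p = 0.0
p = update_augment_p(p, d_real_sign_mean=0.72)  # overfitting signal -> p increases
```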

Glow: Generative Flow with Invertible 1x1 Convolutions

openai/glow NeurIPS 2018

Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis.
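Glow's contribution is to replace the fixed channel permutations of earlier flows with a learned invertible 1x1 convolution, whose log-determinant contribution is simply h·w·log|det W|. A simplified sketch without the LU-decomposed parameterization used in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Inv1x1Conv(nn.Module):
    """Illustrative Glow-style invertible 1x1 convolution: a learned channel
    mixing whose log-determinant is h * w * log|det W|."""
    def __init__(self, channels):
        super().__init__()
        w, _ = torch.linalg.qr(torch.randn(channels, channels))  # random rotation init
        self.weight = nn.Parameter(w)

    def forward(self, x):
        _, c, h, w = x.shape
        y = F.conv2d(x, self.weight.view(c, c, 1, 1))
        log_det = h * w * torch.slogdet(self.weight)[1]
        return y, log_det

    def inverse(self, y):
        c = y.shape[1]
        return F.conv2d(y, torch.inverse(self.weight).view(c, c, 1, 1))

layer = Inv1x1Conv(channels=8)
y, log_det = layer(torch.randn(2, 8, 16, 16))
x_rec = layer.inverse(y)  # recovers the input up to numerical error
```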

Semantic Image Synthesis with Spatially-Adaptive Normalization

NVlabs/SPADE CVPR 2019

Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers.
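SPADE instead injects the layout through the normalization layers: activations are normalized without learned affine parameters, then modulated by per-pixel scale and bias maps predicted from the segmentation map. A condensed sketch; the layer sizes and the choice of plain BatchNorm are assumptions, not the reference NVlabs/SPADE code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Condensed spatially-adaptive normalization block."""
    def __init__(self, channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)  # parameter-free normalization
        self.shared = nn.Sequential(nn.Conv2d(label_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)  # per-pixel scale
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)   # per-pixel bias

    def forward(self, x, segmap):
        # Resize the (e.g. one-hot) layout to the feature resolution, then modulate.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

spade = SPADE(channels=64, label_channels=10)
out = spade(torch.randn(2, 64, 32, 32), torch.randn(2, 10, 256, 256))  # random stand-in layout
```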