Image Generation

973 papers with code • 77 benchmarks • 59 datasets

Image generation (synthesis) is the task of generating new images, typically by learning the distribution of an existing dataset and sampling from it.

  • Unconditional generation refers to sampling new images from the learned data distribution without any side information, i.e. modeling $p(x)$, where $x$ is an image.
  • Conditional image generation (subtask) refers to sampling images conditioned on side information such as a class label $y$, i.e. modeling $p(x|y)$.
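
The distinction can be made concrete with a toy model (writing images as $x$ and labels as $y$): below, a hypothetical per-class Gaussian stands in for a learned generator. This is an illustrative sketch, not a real generative model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned generator: one 1-D Gaussian per
# class label. Class 0 "images" cluster near -2, class 1 near +2.
class_means = {0: -2.0, 1: 2.0}
class_prior = [0.5, 0.5]

def sample_unconditional():
    """Sample from p(x): draw a label from the prior, then an image."""
    y = rng.choice([0, 1], p=class_prior)
    return rng.normal(class_means[y], 0.1)

def sample_conditional(y):
    """Sample from p(x | y): the label is fixed by the caller."""
    return rng.normal(class_means[y], 0.1)

ones = [sample_conditional(1) for _ in range(100)]
print(min(ones) > 0)  # conditional samples for class 1 all land near +2
```

Unconditional sampling marginalizes over the label; conditional sampling lets the user pick it.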

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.

(Image credit: StyleGAN)

Most implemented papers

Analyzing and Improving the Image Quality of StyleGAN

NVlabs/stylegan2 CVPR 2020

Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.

Wasserstein GAN

eriklindernoren/PyTorch-GAN 26 Jan 2017

We introduce a new algorithm named WGAN, an alternative to traditional GAN training.
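
The core of WGAN is a critic trained to maximize the gap between its scores on real and generated samples, with weight clipping to keep it approximately 1-Lipschitz. A minimal sketch, assuming a linear critic $f(x) = w \cdot x$ for illustration (the paper uses a neural network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear critic f(x) = w @ x, standing in for a neural network.
w = rng.normal(size=4)

def critic_loss(w, real, fake):
    # The critic maximizes E[f(real)] - E[f(fake)]; as a loss we negate it.
    return -(real @ w).mean() + (fake @ w).mean()

real = rng.normal(1.0, 1.0, size=(64, 4))
fake = rng.normal(0.0, 1.0, size=(64, 4))

# One critic step: gradient descent on the loss, then weight clipping,
# which WGAN uses to enforce an approximate Lipschitz constraint.
grad_w = -real.mean(axis=0) + fake.mean(axis=0)
w = np.clip(w - 0.05 * grad_w, -0.01, 0.01)
print(np.abs(w).max() <= 0.01)
```

The clipping range 0.01 matches the paper's default; the learning rate here is an arbitrary choice for the sketch.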

Improved Training of Wasserstein GANs

igul222/improved_wgan_training NeurIPS 2017

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability.
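
WGAN-GP replaces weight clipping with a gradient penalty that pushes the critic's input-gradient norm toward 1 at points interpolated between real and fake samples. A sketch of that penalty term, assuming a linear critic $f(x) = w \cdot x$ whose input gradient is simply $w$ (a real critic would need autograd):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear critic with gradient w everywhere; ||w|| = 0.5, so the
# gradient norm is not yet 1 and the penalty is nonzero.
w = np.array([0.3, 0.4])

real = rng.normal(1.0, 1.0, size=(8, 2))
fake = rng.normal(0.0, 1.0, size=(8, 2))
eps = rng.uniform(size=(8, 1))
x_hat = eps * real + (1 - eps) * fake  # random interpolates

# Gradient of f at each interpolate is w (linear critic).
grad_norm = np.linalg.norm(np.broadcast_to(w, x_hat.shape), axis=1)
penalty = ((grad_norm - 1.0) ** 2).mean()
print(round(penalty, 2))  # (0.5 - 1)^2 = 0.25 at every interpolate
```

In practice the penalty is scaled by a coefficient (λ = 10 in the paper) and added to the critic loss.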

A Style-Based Generator Architecture for Generative Adversarial Networks

NVlabs/stylegan CVPR 2019

We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature.

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

bioinf-jku/TTUR NeurIPS 2017

Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible.
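
The two time-scale update rule (TTUR) simply gives the discriminator and generator separate learning rates. A toy sketch with quadratic objectives standing in for the adversarial losses; the 4e-4 / 1e-4 pair is a commonly used setting, assumed here for illustration:

```python
# TTUR sketch: separate step sizes put the discriminator on a faster
# time scale than the generator.
lr_d, lr_g = 4e-4, 1e-4

d, g = 0.0, 0.0
target = 1.0
for _ in range(1000):
    # Toy quadratic losses standing in for the real adversarial game.
    d -= lr_d * 2 * (d - target)  # discriminator step (faster)
    g -= lr_g * 2 * (g - d)       # generator step (slower, chases d)
print(d > g)  # the discriminator approaches its target faster
```

The paper pairs this rule with Adam and proves convergence to a local Nash equilibrium under it; the quadratics above only illustrate the two speeds.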

Self-Attention Generative Adversarial Networks

brain-research/self-attention-gan arXiv 2018

In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks.
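
The key ingredient is an attention map in which every spatial position attends to every other, so distant regions can coordinate in a single step. A minimal numpy sketch of that computation; the shapes and projection sizes are illustrative assumptions, not the exact layer from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 8, 4, 4   # channels, height, width of a feature map
N = H * W           # number of spatial positions
x = rng.normal(size=(C, N))

# 1x1-convolution-style projections to query, key, and value features.
Wq = rng.normal(size=(C // 2, C))
Wk = rng.normal(size=(C // 2, C))
Wv = rng.normal(size=(C, C))

q, k, v = Wq @ x, Wk @ x, Wv @ x
scores = q.T @ k                                   # (N, N) affinities
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)            # softmax over keys
out = v @ attn.T                                   # attended features
print(out.shape)  # (8, 16): same channels, one vector per position
```

In SAGAN the attended output is added back to the input through a learned scale, so the network can interpolate between local convolution and global attention.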

Improved Techniques for Training GANs

openai/improved-gan NeurIPS 2016

We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.

SinGAN: Learning a Generative Model from a Single Natural Image

tamarott/SinGAN ICCV 2019

We introduce SinGAN, an unconditional generative model that can be learned from a single natural image.

Generative Adversarial Text to Image Synthesis

hanzhanggit/StackGAN 17 May 2016

Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal.