Image Generation

659 papers with code • 58 benchmarks • 41 datasets

Image generation (synthesis) is the task of generating new images that follow the distribution of an existing dataset.

  • Unconditional generation refers to sampling new images from the learned data distribution alone, i.e. modeling $p(y)$, where $y$ denotes an image.
  • Conditional image generation (subtask) refers to sampling new images conditioned on side information such as a class label, i.e. modeling $p(y|x)$, where $x$ denotes the conditioning label.

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
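
To make the distinction concrete, the snippet below contrasts the two settings with two toy PyTorch generators. The module names, layer sizes, and architecture are purely illustrative assumptions, not taken from any particular paper: the unconditional generator maps noise alone to an image, while the conditional one also consumes a class label.

```python
import torch
import torch.nn as nn

# Toy generators, purely illustrative: y is the generated image, x a class label.
latent_dim, num_classes, img_dim = 128, 10, 3 * 32 * 32

class UnconditionalGenerator(nn.Module):
    """Models p(y): maps noise z to an image with no conditioning information."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

class ConditionalGenerator(nn.Module):
    """Models p(y|x): the label x is embedded and concatenated with the noise z."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(num_classes, latent_dim)
        self.net = nn.Sequential(nn.Linear(2 * latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim), nn.Tanh())

    def forward(self, z, x):
        return self.net(torch.cat([z, self.embed(x)], dim=1)).view(-1, 3, 32, 32)

z = torch.randn(4, latent_dim)
unconditional_samples = UnconditionalGenerator()(z)                           # p(y)
conditional_samples = ConditionalGenerator()(z, torch.tensor([0, 1, 2, 3]))   # p(y|x)
```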

(Image credit: StyleGAN)

Greatest papers with code

Density estimation using Real NVP

tensorflow/models 27 May 2016

Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning.

Density Estimation Image Generation

Improved Techniques for Training GANs

tensorflow/models NeurIPS 2016

We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.

Conditional Image Generation Semi-Supervised Image Classification

Reformer: The Efficient Transformer

huggingface/transformers ICLR 2020

Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences.

Image Generation Language Modelling +1

Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image

google-research/google-research 17 Dec 2020

We introduce the problem of perpetual view generation -- long-range generation of novel views corresponding to an arbitrarily long camera trajectory given a single image.

Image Generation Video Generation

Rethinking Attention with Performers

google-research/google-research ICLR 2021

We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness.

Image Generation
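
As a rough illustration of the idea, the sketch below implements kernelized attention with positive random features so that cost scales linearly in sequence length. It omits the orthogonal random features and feature redrawing of the paper's actual FAVOR+ estimator; the function names, feature count, and scaling choices are assumptions made for the example.

```python
import torch

def positive_random_features(x, proj):
    # phi(x) = exp(x W^T - ||x||^2 / 2) / sqrt(m): one form of positive random
    # features whose dot products approximate the softmax kernel exp(q . k).
    m = proj.shape[0]
    return torch.exp(x @ proj.t() - x.pow(2).sum(-1, keepdim=True) / 2) / m ** 0.5

def linear_attention(q, k, v, num_features=256):
    # q, k, v: (batch, seq_len, dim). Memory and time are linear in seq_len.
    dim = q.shape[-1]
    q, k = q / dim ** 0.25, k / dim ** 0.25       # fold in the usual 1/sqrt(d) scaling
    proj = torch.randn(num_features, dim)         # random projection shared by q and k
    q_prime = positive_random_features(q, proj)   # (B, L, m)
    k_prime = positive_random_features(k, proj)   # (B, L, m)
    kv = torch.einsum('blm,bld->bmd', k_prime, v)             # (B, m, d), no L x L matrix
    normalizer = q_prime @ k_prime.sum(dim=1).unsqueeze(-1)   # (B, L, 1)
    return torch.einsum('blm,bmd->bld', q_prime, kv) / (normalizer + 1e-6)

q = k = v = torch.randn(2, 1024, 64)
out = linear_attention(q, k, v)   # approximates softmax attention output
```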

A General and Adaptive Robust Loss Function

google-research/google-research CVPR 2019

We present a generalization of the Cauchy/Lorentzian, Geman-McClure, Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2 loss functions.

Image Generation
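
For reference, the general-case form of this loss, $\rho(x, \alpha, c) = \frac{|\alpha-2|}{\alpha}\left(\left(\frac{(x/c)^2}{|\alpha-2|} + 1\right)^{\alpha/2} - 1\right)$, fits in a few lines. The sketch below is a minimal illustration: the removable singularities at $\alpha = 0$ and $\alpha = 2$ are handled as limits in the paper and are deliberately not covered here.

```python
import torch

def general_robust_loss(x, alpha, c=1.0):
    """General-case form of the adaptive robust loss.

    alpha = 1 recovers Charbonnier/pseudo-Huber, alpha = -2 Geman-McClure,
    alpha -> -inf Welsch; alpha = 0 (Cauchy) and alpha = 2 (L2) are limits
    of this expression and are not handled by this sketch.
    """
    if alpha in (0.0, 2.0):
        raise ValueError("alpha = 0 and alpha = 2 are limits; not handled here")
    squared = (x / c) ** 2
    abs_am2 = abs(alpha - 2.0)
    return (abs_am2 / alpha) * ((squared / abs_am2 + 1.0) ** (alpha / 2.0) - 1.0)

residuals = torch.linspace(-5, 5, steps=11)
charbonnier_like = general_robust_loss(residuals, alpha=1.0)    # sqrt((x/c)^2 + 1) - 1
geman_mcclure = general_robust_loss(residuals, alpha=-2.0)      # 2 (x/c)^2 / ((x/c)^2 + 4)
```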

Self-Attention Generative Adversarial Networks

jantic/DeOldify arXiv 2018

In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks.

Conditional Image Generation
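
The sketch below shows a simplified self-attention block over the spatial positions of a feature map in the spirit of SAGAN: 1x1 convolutions produce query/key/value maps, attention is computed between all positions, and the result is added back through a learnable, zero-initialised gamma. The channel-reduction factor and other details are illustrative assumptions rather than the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a (B, C, H, W) feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))   # starts as an identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                      # (B, C//8, HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)                 # (B, HW, HW): all-pairs attention
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

features = torch.randn(2, 64, 16, 16)
attended = SelfAttention2d(64)(features)   # same shape as the input
```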

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

jantic/DeOldify NeurIPS 2017

Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible.

Image Generation

Training with Quantization Noise for Extreme Model Compression

pytorch/fairseq ICLR 2021

A standard solution is to train networks with Quantization Aware Training, where the weights are quantized during training and the gradients approximated with the Straight-Through Estimator.

Image Generation Model Compression
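
The Straight-Through Estimator mentioned in the abstract is easy to sketch: quantize the weights in the forward pass, but detach the rounding from the backward graph so gradients reach the full-precision weights as if quantization were the identity. The snippet below shows plain quantization-aware training with an assumed symmetric uniform quantizer; Quant-Noise itself quantizes only a random subset of weights at each step.

```python
import torch

def fake_quantize_ste(w, num_bits=8):
    """Symmetric uniform fake-quantization with a straight-through estimator."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max() / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    # Forward pass uses w_q; backward sees the identity because the rounding
    # term is detached from the autograd graph.
    return w + (w_q - w).detach()

w = torch.randn(4, 4, requires_grad=True)
loss = fake_quantize_ste(w).pow(2).sum()
loss.backward()   # gradients reach w despite the non-differentiable round()
```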

Aggregating Nested Transformers

rwightman/pytorch-image-models 26 May 2021

In this work, we explore the idea of nesting basic local transformers on non-overlapping image blocks and aggregating them in a hierarchical manner.

Image Classification Image Generation
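
A minimal sketch of the nesting idea: partition the feature map into non-overlapping blocks, run a standard transformer layer within each block, and aggregate blocks into a coarser grid. The block size, dimensions, and mean-pooling aggregation below are illustrative simplifications; the paper aggregates with a small convolution and pooling rather than a plain mean.

```python
import torch
import torch.nn as nn

def blockify(x, block_size):
    """Split a (B, H, W, C) feature map into non-overlapping blocks.

    Returns (B * num_blocks, block_size * block_size, C) so that a standard
    transformer layer can attend within each block independently.
    """
    b, h, w, c = x.shape
    x = x.view(b, h // block_size, block_size, w // block_size, block_size, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, block_size * block_size, c)

dim, block_size = 96, 4
local_transformer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)

feature_map = torch.randn(2, 16, 16, dim)               # (B, H, W, C)
blocks = blockify(feature_map, block_size)               # (32, 16, C): 16 blocks per image
blocks = local_transformer(blocks)                        # attention only within each block
aggregated = blocks.mean(dim=1).view(2, 4, 4, dim)        # one token per block -> coarser grid
```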