659 papers with code • 58 benchmarks • 41 datasets
Image generation (synthesis) is the task of generating new images that follow the distribution of an existing dataset.
In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation and other types of image generation, refer to the subtasks.
(Image credit: StyleGAN)
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
Ranked #11 on Conditional Image Generation on CIFAR-10
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences.
Ranked #2 on Open-Domain Question Answering on SearchQA
We introduce the problem of perpetual view generation -- long-range generation of novel views corresponding to an arbitrarily long camera trajectory given a single image.
We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness.
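The core idea behind this linear-complexity attention can be sketched with positive random features: approximate the softmax kernel exp(q·k) by an expectation over random projections, then reassociate the matrix products so cost grows linearly in sequence length. The sketch below is a minimal NumPy illustration of that idea under simplified assumptions (single head, fixed Gaussian projection); the function names are illustrative, not the paper's API.

```python
import numpy as np

def positive_random_features(x, proj):
    """Positive features phi(x) = exp(w.x - ||x||^2 / 2) / sqrt(m),
    whose inner products estimate the softmax kernel exp(q.k)."""
    m = proj.shape[0]
    return np.exp(x @ proj.T - np.sum(x**2, axis=-1, keepdims=True) / 2) / np.sqrt(m)

def linear_attention(Q, K, V, n_features=256, seed=0):
    """Approximate softmax attention in O(n) time/memory in sequence length n."""
    d = Q.shape[-1]
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((n_features, d))
    # Fold the usual 1/sqrt(d) attention scaling into the kernel inputs.
    q_feat = positive_random_features(Q / d**0.25, proj)   # (n, m)
    k_feat = positive_random_features(K / d**0.25, proj)   # (n, m)
    kv = k_feat.T @ V                                      # (m, d): linear in n
    normalizer = q_feat @ k_feat.sum(axis=0)               # (n,): row sums of attention
    return (q_feat @ kv) / normalizer[:, None]
```

The key step is computing `k_feat.T @ V` first, so the quadratic (n, n) attention matrix is never materialized.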
Ranked #12 on Image Generation on ImageNet 64x64
In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks.
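The attention-driven, long-range modeling described here amounts to letting every spatial position of a feature map attend to every other position. The following is a minimal NumPy sketch of such a self-attention layer under simplified assumptions (1x1 convolutions reduced to plain matrix projections `Wf`, `Wg`, `Wh`; a learnable residual scale `gamma` as in SAGAN); it is an illustration, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_feature_map(x, Wf, Wg, Wh, gamma=0.0):
    """Self-attention over the spatial positions of a (C, H, W) feature map.

    Wf, Wg: (C', C) query/key projections; Wh: (C, C) value projection.
    Returns gamma * attention_output + x (residual connection).
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)          # (C, N) with N = H*W positions
    f = Wf @ flat                       # queries (C', N)
    g = Wg @ flat                       # keys    (C', N)
    h = Wh @ flat                       # values  (C, N)
    attn = softmax(f.T @ g, axis=-1)    # (N, N): each position attends to all others
    o = h @ attn.T                      # (C, N): attention-weighted values
    return (gamma * o + flat).reshape(C, H, W)
```

With `gamma = 0` the layer is the identity, so the attention branch can be blended in gradually during training.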
Ranked #11 on Conditional Image Generation on ImageNet 128x128
Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible.
Ranked #1 on Image Generation on LSUN Bedroom 64x64
A standard solution is to train networks with Quantization Aware Training, where the weights are quantized during training and the gradients approximated with the Straight-Through Estimator.
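The two pieces named here, a quantizer in the forward pass and a Straight-Through Estimator in the backward pass, can be sketched as follows. This is a minimal NumPy illustration assuming uniform symmetric quantization over [-1, 1]; real frameworks wire the same logic into autograd rather than exposing a separate backward function.

```python
import numpy as np

def quantize(w, n_bits=8):
    """Forward pass: uniform symmetric quantization of weights to [-1, 1]."""
    scale = 2 ** (n_bits - 1) - 1
    return np.clip(np.round(w * scale), -scale, scale) / scale

def ste_backward(grad_out, w):
    """Backward pass (Straight-Through Estimator): treat quantize() as the
    identity, so the incoming gradient passes through unchanged, except
    where the weight fell outside the representable range [-1, 1]."""
    return grad_out * (np.abs(w) <= 1.0)
```

Because `round` has zero gradient almost everywhere, training would stall without the STE; pretending the quantizer is the identity lets gradient descent keep updating the full-precision shadow weights.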