
Image Generation

117 papers with code · Computer Vision

Image generation (synthesis) is the task of generating new images that follow the distribution of an existing dataset.


Greatest papers with code

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

NeurIPS 2016 tensorflow/models

This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation.

IMAGE GENERATION · REPRESENTATION LEARNING
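A minimal PyTorch sketch of InfoGAN's mutual-information term, assuming a single categorical latent code and a recognition head that predicts it from the generated image; the names below are illustrative, not taken from the linked repo:

```python
import torch.nn.functional as F

def infogan_mi_loss(q_logits, c_indices):
    """Variational lower bound on I(c; G(z, c)): the recognition network Q
    tries to recover the categorical code c that was fed to the generator.

    q_logits:  (batch, num_categories) predicted by Q from G(z, c)
    c_indices: (batch,) indices of the code actually sampled for the generator
    """
    return F.cross_entropy(q_logits, c_indices)

# Scaled by a weight lambda, this term is subtracted from the generator loss
# and added to the Q / discriminator objective during training.
```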

Improved Techniques for Training GANs

NeurIPS 2016 tensorflow/models

We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic.

CONDITIONAL IMAGE GENERATION
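One of the training procedures this paper proposes is feature matching; a minimal PyTorch sketch, where `disc_features` (an assumed helper) returns an intermediate discriminator activation:

```python
import torch

def feature_matching_loss(disc_features, real_batch, fake_batch):
    """Train the generator to match the expected intermediate discriminator
    features of real data instead of directly maximizing the D output."""
    f_real = disc_features(real_batch).mean(dim=0)
    f_fake = disc_features(fake_batch).mean(dim=0)
    return torch.mean((f_real - f_fake) ** 2)
```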

Density estimation using Real NVP

27 May 2016 tensorflow/models

Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task.

IMAGE GENERATION
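A minimal PyTorch sketch of Real NVP's affine coupling layer, the building block that makes learning, sampling, and inference tractable: half of the dimensions pass through unchanged and parameterize a scale and shift of the other half, so the Jacobian is triangular. The small MLPs stand in for the paper's networks and assume an even input dimension:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.scale_net = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                       nn.Linear(hidden, dim - self.half), nn.Tanh())
        self.shift_net = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                       nn.Linear(hidden, dim - self.half))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.scale_net(x1), self.shift_net(x1)
        y2 = x2 * torch.exp(s) + t          # affine transform of the second half
        log_det = s.sum(dim=1)              # triangular Jacobian: log|det| = sum of s
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.scale_net(y1), self.shift_net(y1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=1)
```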

Instance Normalization: The Missing Ingredient for Fast Stylization

27 Jul 2016 lengstrom/fast-style-transfer

In this paper we revisit the fast stylization method introduced by Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images.

IMAGE GENERATION · IMAGE STYLIZATION
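The small change is to normalize each feature map per image rather than per mini-batch; a minimal PyTorch sketch:

```python
import torch

def instance_norm(x, eps=1e-5):
    """x: (batch, channels, height, width). Each (image, channel) slice is
    normalized with its own mean and variance, independently of the batch."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)

# In practice this layer is available directly as torch.nn.InstanceNorm2d.
```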

Self-Attention Generative Adversarial Networks

21 May 2018 jantic/DeOldify

In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps.

CONDITIONAL IMAGE GENERATION
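A minimal PyTorch sketch of the self-attention block SAGAN inserts into the generator and discriminator: every spatial position attends to every other position, so long-range structure is no longer limited by the convolutional receptive field. The 1x1-convolution channel reduction follows the paper; exact sizes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned blend, starts at 0

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, h*w, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, h*w)
        v = self.value(x).flatten(2)                   # (b, c, h*w)
        attn = F.softmax(torch.bmm(q, k), dim=-1)      # (b, h*w, h*w) attention map
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection
```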

Progressive Growing of GANs for Improved Quality, Stability, and Variation

ICLR 2018 jantic/DeOldify

We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses.

FACE GENERATION
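A minimal sketch of the fade-in step used when a new, higher-resolution layer is added: its output is blended with an upsampled version of the previous resolution while a weight alpha ramps from 0 to 1. The helper names are illustrative, not taken from the linked repo:

```python
import torch.nn.functional as F

def fade_in(alpha, low_res_rgb, new_block_rgb):
    """alpha in [0, 1]; both inputs are images at the new (higher) resolution."""
    return alpha * new_block_rgb + (1.0 - alpha) * low_res_rgb

# Typical use while growing: blend the upsampled output of the old stage with
# the output of the freshly added block, e.g.
#   fade_in(alpha, F.interpolate(old_rgb, scale_factor=2), to_rgb(new_block(features)))
```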

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

NeurIPS 2017 jantic/DeOldify

Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved.

IMAGE GENERATION
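The two time-scale update rule itself is simple to apply: the discriminator and generator are given different learning rates. A minimal PyTorch sketch with placeholder networks; the specific rates and Adam betas below are illustrative, not prescribed by the paper:

```python
import torch

generator = torch.nn.Linear(128, 784)        # placeholder networks
discriminator = torch.nn.Linear(784, 1)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.0, 0.9))
# Training then alternates opt_d.step() and opt_g.step() as usual; the faster
# discriminator time scale is what the convergence analysis relies on.
```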

A Style-Based Generator Architecture for Generative Adversarial Networks

12 Dec 2018 NVlabs/stylegan

We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis.

IMAGE GENERATION
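A minimal PyTorch sketch of the style-based generator's core operation: a mapping network turns the latent z into an intermediate code w, and adaptive instance normalization (AdaIN) injects a per-layer style, a learned scale and bias derived from w, after normalizing each feature map. Layer sizes and names are illustrative:

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    def __init__(self, channels, w_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.style = nn.Linear(w_dim, channels * 2)   # per-channel scale and bias

    def forward(self, x, w):
        scale, bias = self.style(w).chunk(2, dim=1)
        scale = scale[:, :, None, None]
        bias = bias[:, :, None, None]
        return (1 + scale) * self.norm(x) + bias
```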

BEGAN: Boundary Equilibrium Generative Adversarial Networks

31 Mar 2017 tensorpack/tensorpack

We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training.

IMAGE GENERATION
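A minimal sketch of the equilibrium-enforcing update: the discriminator is an auto-encoder, and a control variable k_t is adjusted proportionally so that the ratio of generator to discriminator reconstruction losses stays near a target gamma. The reconstruction losses are assumed to be plain scalars here; names are illustrative:

```python
def began_update(k_t, recon_loss_real, recon_loss_fake, gamma=0.5, lambda_k=0.001):
    """recon_loss_real / recon_loss_fake: auto-encoder reconstruction losses
    on real and generated images for the current step."""
    loss_d = recon_loss_real - k_t * recon_loss_fake   # discriminator objective
    loss_g = recon_loss_fake                           # generator objective
    k_t = k_t + lambda_k * (gamma * recon_loss_real - recon_loss_fake)
    k_t = max(0.0, min(1.0, k_t))                      # keep the control term in [0, 1]
    return loss_d, loss_g, k_t
```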

Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis

CVPR 2016 awentzonline/image-analogies

This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher levels of a dCNN feature pyramid, controlling the image layout at an abstract level.

IMAGE GENERATION · TEXTURE SYNTHESIS
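A minimal PyTorch sketch of the MRF-style prior on dCNN features: feature patches of the synthesized image are matched to their nearest-neighbor patches in the style image's feature map, and the distance to those matches is penalized. The patch size and the choice of feature layer are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def mrf_loss(feat_synth, feat_style, patch=3):
    """feat_*: (1, channels, height, width) feature maps from the same dCNN layer."""
    ps = F.unfold(feat_synth, kernel_size=patch)    # (1, c*p*p, n_synth) patches
    pt = F.unfold(feat_style, kernel_size=patch)    # (1, c*p*p, n_style) patches
    ps_n = F.normalize(ps, dim=1)
    pt_n = F.normalize(pt, dim=1)
    sim = torch.bmm(ps_n.transpose(1, 2), pt_n)     # cosine similarity between patches
    nearest = sim.argmax(dim=2)                     # best style patch for each synth patch
    matched = pt[0, :, nearest[0]]                  # (c*p*p, n_synth)
    return ((ps[0] - matched) ** 2).mean()
```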