Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images

ICLR 2021 · Rewon Child

We present a hierarchical VAE that, for the first time, generates samples quickly while outperforming the PixelCNN in log-likelihood on all natural image benchmarks. We begin by observing that, in theory, VAEs can represent autoregressive models, as well as faster, better models if they exist, when made sufficiently deep. Despite this, autoregressive models have historically outperformed VAEs in log-likelihood. We test whether insufficient depth explains this gap by scaling a VAE to greater stochastic depth than previously explored and evaluating it on CIFAR-10, ImageNet, and FFHQ. In comparison to the PixelCNN, these very deep VAEs achieve higher likelihoods, use fewer parameters, generate samples thousands of times faster, and are more easily applied to high-resolution images. Qualitative studies suggest this is because the VAE learns efficient hierarchical visual representations. We release our source code and models at https://github.com/openai/vdvae.
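The claim that sufficiently deep VAEs generalize autoregressive models rests on the top-down factorization p(z) = Π_i p(z_i | z_<i): each latent group is conditioned on all previous groups, so with enough stochastic layers the hierarchy can express an autoregressive ordering. Below is a minimal PyTorch sketch of one such top-down stochastic layer; the class and dimension names (TopDownBlock, h_dim, z_dim) are hypothetical simplifications for illustration, not the architecture from the vdvae repository.

```python
# Minimal sketch of one top-down stochastic VAE layer, assuming PyTorch.
# Names are hypothetical; the real architecture is at
# https://github.com/openai/vdvae.
import torch
import torch.nn as nn

class TopDownBlock(nn.Module):
    """One stochastic layer: z_i is conditioned on all z_<i through the
    running top-down activation h, which is the property that lets a
    deep-enough VAE mimic an autoregressive factorization."""
    def __init__(self, h_dim: int, z_dim: int):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)          # p(z_i | z_<i)
        self.posterior = nn.Linear(2 * h_dim, 2 * z_dim)  # q(z_i | z_<i, x)
        self.merge = nn.Linear(h_dim + z_dim, h_dim)      # fold z_i into h

    def forward(self, h, enc_h):
        # enc_h is the bottom-up encoder feature carrying information about x.
        pm, plv = self.prior(h).chunk(2, dim=-1)
        qm, qlv = self.posterior(torch.cat([h, enc_h], dim=-1)).chunk(2, dim=-1)
        z = qm + torch.randn_like(qm) * (0.5 * qlv).exp()  # reparameterize
        # KL between the diagonal Gaussians q(z_i | z_<i, x) and p(z_i | z_<i).
        kl = 0.5 * (plv - qlv + (qlv.exp() + (qm - pm) ** 2) / plv.exp() - 1)
        h = self.merge(torch.cat([h, z], dim=-1))          # carry z_<=i forward
        return h, kl.sum(dim=-1)
```

Stacking many such blocks and summing the per-layer KL terms with the reconstruction likelihood gives the ELBO. At sampling time each layer draws from its prior in a single parallel pass over all spatial positions, which is why sampling is orders of magnitude faster than per-pixel autoregressive decoding.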


Datasets

CIFAR-10 · ImageNet 32x32 · ImageNet 64x64 · FFHQ 256 x 256 · FFHQ 1024 x 1024

Results from the Paper


Ranked #2 on Image Generation on FFHQ 1024 x 1024 (bits/dimension metric)

Task              Dataset            Model          Metric     Value   Global Rank
Image Generation  CIFAR-10           Very Deep VAE  bits/dim   2.87    #15
Image Generation  FFHQ 1024 x 1024   Very Deep VAE  bits/dim   2.42    #2
Image Generation  FFHQ 256 x 256     Very Deep VAE  bits/dim   0.61    #2
Image Generation  ImageNet 32x32     Very Deep VAE  bits/dim   3.8     #10
Image Generation  ImageNet 64x64     Very Deep VAE  bits/dim   3.52    #10
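The bits/dim figures above are the standard conversion of a model's negative log-likelihood, not a metric specific to this paper. A short sketch of the conversion (function name is illustrative):

```python
import math

def bits_per_dim(nll_nats: float, num_dims: int) -> float:
    """Convert total negative log-likelihood in nats to bits per dimension.
    E.g. for a 32x32 RGB CIFAR-10 image, num_dims = 32 * 32 * 3 = 3072."""
    return nll_nats / (num_dims * math.log(2))
```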
