Latent Bernoulli Autoencoder

In this work, we pose the question of whether it is possible to design and train an autoencoder end-to-end to learn latent representations in a multivariate Bernoulli space while achieving performance comparable with current state-of-the-art variational methods. Moreover, we investigate how to generate novel samples and perform smooth interpolation in the binary latent space. To meet this objective, we propose a simplified deterministic model with a straight-through estimator to learn the binary latents and show that it is competitive with the latest VAE methods. Furthermore, we propose a novel method based on random hyperplane rounding for sampling and smooth interpolation in the multivariate Bernoulli latent space. Although not a main objective, we demonstrate that our methods perform on par with or better than current state-of-the-art methods on the common CelebA, CIFAR-10, and MNIST datasets. PyTorch code and trained models to reproduce the published results will be released with the camera-ready version.
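The abstract names two mechanisms: a straight-through estimator for learning binary latents, and random hyperplane rounding for smooth interpolation between binary codes. A minimal numpy sketch of both ideas follows; the function names and the circle-embedding construction for rounding are illustrative assumptions, not the authors' released implementation:

```python
import numpy as np

def ste_binarize(x):
    # Forward pass: hard-threshold activations to {0, 1}.
    return (x > 0.0).astype(x.dtype)

def ste_binarize_grad(upstream_grad):
    # Backward pass (straight-through): treat the threshold as the identity,
    # so the upstream gradient flows through unchanged.
    return upstream_grad

def hyperplane_interpolate(z0, z1, t, rng):
    # Illustrative random-hyperplane rounding between two binary codes.
    # Each bit pair is embedded on the unit circle: angle 0 where the codes
    # agree, angle pi*t along the arc where they differ. A single shared
    # random hyperplane then rounds the interpolated vectors back to {0, 1}.
    ang = np.pi * (z0 != z1) * t
    v = np.stack([np.cos(ang), np.sin(ang)], axis=1)  # (d, 2) embedded bits
    r = rng.normal(size=2)                            # shared hyperplane normal
    flip = np.sign(v @ r) != np.sign(r[0])            # bit crossed the hyperplane?
    return np.where(flip, 1 - z0, z0)
```

At t=0 the interpolation returns z0 and at t=1 it returns z1; because one hyperplane is shared across all bits, nearby values of t yield correlated (hence smooth) changes to the code. In PyTorch, the straight-through trick above is commonly written in one line as `b = x + ((x > 0).float() - x).detach()`.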

ICML 2020
