The Mutual Autoencoder: Controlling Information in Latent Code Representations

ICLR 2018 · Mary Phuong, Max Welling, Nate Kushman, Ryota Tomioka, Sebastian Nowozin

Variational autoencoders (VAEs) learn probabilistic latent variable models by optimizing a bound on the marginal likelihood of the observed data. Beyond providing a good density model, a VAE assigns each data instance a latent code...
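The bound mentioned above is the evidence lower bound (ELBO): the expected reconstruction log-likelihood under the approximate posterior q(z|x), minus the KL divergence from q(z|x) to the prior. As a rough illustration of that objective, here is a minimal NumPy sketch of a Monte Carlo ELBO estimate for a toy linear-Gaussian model; the encoder/decoder weights and dimensions are arbitrary placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D, K = 4, 2  # data dimension, latent dimension (illustrative choices)

# Toy "encoder": maps x to the mean and log-variance of a diagonal
# Gaussian q(z|x). Weights are random placeholders, not trained.
W_mu = rng.normal(size=(K, D))
W_lv = rng.normal(size=(K, D)) * 0.1

def encode(x):
    return W_mu @ x, W_lv @ x  # mean, log-variance of q(z|x)

# Toy "decoder": p(x|z) is Gaussian with mean W_dec @ z and unit variance.
W_dec = rng.normal(size=(D, K))

def log_p_x_given_z(x, z):
    mu = W_dec @ z
    return -0.5 * np.sum((x - mu) ** 2 + np.log(2 * np.pi))

def elbo(x, n_samples=100):
    """Monte Carlo estimate of
    E_q[log p(x|z)] - KL(q(z|x) || N(0, I)),
    a lower bound on log p(x)."""
    mu, logvar = encode(x)
    # Closed-form KL between the diagonal Gaussian q(z|x) and N(0, I).
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    # Reparameterized samples z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.normal(size=(n_samples, K))
    zs = mu + np.exp(0.5 * logvar) * eps
    recon = np.mean([log_p_x_given_z(x, z) for z in zs])
    return recon - kl

x = rng.normal(size=D)
print(elbo(x))  # a (noisy) lower bound on log p(x) under the toy model
```

The KL term is what ties the latent code to the prior; the paper's concern is that optimizing this bound alone does not control how much information the code actually carries about x.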

