Neighbor Embedding Variational Autoencoder

21 Mar 2021  ·  Renfei Tu, Yang Liu, Yongzeng Xue, Cheng Wang, Maozu Guo

Being one of the most popular generative frameworks, variational autoencoders (VAEs) are known to suffer from a phenomenon termed posterior collapse, i.e. the latent variational distributions collapse to the prior, especially when a strong decoder network is used. In this work, we analyze the latent representations of collapsed VAEs and propose a novel model, the neighbor embedding VAE (NE-VAE), which explicitly constrains the encoder so that inputs close in the input space are encoded close in the latent space. We observe that VAE variants reporting similar ELBO, KL divergence, or even mutual information scores may still behave quite differently in their latent organization. In our experiments, NE-VAE produces qualitatively different latent representations in which the majority of latent dimensions remain active, which may benefit downstream latent space optimization tasks. NE-VAE prevents posterior collapse to a much greater extent than its predecessors, and it can easily be plugged into any autoencoder framework without introducing additional model components or complex training routines.
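
To make the core idea concrete, below is a minimal PyTorch sketch of how such a neighbor embedding constraint could be attached to a standard VAE objective. The abstract does not specify the exact formulation, so this assumes the constraint is realized as an L2 penalty pulling together the latent means of input-space near-neighbor pairs; the names `ne_vae_loss`, `lambda_ne`, and the MLP architecture are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NEVAE(nn.Module):
    """Plain MLP VAE; the neighbor constraint lives in the loss, not the model."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def ne_vae_loss(model, x, x_neighbor, lambda_ne=1.0):
    """ELBO plus a hypothetical neighbor embedding term.

    x_neighbor[i] is assumed to be a near neighbor of x[i] in input space
    (e.g. found offline by k-NN). The penalty keeps their latent means close,
    discouraging a collapsed, input-independent posterior.
    """
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    mu_nb, _ = model.encode(x_neighbor)
    ne_penalty = torch.sum((mu - mu_nb).pow(2))
    return recon + kl + lambda_ne * ne_penalty
```

In practice the neighbor pairs could be precomputed once over the training set (e.g. with a k-NN index) and served alongside each minibatch, so the extra cost at training time is one additional encoder pass per batch.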
