Seeing the whole picture instead of a single point: Self-supervised likelihood learning for deep generative models

Recent findings show that deep generative models can assign higher likelihood to out-of-distribution samples than to samples drawn from the same distribution as the training data. In this work, we focus on variational autoencoders (VAEs) and address the problem of such misaligned likelihood estimates on image data. We develop a novel likelihood function that is based not only on the parameters returned by the VAE but also on features of the data learned in a self-supervised fashion. In this way, the model additionally captures the semantic information that is disregarded by the usual VAE likelihood function. We demonstrate the improved reliability of the estimates with experiments on the FashionMNIST and MNIST datasets.
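The idea of combining the standard VAE likelihood with a term over self-supervised features can be illustrated with a minimal sketch. Everything here is hypothetical: the per-pixel Bernoulli term is the common VAE choice for binarized images, the Gaussian feature term and the weighting `alpha` are placeholder assumptions, and the paper's actual combination rule may differ.

```python
import numpy as np

def vae_log_likelihood(x, recon_mean, eps=1e-7):
    # Per-pixel Bernoulli log-likelihood, the usual VAE decoder term
    # for binarized images (an assumption, not the paper's exact form).
    recon_mean = np.clip(recon_mean, eps, 1 - eps)
    return float(np.sum(x * np.log(recon_mean) + (1 - x) * np.log(1 - recon_mean)))

def feature_log_likelihood(feat, feat_mean, feat_logvar):
    # Diagonal-Gaussian log-likelihood over self-supervised features;
    # the parametrization (mean, log-variance) is a placeholder assumption.
    return float(-0.5 * np.sum(
        feat_logvar
        + (feat - feat_mean) ** 2 / np.exp(feat_logvar)
        + np.log(2 * np.pi)
    ))

def combined_score(x, recon_mean, feat, feat_mean, feat_logvar, alpha=1.0):
    # Combined likelihood: pixel-level term plus a feature-level term.
    # alpha is a hypothetical weighting between the two terms.
    return (vae_log_likelihood(x, recon_mean)
            + alpha * feature_log_likelihood(feat, feat_mean, feat_logvar))
```

The intended effect is that a sample whose pixels are reconstructed well but whose semantic features are atypical no longer receives a spuriously high score, because the feature term penalizes it.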
