Search Results for author: Johannes Zenn

Found 4 papers, 2 papers with code

Upgrading VAE Training With Unlimited Data Plans Provided by Diffusion Models

no code implementations • 30 Oct 2023 • Tim Z. Xiao, Johannes Zenn, Robert Bamler

Variational autoencoders (VAEs) are popular models for representation learning, but their encoders are susceptible to overfitting (Cremer et al., 2018) because they are trained on a finite training set instead of the true (continuous) data distribution $p_{\mathrm{data}}(\mathbf{x})$.

Data Augmentation · Representation Learning
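A minimal sketch of the idea in the snippet above, under stated assumptions: `sample_from_diffusion` is a hypothetical placeholder for a pretrained diffusion model (here it just returns noise), and the VAE is a small Gaussian toy model, not the paper's implementation. The point is only that each training step sees fresh samples rather than reusing a finite training set.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterization trick
        x_hat = self.dec(z)
        recon = nn.functional.mse_loss(x_hat, x, reduction="none").sum(-1)
        kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(-1)
        return (recon + kl).mean()                               # negative ELBO

def sample_from_diffusion(batch_size, x_dim=784):
    """Hypothetical stand-in for a pretrained diffusion sampler."""
    return torch.randn(batch_size, x_dim)

vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for step in range(100):
    x = sample_from_diffusion(64)   # fresh samples every step, no finite-set reuse
    loss = vae(x)
    opt.zero_grad(); loss.backward(); opt.step()
```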

The SVHN Dataset Is Deceptive for Probabilistic Generative Models Due to a Distribution Mismatch

no code implementations • 30 Oct 2023 • Tim Z. Xiao, Johannes Zenn, Robert Bamler

However, with this work, we aim to warn the community about an issue with the SVHN dataset as a benchmark for generative modeling tasks: we discover that the official training and test sets of SVHN are not drawn from the same distribution.

Classification
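A minimal sketch of how one might probe for the kind of train/test distribution mismatch described above; the per-channel statistic and the random stand-in arrays are illustrative assumptions, not the test used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data; in practice load the official SVHN train/test splits here.
train_images = rng.integers(0, 256, size=(1000, 32, 32, 3)).astype(np.float32)
test_images = rng.integers(0, 256, size=(1000, 32, 32, 3)).astype(np.float32)

def channel_stats(x):
    """Per-channel mean and standard deviation over all images and pixels."""
    flat = x.reshape(-1, x.shape[-1])
    return flat.mean(axis=0), flat.std(axis=0)

mu_tr, sd_tr = channel_stats(train_images)
mu_te, sd_te = channel_stats(test_images)
print("mean gap per channel:", np.abs(mu_tr - mu_te))
print("std  gap per channel:", np.abs(sd_tr - sd_te))
# A gap that is large relative to sampling noise (e.g. estimated by comparing
# random halves of the training set alone) would suggest the two sets are not
# drawn from the same distribution.
```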

Resampling Gradients Vanish in Differentiable Sequential Monte Carlo Samplers

1 code implementation • 27 Apr 2023 • Johannes Zenn, Robert Bamler

Annealed Importance Sampling (AIS) moves particles along a Markov chain from a tractable initial distribution to an intractable target distribution.
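A minimal sketch of the AIS procedure described in the snippet above, under stated assumptions: a 1D standard-normal initial distribution, an illustrative bimodal target, and a single random-walk Metropolis step per annealing temperature. This is plain AIS, not the differentiable SMC sampler studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p_init(x):      # tractable initial distribution: standard normal (unnormalized)
    return -0.5 * x**2

def log_p_target(x):    # intractable target stand-in: an unnormalized bimodal density
    return np.logaddexp(-0.5 * (x - 3)**2, -0.5 * (x + 3)**2)

def log_gamma(x, beta):  # geometric path between initial and target distributions
    return (1 - beta) * log_p_init(x) + beta * log_p_target(x)

n_particles, n_steps = 1000, 50
betas = np.linspace(0.0, 1.0, n_steps + 1)      # annealing schedule
x = rng.standard_normal(n_particles)            # particles drawn from the initial distribution
log_w = np.zeros(n_particles)                   # log importance weights

for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w += log_gamma(x, b) - log_gamma(x, b_prev)          # incremental weight update
    # one Metropolis step leaving the current intermediate distribution invariant
    prop = x + 0.5 * rng.standard_normal(n_particles)
    accept = np.log(rng.random(n_particles)) < log_gamma(prop, b) - log_gamma(x, b)
    x = np.where(accept, prop, x)

# log of the estimated ratio of normalizing constants Z_target / Z_init
log_Z_ratio = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
print("estimated log Z ratio:", log_Z_ratio)
```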
