no code implementations • 30 Oct 2023 • Tim Z. Xiao, Johannes Zenn, Robert Bamler
Variational autoencoders (VAEs) are popular models for representation learning, but their encoders are susceptible to overfitting (Cremer et al., 2018) because they are trained on a finite training set instead of the true (continuous) data distribution $p_{\mathrm{data}}(\mathbf{x})$.
no code implementations • 30 Oct 2023 • Tim Z. Xiao, Johannes Zenn, Robert Bamler
However, with this work, we aim to warn the community about an issue with the SVHN dataset as a benchmark for generative modeling tasks: we discover that the official training and test sets of the SVHN dataset are not drawn from the same distribution.
1 code implementation • 27 Apr 2023 • Johannes Zenn, Robert Bamler
Annealed Importance Sampling (AIS) moves particles along a Markov chain from a tractable initial distribution to an intractable target distribution.
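The AIS procedure described above can be sketched in a few lines: particles start from a tractable initial distribution, are moved through a sequence of intermediate distributions by MCMC transitions, and accumulate importance weights whose average estimates the target's normalizing constant. The following is a minimal, self-contained sketch (not the paper's implementation); the specific distributions, geometric annealing path, and Metropolis kernel are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tractable initial distribution p0: standard normal (normalized).
def log_p0(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

# Unnormalized target f: Gaussian with mean 3, std 0.5.
# Its true normalizer is Z = sqrt(2*pi) * 0.5 ≈ 1.2533.
def log_f(x):
    return -0.5 * ((x - 3.0) / 0.5) ** 2

n_particles, n_steps = 2000, 200
betas = np.linspace(0.0, 1.0, n_steps + 1)  # geometric annealing schedule

x = rng.standard_normal(n_particles)  # particles drawn from p0
log_w = np.zeros(n_particles)         # accumulated log importance weights

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Incremental weight for moving the intermediate distribution
    # from beta_prev to beta, evaluated at the current particles.
    log_w += (b - b_prev) * (log_f(x) - log_p0(x))

    # One Metropolis step targeting p_beta ∝ p0^(1-beta) * f^beta.
    def log_p_beta(y):
        return (1 - b) * log_p0(y) + b * log_f(y)

    prop = x + 0.5 * rng.standard_normal(n_particles)
    accept = np.log(rng.uniform(size=n_particles)) < log_p_beta(prop) - log_p_beta(x)
    x = np.where(accept, prop, x)

# AIS estimate of the target's normalizing constant (log-sum-exp for stability).
log_Z = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
Z_hat = np.exp(log_Z)
print(f"estimated Z ≈ {Z_hat:.3f} (true Z ≈ 1.253)")
```

With enough intermediate distributions, the particles track the annealing path closely and the weight average concentrates around the true normalizer.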
1 code implementation • 3 Dec 2021 • Jonathan Wenger, Nicholas Krämer, Marvin Pförtner, Jonathan Schmidt, Nathanael Bosch, Nina Effenberger, Johannes Zenn, Alexandra Gessner, Toni Karvonen, François-Xavier Briol, Maren Mahsereci, Philipp Hennig
Probabilistic numerical methods (PNMs) solve numerical problems via probabilistic inference.
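To illustrate the idea behind PNMs (this is a toy sketch, not the ProbNum library's API): rather than returning only a point estimate, a probabilistic numerical method returns a belief over the solution that quantifies the uncertainty introduced by finite computation. Here, Monte Carlo integration of a known integral is reported as a Gaussian belief (mean plus standard deviation); the integrand and sample size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical problem: the integral of exp(x) over [0, 1] (true value e - 1).
n = 10_000
samples = np.exp(rng.uniform(0.0, 1.0, size=n))

# The "output" is a belief over the integral, not just a number:
mean = samples.mean()                       # belief mean
stderr = samples.std(ddof=1) / np.sqrt(n)   # belief standard deviation

print(f"integral ≈ {mean:.4f} ± {stderr:.4f}")
```

The reported standard deviation shrinks as more computation (samples) is spent, making the trade-off between accuracy and cost explicit, which is the core appeal of treating numerical problems as inference.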