Search Results for author: Dennis Forster

Found 6 papers, 2 papers with code

The ELBO of Variational Autoencoders Converges to a Sum of Three Entropies

1 code implementation • 28 Oct 2020 • Simon Damm, Dennis Forster, Dmytro Velychko, Zhenwen Dai, Asja Fischer, Jörg Lücke

Here we show that for standard (i.e., Gaussian) VAEs the ELBO converges to a value given by the sum of three entropies: the (negative) entropy of the prior distribution, the expected (negative) entropy of the observable distribution, and the average entropy of the variational distributions (the latter is already part of the ELBO).
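A minimal sketch of the stated result (the notation here is assumed for illustration, not taken from the paper): with data points $x_n$, prior $p_\theta(z)$, observable (decoder) distribution $p_\theta(x|z)$, and variational distributions $q_\phi(z|x_n)$, the claimed limit at stationary points of the ELBO reads

$$\mathcal{L}(\theta, \phi) \;\rightarrow\; -\,\mathcal{H}[p_\theta(z)] \;-\; \mathbb{E}\big[\mathcal{H}[p_\theta(x|z)]\big] \;+\; \frac{1}{N}\sum_{n=1}^{N}\mathcal{H}[q_\phi(z|x_n)],$$

where $\mathcal{H}[\cdot]$ denotes (differential) entropy; the exact distribution over which the middle expectation is taken follows the paper's definitions.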

Large Scale Clustering with Variational EM for Gaussian Mixture Models

1 code implementation • 1 Oct 2018 • Florian Hirschberger, Dennis Forster, Jörg Lücke

The aim of this project (which resulted in this arXiv version and a later TPAMI paper) is to explore the current efficiency and large-scale limits of fitting a parametric clustering model to data distributions.

Benchmarking, Clustering

Can clustering scale sublinearly with its clusters? A variational EM acceleration of GMMs and $k$-means

no code implementations • 9 Nov 2017 • Dennis Forster, Jörg Lücke

The basic idea is a partial variational E-step that reduces the $\mathcal{O}(NCD)$ complexity required for a full E-step to a complexity sublinear in the number of clusters $C$.
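A minimal NumPy sketch of this idea, not the authors' implementation: responsibilities are computed only over a small candidate set of $K \ll C$ clusters per data point, so the per-point E-step cost drops from $\mathcal{O}(CD)$ to $\mathcal{O}(KD)$. The function names (partial_e_step, m_step) and the isotropic, equal-weight GMM setup are assumptions for illustration; how candidate sets are chosen and refreshed follows the paper and is only stubbed here.

import numpy as np

def partial_e_step(X, mu, candidates):
    # Truncated E-step for an isotropic, equal-weight GMM (assumed setup):
    # evaluate responsibilities only over each point's candidate clusters,
    # costing O(N*K*D) instead of O(N*C*D) for K << C.
    # X: (N, D) data; mu: (C, D) means; candidates: (N, K) cluster indices.
    diffs = X[:, None, :] - mu[candidates]            # (N, K, D)
    d2 = np.einsum('nkd,nkd->nk', diffs, diffs)       # squared distances
    logits = -0.5 * d2                                # unit variance assumed
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    resp = np.exp(logits)
    resp /= resp.sum(axis=1, keepdims=True)           # normalize over K only
    return resp                                       # (N, K)

def m_step(X, resp, candidates, C):
    # Standard mean update, accumulated from truncated responsibilities.
    N, D = X.shape
    num = np.zeros((C, D))
    den = np.zeros(C)
    np.add.at(num, candidates, resp[:, :, None] * X[:, None, :])
    np.add.at(den, candidates, resp)
    return num / np.maximum(den, 1e-12)[:, None]

# Usage sketch: initialize each point's candidate set (e.g. at random),
# then alternate partial_e_step and m_step, occasionally refreshing the
# candidates, e.g. from neighbours of a point's current best cluster.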

Clustering

$k$-means as a variational EM approximation of Gaussian mixture models

no code implementations • 16 Apr 2017 • Jörg Lücke, Dennis Forster

We show that $k$-means (Lloyd's algorithm) is obtained as a special case when truncated variational EM approximations are applied to Gaussian mixture models (GMMs) with isotropic Gaussians.
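A sketch of the correspondence (notation assumed here): truncating each variational distribution to a single cluster turns the E-step into a hard assignment,

$$q_n(c) = \delta\Big(c = \arg\min_{c'} \|x_n - \mu_{c'}\|^2\Big),$$

and for isotropic Gaussians with equal, fixed variances and uniform mixing proportions the M-step updates each mean to the centroid of its assigned points,

$$\mu_c = \frac{1}{|\mathcal{I}_c|}\sum_{n \in \mathcal{I}_c} x_n, \qquad \mathcal{I}_c = \{n : q_n(c) = 1\},$$

which together is exactly Lloyd's algorithm.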

Clustering

Truncated Variational EM for Semi-Supervised Neural Simpletrons

no code implementations • 7 Feb 2017 • Dennis Forster, Jörg Lücke

Inference and learning for probabilistic generative networks are often very challenging, which typically prevents scaling to networks as large as those used in deep discriminative approaches.

Neural Simpletrons - Minimalistic Directed Generative Networks for Learning with Few Labels

no code implementations • 28 Jun 2015 • Dennis Forster, Abdul-Saboor Sheikh, Jörg Lücke

This results in powerful though very complex models that are hard to train and that demand additional labels for optimal parameter tuning; such labels are often not available when labeled data is very sparse.
