no code implementations • NeurIPS 2009 • Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges
We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another.
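For intuition, occlusion can be made explicit in a toy generative process in which objects are rendered in depth order so that nearer objects overwrite farther ones; the masks, per-object features, and uniform random depth order below are illustrative assumptions, not the model of the paper.

import numpy as np

rng = np.random.default_rng(0)

def render_occluded(masks, features, background):
    # masks: list of boolean arrays (object supports); features: per-object values.
    img = background.copy()
    for i in rng.permutation(len(masks)):  # random depth order, far to near
        img[masks[i]] = features[i]        # nearer objects overwrite (occlude)
    return img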
no code implementations • 15 Nov 2012 • Abdul-Saboor Sheikh, Jacquelyn A. Shelton, Jörg Lücke
We investigate two approaches to optimize the parameters of spike-and-slab sparse coding: a novel truncated EM approach and, for comparison, an approach based on standard factored variational distributions.
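As a rough sketch of the truncated idea (illustrative only: the likelihood below is binary sparse coding rather than the spike-and-slab model of the paper, and the candidate set K of latent states is assumed given), a truncated E-step normalizes the exact joint over a small set of candidate states:

import numpy as np

def truncated_posterior(y, W, pi, sigma, K):
    # Posterior over binary latent states s, restricted to the candidate set K;
    # model: p(s_h) = Bern(pi), p(y|s) = N(y; W s, sigma^2 I).
    log_joint = []
    for s in K:
        s = np.asarray(s, dtype=float)
        log_prior = np.sum(s * np.log(pi) + (1 - s) * np.log(1 - pi))
        log_lik = -0.5 * np.sum((y - W @ s) ** 2) / sigma**2
        log_joint.append(log_prior + log_lik)
    log_joint = np.array(log_joint)
    q = np.exp(log_joint - log_joint.max())
    return q / q.sum()  # normalized within K only; zero outside K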
no code implementations • NeurIPS 2013 • Zhenwen Dai, Georgios Exarchakis, Jörg Lücke
The large majority of approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions.
no code implementations • 28 Jun 2015 • Dennis Forster, Abdul-Saboor Sheikh, Jörg Lücke
This results in powerful though very complex models that are hard to train and that demand additional labels for optimal parameter tuning; such labels are often unavailable when labeled data is scarce.
1 code implementation • 10 Oct 2016 • Jörg Lücke
The specific structure of truncated distributions allows for deriving novel and mathematically grounded results, which in turn can be used to formulate novel efficient algorithms to optimize the parameters of probabilistic generative models.
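In the notation assumed here, a truncated distribution replaces the exact posterior for a data point $y^{(n)}$ by its restriction to a subset $\mathcal{K}_n$ of latent states, $q^{(n)}(s) = \frac{p(s \mid y^{(n)}, \Theta)}{\sum_{s' \in \mathcal{K}_n} p(s' \mid y^{(n)}, \Theta)}\,\delta(s \in \mathcal{K}_n)$, so that sums over a (typically exponentially large) state space reduce to sums over $\mathcal{K}_n$.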
no code implementations • NeurIPS 2016 • Abdul-Saboor Sheikh, Jörg Lücke
As example model we use spike-and-slab sparse coding for V1 processing, and combine latent subspace selection with Gibbs sampling (select-and-sample).
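A minimal sketch of the select-and-sample idea under assumed model choices (a binary sparse coding likelihood and a cheap affinity score $|W^\top y|$ as the selection function; the paper's model is spike-and-slab and its selection differs):

import numpy as np

rng = np.random.default_rng(0)

def select_and_sample(y, W, pi, sigma, n_select, n_sweeps=50):
    # 1) Select a small latent subspace via a cheap score; 2) Gibbs-sample binary
    # states within it. Model: p(s_h) = Bern(pi), p(y|s) = N(y; W s, sigma^2 I).
    H = W.shape[1]
    selected = np.argsort(np.abs(W.T @ y))[-n_select:]
    s = np.zeros(H)                     # latents outside the subspace stay at 0
    samples = []
    for _ in range(n_sweeps):
        for h in selected:
            log_odds = np.log(pi / (1.0 - pi))
            for v in (1.0, 0.0):        # likelihood with s_h on vs. off
                s[h] = v
                ll = -0.5 * np.sum((y - W @ s) ** 2) / sigma**2
                log_odds += ll if v == 1.0 else -ll
            s[h] = float(rng.random() < 1.0 / (1.0 + np.exp(-log_odds)))
        samples.append(s.copy())
    return np.array(samples)            # posterior samples within the subspace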
no code implementations • NeurIPS 2016 • Travis Monk, Cristina Savin, Jörg Lücke
We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities.
no code implementations • 7 Feb 2017 • Dennis Forster, Jörg Lücke
Inference and learning for probabilistic generative networks are often very challenging, which typically prevents scaling to networks as large as those used for deep discriminative approaches.
no code implementations • 16 Apr 2017 • Jörg Lücke, Dennis Forster
We show that $k$-means (Lloyd's algorithm) is obtained as a special case when truncated variational EM approximations are applied to Gaussian Mixture Models (GMM) with isotropic Gaussians.
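Concretely (in our notation), truncating each variational distribution to a single state, $|\mathcal{K}_n| = 1$, turns the E-step for isotropic Gaussians into the hard assignment $c_n = \arg\min_c \|y^{(n)} - \mu_c\|^2$, and the M-step into the cluster means $\mu_c = \frac{1}{|\{n : c_n = c\}|}\sum_{n : c_n = c} y^{(n)}$, which is exactly Lloyd's algorithm.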
no code implementations • 9 Nov 2017 • Dennis Forster, Jörg Lücke
The basic idea is to use a partial variational E-step which reduces the $\mathcal{O}(NCD)$ complexity required for a full E-step to a complexity sublinear in the number of clusters $C$.
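A minimal numpy sketch of such a partial E-step; the candidate lists cand[n] are assumed given here (e.g., clusters close to a point's previous winner), and how to construct them well is the actual subject of the paper:

import numpy as np

def partial_e_step(Y, mu, cand):
    # Each data point is compared only to its |cand[n]| << C candidate
    # clusters, so the per-point cost no longer scales with the full C.
    assign = np.empty(len(Y), dtype=int)
    for n, y in enumerate(Y):
        cs = np.asarray(cand[n])
        d = np.sum((mu[cs] - y) ** 2, axis=1)
        assign[n] = cs[np.argmin(d)]
    return assign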
no code implementations • 21 Dec 2017 • Jörg Lücke, Zhenwen Dai, Georgios Exarchakis
We investigate the optimization of two probabilistic generative models with binary latent variables using a novel variational EM approach.
1 code implementation • 1 Oct 2018 • Florian Hirschberger, Dennis Forster, Jörg Lücke
The aim of the project (which resulted in this arXiv version and the later TPAMI paper) is to explore the current efficiency and large-scale limits of fitting parametric clustering models to data distributions.
no code implementations • 1 Aug 2019 • Georgios Exarchakis, Jörg Bornschein, Abdul-Saboor Sheikh, Zhenwen Dai, Marc Henniges, Jakob Drefs, Jörg Lücke
The library widens the scope of dictionary learning beyond implementations of standard approaches such as ICA, NMF, or L1 sparse coding.
1 code implementation • 4 Mar 2020 • Hamid Mousavi, Jakob Drefs, Florian Hirschberger, Jörg Lücke
Here, we consider LVMs that are defined for a range of different distributions, i.e., observables can follow any (regular) distribution of the exponential family.
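For reference, a (regular) exponential family has densities of the form $p(y \mid \eta) = h(y)\exp(\eta^\top T(y) - A(\eta))$ with natural parameters $\eta$, sufficient statistics $T(y)$, and log-partition function $A(\eta)$; Gaussian, Bernoulli, Poisson, and Gamma observables are all special cases.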
1 code implementation • 28 Oct 2020 • Simon Damm, Dennis Forster, Dmytro Velychko, Zhenwen Dai, Asja Fischer, Jörg Lücke
Here we show that for standard (i.e., Gaussian) VAEs the ELBO converges to a value given by the sum of three entropies: the (negative) entropy of the prior distribution, the expected (negative) entropy of the observable distribution, and the average entropy of the variational distributions (the latter is already part of the ELBO).
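In notation assumed here (encoder $q_\Phi(z \mid y)$, prior $p_\Theta(z)$, decoder $p_\Theta(y \mid z)$, and $N$ data points $y^{(n)}$), the stated result reads $\mathcal{L}(\Phi, \Theta) \rightarrow \frac{1}{N}\sum_n \mathrm{H}[q_\Phi(z \mid y^{(n)})] - \mathrm{H}[p_\Theta(z)] - \frac{1}{N}\sum_n \mathbb{E}_{q_\Phi}[\mathrm{H}[p_\Theta(y \mid z)]]$.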
no code implementations • 27 Nov 2020 • Enrico Guiraud, Jakob Drefs, Jörg Lücke
Discrete latent variables are considered important for real world data, which has motivated research on Variational Autoencoders (VAEs) with discrete latents.
no code implementations • 22 Dec 2020 • Jakob Drefs, Enrico Guiraud, Jörg Lücke
In general, our investigations highlight the importance of research on optimization methods for generative models to achieve performance improvements.
no code implementations • 7 Sep 2022 • Jörg Lücke, Jan Warnken
In this purely theoretical contribution, we show that (for a very large class of generative models) the variational lower bound is, at all stationary points of learning, equal to a sum of entropies.
1 code implementation • 3 Nov 2023 • Dmytro Velychko, Simon Damm, Asja Fischer, Jörg Lücke
Our main contributions are theoretical, however, and they are twofold: (1) for non-trivial posterior approximations, we provide (to the best of our knowledge) the first analytical ELBO objective for standard probabilistic sparse coding; and (2) we provide the first demonstration of how a recently shown convergence of the ELBO to entropy sums can be used for learning.
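Here, "standard probabilistic sparse coding" is commonly taken to be the model (notation and unit Laplace scale assumed) $p(z) = \prod_h \mathrm{Laplace}(z_h; 0, 1)$ with $p(y \mid z) = \mathcal{N}(y; Wz, \sigma^2\mathbb{1})$, for which the paper derives an analytical ELBO for non-trivial posterior approximations.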