Search Results for author: Jörg Lücke

Found 19 papers, 5 papers with code

Learning Sparse Codes with Entropy-Based ELBOs

1 code implementation • 3 Nov 2023 • Dmytro Velychko, Simon Damm, Asja Fischer, Jörg Lücke

Our main contributions are theoretical, however, and they are twofold: (1) for non-trivial posterior approximations, we provide the (to the authors' knowledge) first analytical ELBO objective for standard probabilistic sparse coding; and (2) we provide the first demonstration of how a recently shown convergence of the ELBO to entropy sums can be used for learning.

On the Convergence of the ELBO to Entropy Sums

no code implementations • 7 Sep 2022 • Jörg Lücke, Jan Warnken

In this purely theoretical contribution, we show that, for a very large class of generative models, the variational lower bound is equal to a sum of entropies at all stationary points of learning.

Evolutionary Variational Optimization of Generative Models

no code implementations • 22 Dec 2020 • Jakob Drefs, Enrico Guiraud, Jörg Lücke

In general, our investigations highlight the importance of research on optimization methods for generative models to achieve performance improvements.

Evolutionary Algorithms • Image Denoising • +1

Direct Evolutionary Optimization of Variational Autoencoders With Binary Latents

no code implementations • 27 Nov 2020 • Enrico Guiraud, Jakob Drefs, Jörg Lücke

Discrete latent variables are considered important for real-world data, which has motivated research on Variational Autoencoders (VAEs) with discrete latents.

Evolutionary Algorithms • Zero-Shot Learning

The ELBO of Variational Autoencoders Converges to a Sum of Three Entropies

1 code implementation • 28 Oct 2020 • Simon Damm, Dennis Forster, Dmytro Velychko, Zhenwen Dai, Asja Fischer, Jörg Lücke

Here we show that for standard (i.e., Gaussian) VAEs the ELBO converges to a value given by the sum of three entropies: the (negative) entropy of the prior distribution, the expected (negative) entropy of the observable distribution, and the average entropy of the variational distributions (the latter is already part of the ELBO).
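
The three terms can be written out compactly; the following is a sketch in generic VAE notation, where the symbols (prior $p_\Theta(z)$, decoder $p_\Theta(x|z)$, encoder $q_\Phi(z|x)$) are assumed here rather than quoted from the paper:

$$\mathcal{L}(\Phi, \Theta) \;\longrightarrow\; -\,\mathcal{H}\big[p_\Theta(z)\big] \;-\; \frac{1}{N}\sum_{n=1}^{N} \mathbb{E}_{q_\Phi(z|x^{(n)})}\Big[\mathcal{H}\big[p_\Theta(x|z)\big]\Big] \;+\; \frac{1}{N}\sum_{n=1}^{N} \mathcal{H}\big[q_\Phi(z|x^{(n)})\big]$$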

Generic Unsupervised Optimization for a Latent Variable Model With Exponential Family Observables

1 code implementation • 4 Mar 2020 • Hamid Mousavi, Jakob Drefs, Florian Hirschberger, Jörg Lücke

Here, we consider LVMs that are defined for a range of different distributions, i.e., observables can follow any (regular) distribution of the exponential family.

Denoising
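
For context, a (regular) exponential family density has the generic form below; the notation is the standard textbook one and is assumed here, not quoted from the paper:

$$p(x \mid \eta) \;=\; h(x)\, \exp\!\big(\eta^{\top} T(x) - A(\eta)\big)$$

with natural parameters $\eta$, sufficient statistics $T(x)$, base measure $h(x)$, and log-partition function $A(\eta)$; Gaussian, Bernoulli, Poisson, and Gamma observables are special cases.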

ProSper -- A Python Library for Probabilistic Sparse Coding with Non-Standard Priors and Superpositions

no code implementations • 1 Aug 2019 • Georgios Exarchakis, Jörg Bornschein, Abdul-Saboor Sheikh, Zhenwen Dai, Marc Henniges, Jakob Drefs, Jörg Lücke

The library widens the scope of dictionary learning beyond implementations of standard approaches such as ICA, NMF, or L1 sparse coding.

Dictionary Learning

Large Scale Clustering with Variational EM for Gaussian Mixture Models

1 code implementation • 1 Oct 2018 • Florian Hirschberger, Dennis Forster, Jörg Lücke

The aim of the project (which resulted in this arXiv version and the later TPAMI paper) is to explore the current efficiency and large-scale limits of fitting parametric clustering models to data distributions.

Benchmarking • Clustering • +1

Truncated Variational Sampling for "Black Box" Optimization of Generative Models

no code implementations • 21 Dec 2017 • Jörg Lücke, Zhenwen Dai, Georgios Exarchakis

We investigate the optimization of two probabilistic generative models with binary latent variables using a novel variational EM approach.

Can clustering scale sublinearly with its clusters? A variational EM acceleration of GMMs and $k$-means

no code implementations • 9 Nov 2017 • Dennis Forster, Jörg Lücke

The basic idea is to use a partial variational E-step which reduces the linear complexity of $\mathcal{O}(NCD)$ required for a full E-step to a sublinear complexity.

Clustering
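
A minimal Python sketch of such a partial E-step for isotropic Gaussian mixtures, assuming a precomputed candidate set of clusters per data point; the names and simplifications here (unit variances, fixed candidate sets) are illustrative and not the authors' implementation:

```python
import numpy as np

def truncated_e_step(X, means, cand):
    """Partial (truncated) variational E-step: responsibilities are evaluated
    only on a small candidate set of clusters per data point.
    X: (N, D) data; means: (C, D) cluster centers;
    cand: (N, C') candidate cluster indices with C' << C."""
    N, Cp = X.shape[0], cand.shape[1]
    resp = np.zeros((N, Cp))
    for n in range(N):
        # distances to the C' candidates only -> O(N C' D) per E-step instead of O(N C D)
        d2 = np.sum((means[cand[n]] - X[n]) ** 2, axis=1)
        log_r = -0.5 * d2           # isotropic, unit-variance Gaussians assumed
        log_r -= log_r.max()        # numerical stabilization
        r = np.exp(log_r)
        resp[n] = r / r.sum()       # normalize over the candidate set only
    return resp                     # truncated responsibilities q_n(c) for c in cand[n]
```

In the paper's approach the candidate sets are themselves updated cheaply between iterations, which is what makes the cost per E-step sublinear in the number of clusters $C$; the fixed `cand` array above is a simplification.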

$k$-means as a variational EM approximation of Gaussian mixture models

no code implementations • 16 Apr 2017 • Jörg Lücke, Dennis Forster

We show that $k$-means (Lloyd's algorithm) is obtained as a special case when truncated variational EM approximations are applied to Gaussian Mixture Models (GMM) with isotropic Gaussians.

Clustering
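
A sketch of the stated reduction (notation assumed, not quoted from the paper): if the variational distribution of each data point is truncated to a single component and the Gaussians are isotropic with equal, fixed variances and equal mixing proportions, the truncated E- and M-steps become

$$q_n(c) = \delta(c = c_n), \qquad c_n = \arg\min_c \lVert x^{(n)} - \mu_c \rVert^2, \qquad \mu_c = \frac{1}{|\{n : c_n = c\}|} \sum_{n :\, c_n = c} x^{(n)},$$

which is exactly Lloyd's $k$-means algorithm.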

Truncated Variational EM for Semi-Supervised Neural Simpletrons

no code implementations • 7 Feb 2017 • Dennis Forster, Jörg Lücke

Inference and learning for probabilistic generative networks are often very challenging, which typically prevents scaling to networks as large as those used in deep discriminative approaches.

Select-and-Sample for Spike-and-Slab Sparse Coding

no code implementations • NeurIPS 2016 • Abdul-Saboor Sheikh, Jörg Lücke

As an example model we use spike-and-slab sparse coding for V1 processing, and we combine latent subspace selection with Gibbs sampling (select-and-sample).

Denoising

Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics

no code implementations • NeurIPS 2016 • Travis Monk, Cristina Savin, Jörg Lücke

We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities.

Truncated Variational Expectation Maximization

1 code implementation • 10 Oct 2016 • Jörg Lücke

The specific structure of truncated distributions allows for deriving novel and mathematically grounded results, which in turn can be used to formulate novel efficient algorithms to optimize the parameters of probabilistic generative models.
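
One common way to write such a truncated distribution (notation assumed here, not quoted from the paper) restricts the exact posterior to a small set $\mathcal{K}_n$ of latent states and renormalizes:

$$q_n(s;\, \mathcal{K}_n, \Theta) \;=\; \frac{p(s \mid x^{(n)}, \Theta)}{\sum_{s' \in \mathcal{K}_n} p(s' \mid x^{(n)}, \Theta)}\, \delta(s \in \mathcal{K}_n),$$

so E-step expectations become finite sums over $\mathcal{K}_n$ instead of sums or integrals over the full latent space.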

Neural Simpletrons - Minimalistic Directed Generative Networks for Learning with Few Labels

no code implementations • 28 Jun 2015 • Dennis Forster, Abdul-Saboor Sheikh, Jörg Lücke

This results in powerful though very complex models that are hard to train and that demand additional labels for optimal parameter tuning; such labels are often not available when labeled data is very sparse.

What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach

no code implementations • NeurIPS 2013 • Zhenwen Dai, Georgios Exarchakis, Jörg Lücke

By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions.

Position

A Truncated EM Approach for Spike-and-Slab Sparse Coding

no code implementations • 15 Nov 2012 • Abdul-Saboor Sheikh, Jacquelyn A. Shelton, Jörg Lücke

We investigate two approaches to optimize the parameters of spike-and-slab sparse coding: a novel truncated EM approach and, for comparison, an approach based on standard factored variational distributions.

Image Denoising
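
For context, a generic spike-and-slab prior composes each latent out of a binary "spike" and a continuous "slab" (the parameters below are assumed for illustration):

$$z_h = s_h \, w_h, \qquad s_h \sim \mathrm{Bern}(\pi_h), \qquad w_h \sim \mathcal{N}(\mu_h, \sigma_h^2),$$

so each latent is exactly zero with probability $1 - \pi_h$ and Gaussian otherwise; the resulting multimodal posterior is what the truncated EM and factored variational approaches approximate in different ways.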

Occlusive Components Analysis

no code implementations • NeurIPS 2009 • Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges

We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another.

Object
