Search Results for author: Jörg Lücke

Found 17 papers, 0 papers with code

Evolutionary Variational Optimization of Generative Models

no code implementations · 22 Dec 2020 · Jakob Drefs, Enrico Guiraud, Jörg Lücke

In general, our investigations highlight the importance of research on optimization methods for generative models to achieve performance improvements.

Image Denoising Zero-Shot Learning

Direct Evolutionary Optimization of Variational Autoencoders With Binary Latents

no code implementations · 27 Nov 2020 · Enrico Guiraud, Jakob Drefs, Jörg Lücke

Discrete latent variables are considered important for real-world data, which has motivated research on Variational Autoencoders (VAEs) with discrete latents.

Zero-Shot Learning

The Evidence Lower Bound of Variational Autoencoders Converges to a Sum of Three Entropies

no code implementations · 28 Oct 2020 · Jörg Lücke, Dennis Forster, Zhenwen Dai

Our derived analytical results are exact and apply to small as well as complex neural networks used as decoder and encoder.

Maximal Causes for Exponential Family Observables

no code implementations · 4 Mar 2020 · S. Hamid Mousavi, Jakob Drefs, Florian Hirschberger, Jörg Lücke

However, in many cases observables do not follow a normal distribution, and a linear summation of latents is often at odds with non-Gaussian observables (e.g., means of the Bernoulli distribution must lie in the unit interval).
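A toy illustration of that constraint (hypothetical code, not the paper's implementation): with Bernoulli observables, linearly summing active causes can push an observable mean above 1, whereas a max-superposition of valid Bernoulli means cannot leave the unit interval.

```python
import numpy as np

# Toy illustration (not the paper's code): each Bernoulli observable mean
# must lie in [0, 1]. Summing active latent causes linearly can violate
# this, whereas a max-superposition ("maximal causes") keeps the mean in
# [0, 1] as long as each dictionary row is itself a valid Bernoulli mean.
rng = np.random.default_rng(0)
W = rng.uniform(0.2, 0.9, size=(3, 5))   # 3 latent causes, 5 Bernoulli observables
s = np.array([1, 1, 1])                  # all three causes active

linear_mean = s @ W                      # sums of means: can exceed 1
max_mean = np.max(W[s == 1], axis=0)     # elementwise max: stays in [0, 1]
```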

Denoising Latent Variable Models

ProSper -- A Python Library for Probabilistic Sparse Coding with Non-Standard Priors and Superpositions

no code implementations · 1 Aug 2019 · Georgios Exarchakis, Jörg Bornschein, Abdul-Saboor Sheikh, Zhenwen Dai, Marc Henniges, Jakob Drefs, Jörg Lücke

The library widens the scope of dictionary learning approaches beyond implementations of standard approaches such as ICA, NMF or standard L1 sparse coding.

Dictionary Learning

Large Scale Clustering with Variational EM for Gaussian Mixture Models

no code implementations · 1 Oct 2018 · Florian Hirschberger, Dennis Forster, Jörg Lücke

We first show theoretically how the clustering objective of variational EM (which reduces complexity for many clusters) can be combined with coreset objectives (which reduce complexity for many data points).


Truncated Variational Sampling for "Black Box" Optimization of Generative Models

no code implementations · 21 Dec 2017 · Jörg Lücke, Zhenwen Dai, Georgios Exarchakis

We investigate the optimization of two probabilistic generative models with binary latent variables using a novel variational EM approach.

Can clustering scale sublinearly with its clusters? A variational EM acceleration of GMMs and $k$-means

no code implementations · 9 Nov 2017 · Dennis Forster, Jörg Lücke

The basic idea is to use a partial variational E-step which reduces the linear complexity of $\mathcal{O}(NCD)$ required for a full E-step to a sublinear complexity.
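A minimal sketch of that idea (illustrative names and shapes, not the authors' code): responsibilities are computed only over a per-point candidate set of $C' \ll C$ clusters, so the E-step costs $\mathcal{O}(NC'D)$ instead of $\mathcal{O}(NCD)$.

```python
import numpy as np

# Illustrative sketch of a partial/truncated variational E-step for a GMM
# (not the authors' implementation): each data point evaluates
# responsibilities only over its own small candidate set of clusters.
def truncated_e_step(X, mu, candidates):
    """X: (N, D) data; mu: (C, D) means; candidates: (N, Cp) cluster indices."""
    # squared distances only to each point's candidate clusters: O(N*Cp*D)
    d2 = np.sum((X[:, None, :] - mu[candidates]) ** 2, axis=2)  # (N, Cp)
    logits = -0.5 * d2
    logits -= logits.max(axis=1, keepdims=True)   # stabilize the softmax
    r = np.exp(logits)
    r /= r.sum(axis=1, keepdims=True)             # normalize over candidates only
    return r

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
mu = rng.normal(size=(50, 2))
cand = np.argsort(rng.random((100, 50)), axis=1)[:, :5]  # 5 candidates per point
r = truncated_e_step(X, mu, cand)
```

In the actual algorithm the candidate sets would be chosen cleverly (e.g., from nearby clusters) rather than at random as in this sketch.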

$k$-means as a variational EM approximation of Gaussian mixture models

no code implementations · 16 Apr 2017 · Jörg Lücke, Dennis Forster

We show that $k$-means (Lloyd's algorithm) is obtained as a special case when truncated variational EM approximations are applied to Gaussian Mixture Models (GMM) with isotropic Gaussians.
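A minimal sketch of this correspondence (illustrative code, assuming isotropic Gaussians with a fixed shared variance): truncating the variational distribution to the single best cluster per point turns the E-step into $k$-means' hard assignment, and the M-step into Lloyd's mean update.

```python
import numpy as np

# Illustrative sketch (not the paper's code): one Lloyd iteration as a
# truncated variational EM step for an isotropic, fixed-variance GMM.
def lloyd_step(X, mu):
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (N, C)
    z = d2.argmin(axis=1)     # truncated E-step: keep only the best cluster
    new_mu = np.array([X[z == c].mean(axis=0) if np.any(z == c) else mu[c]
                       for c in range(mu.shape[0])])  # M-step: cluster means
    return z, new_mu

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 0.1, (20, 2)),   # cluster near (0, 0)
                    rng.normal(3.0, 0.1, (20, 2))])  # cluster near (3, 3)
mu0 = np.array([[0.5, 0.5], [2.5, 2.5]])
z, mu = lloyd_step(X, mu0)
```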

Truncated Variational EM for Semi-Supervised Neural Simpletrons

no code implementations · 7 Feb 2017 · Dennis Forster, Jörg Lücke

Inference and learning for probabilistic generative networks are often very challenging and typically prevent scaling to networks as large as those used for deep discriminative approaches.

Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics

no code implementations · NeurIPS 2016 · Travis Monk, Cristina Savin, Jörg Lücke

We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities.

Select-and-Sample for Spike-and-Slab Sparse Coding

no code implementations · NeurIPS 2016 · Abdul-Saboor Sheikh, Jörg Lücke

As an example model we use spike-and-slab sparse coding for V1 processing, and combine latent subspace selection with Gibbs sampling (select-and-sample).
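A schematic of that combination (hypothetical toy code, not the paper's spike-and-slab model): for each data point, first select a small subset of promising binary latents, then run Gibbs sampling only inside that subspace while the deselected latents stay switched off.

```python
import numpy as np

# Schematic toy version of select-and-sample (not the paper's model):
# select the K latents with the highest individual scores, then
# Gibbs-sample only those, keeping all deselected latents clamped to 0.
def select_and_sample(scores, cond_p1, H, K, sweeps, rng):
    """scores: (H,) per-latent selection scores;
    cond_p1(s, h) -> P(s_h = 1 | s_{-h}) (toy conditional here)."""
    subspace = np.argsort(scores)[-K:]        # select: top-K latent dimensions
    s = np.zeros(H, dtype=int)                # deselected latents stay at 0
    for _ in range(sweeps):                   # sample: Gibbs sweeps in subspace
        for h in subspace:
            s[h] = int(rng.random() < cond_p1(s, h))
    return s, subspace

rng = np.random.default_rng(0)
H, K = 10, 3
scores = rng.random(H)
# toy conditional: each selected latent is 'on' with fixed probability 0.8
s, subspace = select_and_sample(scores, lambda s, h: 0.8, H, K, sweeps=5, rng=rng)
```

In the real model the conditional `cond_p1` would come from the spike-and-slab posterior; the fixed 0.8 here is purely illustrative.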


Truncated Variational Expectation Maximization

no code implementations · 10 Oct 2016 · Jörg Lücke

The specific structure of truncated distributions allows for deriving novel and mathematically grounded results, which in turn can be used to formulate novel efficient algorithms to optimize the parameters of probabilistic generative models.

Latent Variable Models

Neural Simpletrons - Minimalistic Directed Generative Networks for Learning with Few Labels

no code implementations · 28 Jun 2015 · Dennis Forster, Abdul-Saboor Sheikh, Jörg Lücke

This results in powerful though very complex models that are hard to train and that demand additional labels for optimal parameter tuning; such labels are often unavailable when labeled data is very sparse.

What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach

no code implementations · NeurIPS 2013 · Zhenwen Dai, Georgios Exarchakis, Jörg Lücke

By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions.

A Truncated EM Approach for Spike-and-Slab Sparse Coding

no code implementations · 15 Nov 2012 · Abdul-Saboor Sheikh, Jacquelyn A. Shelton, Jörg Lücke

We investigate two approaches to optimize the parameters of spike-and-slab sparse coding: a novel truncated EM approach and, for comparison, an approach based on standard factored variational distributions.

Image Denoising

Occlusive Components Analysis

no code implementations · NeurIPS 2009 · Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges

We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another.
