Search Results for author: Laurenz Wiskott

Found 26 papers, 4 papers with code

ProtoP-OD: Explainable Object Detection with Prototypical Parts

no code implementations29 Feb 2024 Pavlos Rath-Manakidis, Frederik Strothmann, Tobias Glasmachers, Laurenz Wiskott

Interpretation and visualization of the behavior of detection transformers tend to highlight the locations in the image that the model attends to, but they provide limited insight into the semantics that the model is focusing on.

Object Detection +1

Classification and Reconstruction Processes in Deep Predictive Coding Networks: Antagonists or Allies?

1 code implementation17 Jan 2024 Jan Rathjens, Laurenz Wiskott

Predictive coding-inspired deep networks for visual computing integrate classification and reconstruction processes in shared intermediate layers.

Classification

Improving Reinforcement Learning Efficiency with Auxiliary Tasks in Non-Visual Environments: A Comparison

no code implementations6 Oct 2023 Moritz Lange, Noah Krystiniak, Raphael C. Engelhardt, Wolfgang Konen, Laurenz Wiskott

These insights can inform future development of interpretable representation learning approaches for non-visual observations and advance the use of RL solutions in real-world scenarios.

Continuous Control reinforcement-learning +2

A Tutorial on the Spectral Theory of Markov Chains

no code implementations5 Jul 2022 Eddie Seabrook, Laurenz Wiskott

Markov chains are a class of probabilistic models that have achieved widespread application in the quantitative sciences.
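The spectral view analyzed in the tutorial studies a Markov chain through the eigenvalues and eigenvectors of its transition matrix. A minimal sketch of the two central quantities, using a toy 3-state matrix of my own (not taken from the tutorial):

```python
import numpy as np

# Toy 3-state transition matrix (rows sum to 1) -- an illustrative
# example, not from the tutorial itself.
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])

# The left eigenvector for eigenvalue 1 is the stationary distribution.
evals, evecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(evals - 1.0))
pi = np.real(evecs[:, idx])
pi /= pi.sum()

# The second-largest eigenvalue modulus governs the mixing rate:
# the closer it is to 1, the slower the chain converges.
slem = sorted(np.abs(evals))[-2]
```

Here `pi` comes out uniform because the toy matrix is doubly stochastic; the second-largest eigenvalue modulus `slem` is 0.9, indicating fairly slow mixing.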

A model of semantic completion in generative episodic memory

no code implementations26 Nov 2021 Zahra Fayyaz, Aya Altamimi, Sen Cheng, Laurenz Wiskott

We assume that attention selects some parts of the index matrix while others are discarded; this selection then represents the gist of the episode and is stored as a memory trace.

Hippocampus

Modular Networks Prevent Catastrophic Interference in Model-Based Multi-Task Reinforcement Learning

1 code implementation15 Nov 2021 Robin Schiewer, Laurenz Wiskott

As a remedy, enforcing an internal structure for the learned dynamics model by training isolated sub-networks for each task notably improves performance while using the same number of parameters.

Reinforcement Learning (RL)

Reward prediction for representation learning and reward shaping

no code implementations7 May 2021 Hlynur Davíð Hlynsson, Laurenz Wiskott

One of the fundamental challenges in reinforcement learning (RL) is data efficiency: modern algorithms require a very large number of training samples, especially compared to humans, to solve environments with high-dimensional observations.

Reinforcement Learning (RL) Representation Learning

Singular Sturm-Liouville Problems with Zero Potential (q=0) and Singular Slow Feature Analysis

no code implementations9 Nov 2020 Stefan Richthofer, Laurenz Wiskott

We study the special case that the potential $q$ is zero under Neumann boundary conditions and give simple and explicit criteria, solely in terms of the coefficient functions, to assess whether various properties of the regular case apply.

blind source separation Total Energy
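For reference, the Sturm-Liouville problem in its standard form, with the abstract's specialization to zero potential under Neumann boundary conditions (this is the textbook form; the coefficient functions $p$ and $w$ are the ones the abstract's criteria refer to):

```latex
-\frac{d}{dx}\!\left(p(x)\,\frac{dy}{dx}\right) + q(x)\,y = \lambda\, w(x)\, y,
\qquad q \equiv 0,
\qquad y'(a) = y'(b) = 0,
```

i.e. the equation reduces to $-(p\,y')' = \lambda\, w\, y$, and the criteria for the singular case are stated solely in terms of $p$ and $w$.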

Latent Representation Prediction Networks

no code implementations20 Sep 2020 Hlynur Davíð Hlynsson, Merlin Schüler, Robin Schiewer, Tobias Glasmachers, Laurenz Wiskott

The prediction function is used as a forward model for search on a graph in a viewpoint-matching task and the representation learned to maximize predictability is found to outperform a pre-trained representation.

Navigate

Laplacian Matrix for Dimensionality Reduction and Clustering

no code implementations18 Sep 2019 Laurenz Wiskott, Fabian Schönfeld

Many problems in machine learning can be expressed by means of a graph with nodes representing training samples and edges representing the relationship between samples in terms of similarity, temporal proximity, or label information.

Clustering Dimensionality Reduction
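A minimal sketch of the construction these lecture notes build on: form the graph Laplacian L = D − W from a similarity matrix, and use its low eigenvectors as an embedding (as in Laplacian eigenmaps / spectral clustering). The toy similarity matrix is my own, not from the notes:

```python
import numpy as np

# Symmetric similarity (adjacency) matrix for 4 samples -- a toy
# example; in practice W would come from k-NN graphs, kernel
# similarities, temporal proximity, or labels.
W = np.array([[0.0, 1.0, 1.0, 0.1],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 0.0]])

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # unnormalized graph Laplacian

# Eigenvectors with the smallest nonzero eigenvalues embed the
# samples so that strongly connected nodes land close together.
evals, evecs = np.linalg.eigh(L)
embedding = evecs[:, 1:3]    # skip the constant eigenvector
```

By construction every row of L sums to zero, the spectrum is non-negative, and the smallest eigenvalue is 0 with a constant eigenvector, which is why the embedding starts at the second column.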

A Hippocampus Model for Online One-Shot Storage of Pattern Sequences

no code implementations30 May 2019 Jan Melchior, Mehdi Bayati, Amir Azizi, Sen Cheng, Laurenz Wiskott

To our knowledge, this is the first model of the hippocampus that can store correlated pattern sequences online in a one-shot fashion, without a consolidation process, and recall them instantaneously later.

Hippocampus One-Shot Learning

Hebbian-Descent

no code implementations25 May 2019 Jan Melchior, Laurenz Wiskott

Hebbian-descent addresses these problems by discarding the activation function's derivative and by centering, i.e. keeping the neural activities mean-free. This leads to a biologically plausible update rule that is provably convergent, does not suffer from the vanishing error term problem, can deal with correlated data, profits from seeing patterns several times, and enables successful online learning when centering is used.

Contrastive Learning

Learning gradient-based ICA by neurally estimating mutual information

no code implementations22 Apr 2019 Hlynur Davíð Hlynsson, Laurenz Wiskott

Several methods of estimating the mutual information of random variables have been developed in recent years.

blind source separation

Gradient-based Training of Slow Feature Analysis by Differentiable Approximate Whitening

no code implementations27 Aug 2018 Merlin Schüler, Hlynur Davíð Hlynsson, Laurenz Wiskott

We propose Power Slow Feature Analysis, a gradient-based method to extract temporally slow features from a high-dimensional input stream that varies on a faster time scale. As a variant of Slow Feature Analysis (SFA), it allows end-to-end training of arbitrary differentiable architectures and thereby significantly extends the class of models that can effectively be used for slow feature extraction.
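For context, a sketch of the classical closed-form *linear* SFA that Power SFA generalizes: whiten the signal, then take the direction whose temporal derivative has the smallest variance. The toy data below is my own example, not from the paper:

```python
import numpy as np

# Toy signal: a slow sine mixed with a faster, noisier component.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 2000)
slow = np.sin(t)
fast = np.sin(11 * t) + 0.1 * rng.normal(size=t.size)
X = np.column_stack([slow + fast, slow - fast])
X -= X.mean(axis=0)

# Whitening: rotate and rescale so the covariance is the identity
# (the constraint Power SFA enforces with differentiable
# approximate whitening instead of a closed-form transform).
lam, U = np.linalg.eigh(X.T @ X / len(X))
Z = X @ U / np.sqrt(lam)

# Slowest feature = eigenvector of the derivative covariance
# with the smallest eigenvalue.
dZ = np.diff(Z, axis=0)
ev, V = np.linalg.eigh(dZ.T @ dZ / len(dZ))
y_slow = Z @ V[:, 0]
```

On this toy mixture the extracted feature recovers the slow sine (up to sign) almost perfectly.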

Global Navigation Using Predictable and Slow Feature Analysis in Multiroom Environments, Path Planning and Other Control Tasks

no code implementations22 May 2018 Stefan Richthofer, Laurenz Wiskott

While PFA obtains a well-predictable model, PFAx yields a model ideally suited for manipulations with a predictable outcome.

PFAx: Predictable Feature Analysis to Perform Control

no code implementations2 Dec 2017 Stefan Richthofer, Laurenz Wiskott

Features are extracted exclusively from the main input, such that they are most predictable based on themselves and the supplementary information.

Dimensionality Reduction feature selection

Intrinsically Motivated Acquisition of Modular Slow Features for Humanoids in Continuous and Non-Stationary Environments

no code implementations17 Jan 2017 Varun Raj Kompella, Laurenz Wiskott

The former is used to make the robot self-motivated to explore, by rewarding itself whenever it makes progress learning an abstraction; the latter is used to update the abstraction by extracting slowly varying components from raw sensory inputs.

Graph-based Predictable Feature Analysis

no code implementations1 Feb 2016 Björn Weghenkel, Asja Fischer, Laurenz Wiskott

We propose graph-based predictable feature analysis (GPFA), a new method for unsupervised learning of predictable features from high-dimensional time series, where high predictability is understood very generically as low variance in the distribution of the next data point given the previous ones.

Graph Embedding Time Series +1
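The notion of predictability in the abstract (low variance of the next data point given the previous ones) can be illustrated with a simple one-step least-squares predictor; this is my own simplification for intuition, since GPFA itself works with graph-based formulations:

```python
import numpy as np

def residual_variance(y):
    """Variance of y[t] left unexplained by a least-squares
    one-step linear predictor from y[t-1]."""
    past, nxt = y[:-1], y[1:]
    a = past @ nxt / (past @ past)   # AR(1) coefficient
    return np.var(nxt - a * past)

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 1000)
predictable = np.sin(t)              # smooth: next point is near-determined
unpredictable = rng.normal(size=t.size)  # white noise: past tells us nothing
```

A smooth feature scores a tiny residual variance (high predictability), while white noise scores close to its full variance, which is exactly the ordering GPFA's objective prefers.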

Improved graph-based SFA: Information preservation complements the slowness principle

no code implementations15 Jan 2016 Alberto N. Escalante-B., Laurenz Wiskott

The feature space spanned by HGSFA is complex due to the composition of the nonlinearities of the nodes in the network.

MORPH Time Series +1

Theoretical Analysis of the Optimal Free Responses of Graph-Based SFA for the Design of Training Graphs

no code implementations28 Sep 2015 Alberto N. Escalante-B., Laurenz Wiskott

The method is versatile, directly supports multiple labels, and provides higher accuracy compared to current graphs for the problems considered.

regression Time Series +1

Gaussian-binary Restricted Boltzmann Machines on Modeling Natural Image Statistics

1 code implementation23 Jan 2014 Nan Wang, Jan Melchior, Laurenz Wiskott

We present a theoretical analysis of Gaussian-binary restricted Boltzmann machines (GRBMs) from the perspective of density models.

blind source separation

Modeling correlations in spontaneous activity of visual cortex with centered Gaussian-binary deep Boltzmann machines

no code implementations20 Dec 2013 Nan Wang, Dirk Jancke, Laurenz Wiskott

Our work demonstrates that the centered GDBM is a meaningful modeling approach for basic receptive-field properties and the emergence of spontaneous activity patterns in early cortical visual areas.

Predictable Feature Analysis

no code implementations11 Nov 2013 Stefan Richthofer, Laurenz Wiskott

In our approach, we measure predictability with respect to a certain prediction model.

Decision Making

How to Center Binary Deep Boltzmann Machines

1 code implementation6 Nov 2013 Jan Melchior, Asja Fischer, Laurenz Wiskott

This work analyzes centered binary Restricted Boltzmann Machines (RBMs) and binary Deep Boltzmann Machines (DBMs), where centering is done by subtracting offset values from visible and hidden variables.
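Centering, as described here, subtracts offsets from the visible and hidden variables inside the energy function. One common form of the centered RBM energy from the centering literature (the exact offset choices analyzed in the paper may differ):

```latex
E(\mathbf{v}, \mathbf{h}) =
  -(\mathbf{v} - \boldsymbol{\mu})^{\top} \mathbf{W}\, (\mathbf{h} - \boldsymbol{\lambda})
  - (\mathbf{v} - \boldsymbol{\mu})^{\top} \mathbf{b}
  - (\mathbf{h} - \boldsymbol{\lambda})^{\top} \mathbf{c},
```

where $\boldsymbol{\mu}$ and $\boldsymbol{\lambda}$ are the visible and hidden offsets, e.g. (running) means of the unit activities; with $\boldsymbol{\mu} = \boldsymbol{\lambda} = \mathbf{0}$ the standard binary RBM energy is recovered.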
