no code implementations • 29 Feb 2024 • Pavlos Rath-Manakidis, Frederik Strothmann, Tobias Glasmachers, Laurenz Wiskott
Interpretation and visualization of the behavior of detection transformers tends to highlight the locations in the image that the model attends to, but it provides limited insight into the \emph{semantics} that the model is focusing on.
no code implementations • 19 Feb 2024 • Moritz Lange, Raphael C. Engelhardt, Wolfgang Konen, Laurenz Wiskott
Visual navigation requires a whole range of capabilities.
1 code implementation • 17 Jan 2024 • Jan Rathjens, Laurenz Wiskott
Predictive coding-inspired deep networks for visual computing integrate classification and reconstruction processes in shared intermediate layers.
no code implementations • 6 Oct 2023 • Moritz Lange, Noah Krystiniak, Raphael C. Engelhardt, Wolfgang Konen, Laurenz Wiskott
These insights can inform future development of interpretable representation learning approaches for non-visual observations and advance the use of RL solutions in real-world scenarios.
no code implementations • 5 Jul 2022 • Eddie Seabrook, Laurenz Wiskott
Markov chains are a class of probabilistic models that have achieved widespread application in the quantitative sciences.
no code implementations • 26 Nov 2021 • Zahra Fayyaz, Aya Altamimi, Sen Cheng, Laurenz Wiskott
We assume that attention selects some parts of the index matrix while others are discarded; the selected parts then represent the gist of the episode and are stored as a memory trace.
1 code implementation • 15 Nov 2021 • Robin Schiewer, Laurenz Wiskott
As a remedy, enforcing an internal structure on the learned dynamics model by training isolated sub-networks for each task notably improves performance while using the same number of parameters.
no code implementations • 7 May 2021 • Hlynur Davíð Hlynsson, Laurenz Wiskott
One of the fundamental challenges in reinforcement learning (RL) is data efficiency: modern algorithms require a very large number of training samples, especially compared to humans, to solve environments with high-dimensional observations.
no code implementations • 9 Nov 2020 • Stefan Richthofer, Laurenz Wiskott
We study the special case that the potential $q$ is zero under Neumann boundary conditions and give simple and explicit criteria, solely in terms of the coefficient functions, to assess whether various properties of the regular case apply.
no code implementations • 20 Sep 2020 • Hlynur Davíð Hlynsson, Merlin Schüler, Robin Schiewer, Tobias Glasmachers, Laurenz Wiskott
The prediction function is used as a forward model for search on a graph in a viewpoint-matching task, and the representation learned to maximize predictability is found to outperform a pre-trained representation.
no code implementations • 18 Sep 2019 • Laurenz Wiskott, Fabian Schönfeld
Many problems in machine learning can be expressed by means of a graph with nodes representing training samples and edges representing the relationship between samples in terms of similarity, temporal proximity, or label information.
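Such a graph can be written down directly as an adjacency matrix over the training samples. The sketch below is a hypothetical helper (names and kernel choice are my own, not taken from the paper) building a similarity graph with Gaussian-kernel edge weights:

```python
import numpy as np

def similarity_graph(X, sigma=1.0):
    """Adjacency matrix of a similarity graph over the rows of X:
    W[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)), with no self-edges."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

# Three 1-D samples: the first two are close, the third is far away.
X = np.array([[0.0], [0.1], [5.0]])
W = similarity_graph(X)
```

Edges encoding temporal proximity or label information follow the same pattern, only with a different rule for the entries of the adjacency matrix.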
no code implementations • 3 Jul 2019 • Hlynur Davíð Hlynsson, Alberto N. Escalante-B., Laurenz Wiskott
The algorithms are trained on different-sized subsets of the MNIST and Omniglot data sets.
no code implementations • 30 May 2019 • Jan Melchior, Mehdi Bayati, Amir Azizi, Sen Cheng, Laurenz Wiskott
To our knowledge, this is the first model of the hippocampus that can store correlated pattern sequences online, in one-shot fashion and without a consolidation process, such that they can be recalled instantaneously later.
no code implementations • 25 May 2019 • Jan Melchior, Laurenz Wiskott
Hebbian-descent addresses these problems by discarding the activation function's derivative and by centering, i.e., keeping the neural activities mean-free. This leads to a biologically plausible update rule that is provably convergent, does not suffer from the vanishing-error-term problem, can deal with correlated data, profits from seeing patterns several times, and enables successful online learning when centering is used.
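An update rule of the kind described can be sketched as follows. This is a simplified, hypothetical single-layer version (function names and hyperparameters are my own, not the authors' reference implementation): the error term omits the sigmoid derivative, and the inputs are centered before the Hebbian-style outer-product update.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def hebbian_descent_step(X, T, W, b, lr=0.5):
    """One Hebbian-descent-style update: like the delta rule, but the
    activation function's derivative is dropped from the error term,
    and the inputs are centered (made mean-free) first."""
    Xc = X - X.mean(axis=0)            # centering
    Y = sigmoid(Xc @ W.T + b)
    E = T - Y                          # no sigmoid-derivative factor here
    W = W + lr * (E.T @ Xc) / len(X)
    b = b + lr * E.mean(axis=0)
    return W, b

# Toy data: two patterns with opposite targets.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
T = np.array([[1.0], [0.0]])
W, b = np.zeros((1, 2)), np.zeros(1)
for _ in range(500):
    W, b = hebbian_descent_step(X, T, W, b)
```

Because the derivative of the activation function never enters, the update cannot vanish when the unit saturates, which is the property the abstract refers to as avoiding the vanishing error term.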
no code implementations • 22 Apr 2019 • Hlynur Davíð Hlynsson, Laurenz Wiskott
Several methods of estimating the mutual information of random variables have been developed in recent years.
no code implementations • 27 Aug 2018 • Merlin Schüler, Hlynur Davíð Hlynsson, Laurenz Wiskott
We propose Power Slow Feature Analysis, a gradient-based method to extract temporally slow features from a high-dimensional input stream that varies on a faster time scale. As a variant of Slow Feature Analysis (SFA), it allows end-to-end training of arbitrary differentiable architectures and thereby significantly extends the class of models that can effectively be used for slow feature extraction.
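PowerSFA itself is gradient-based, but the objective it optimizes is that of classic SFA. As a point of reference, the standard closed-form *linear* SFA construction (a textbook baseline, not the PowerSFA algorithm) can be sketched as:

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """Closed-form linear SFA: whiten the data, then take the directions
    in which the whitened signal varies most slowly over time."""
    X = X - X.mean(axis=0)
    d, E = np.linalg.eigh(X.T @ X / len(X))
    white = E / np.sqrt(d)               # whitening matrix (scaled eigenvector columns)
    dZ = np.diff(X @ white, axis=0)      # temporal differences of whitened data
    d2, E2 = np.linalg.eigh(dZ.T @ dZ / len(dZ))
    W = white @ E2[:, :n_features]       # smallest eigenvalues = slowest features
    return X @ W, W

# A slow and a fast sine, linearly mixed into two observed channels.
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(25 * t)
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])
y, W = linear_sfa(X, n_features=1)
```

The recovered feature matches the slow source up to sign and scale; PowerSFA replaces the closed-form eigendecomposition with gradient descent so that arbitrary differentiable networks can be trained on the same objective.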
no code implementations • 22 May 2018 • Stefan Richthofer, Laurenz Wiskott
While PFA obtains a well predictable model, PFAx yields a model ideally suited for manipulations with predictable outcome.
no code implementations • 2 Dec 2017 • Stefan Richthofer, Laurenz Wiskott
Features are extracted exclusively from the main input, such that they are most predictable from themselves together with the supplementary information.
no code implementations • 17 Jan 2017 • Varun Raj Kompella, Laurenz Wiskott
The former makes the robot self-motivated to explore, rewarding itself whenever it makes progress in learning an abstraction; the latter updates the abstraction by extracting slowly varying components from raw sensory inputs.
no code implementations • 1 Feb 2016 • Björn Weghenkel, Asja Fischer, Laurenz Wiskott
We propose graph-based predictable feature analysis (GPFA), a new method for unsupervised learning of predictable features from high-dimensional time series, where high predictability is understood very generically as low variance in the distribution of the next data point given the previous ones.
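That notion of predictability can be made concrete with a simple scoring function: fit a linear predictor of the next value from the previous p values and take the variance of the residuals (low variance = high predictability). This is only an illustrative helper under that definition, not the GPFA algorithm itself:

```python
import numpy as np

def predictability_score(y, p=2):
    """Residual variance of a least-squares linear predictor of y[t]
    from y[t-p], ..., y[t-1]; low variance means high predictability."""
    lags = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    A = np.column_stack([lags, np.ones(len(lags))])   # include a bias term
    target = y[p:]
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return (target - A @ coef).var()

rng = np.random.default_rng(0)
sine = np.sin(0.1 * np.arange(2000))       # satisfies an exact linear recurrence
noise = rng.standard_normal(2000)          # unpredictable by construction
```

A sampled sinusoid obeys an exact second-order linear recurrence, so its score is essentially zero, while white noise scores roughly its own variance.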
no code implementations • 15 Jan 2016 • Alberto N. Escalante-B., Laurenz Wiskott
The feature space spanned by HGSFA is complex due to the composition of the nonlinearities of the nodes in the network.
no code implementations • 28 Sep 2015 • Alberto N. Escalante-B., Laurenz Wiskott
The method is versatile, directly supports multiple labels, and provides higher accuracy compared to current graphs for the problems considered.
1 code implementation • 23 Jan 2014 • Nan Wang, Jan Melchior, Laurenz Wiskott
We present a theoretical analysis of Gaussian-binary restricted Boltzmann machines (GRBMs) from the perspective of density models.
no code implementations • 20 Dec 2013 • Nan Wang, Dirk Jancke, Laurenz Wiskott
Our work demonstrates that the centered GDBM is a meaningful model of basic receptive-field properties and the emergence of spontaneous activity patterns in early cortical visual areas.
no code implementations • 11 Nov 2013 • Stefan Richthofer, Laurenz Wiskott
In our approach, we measure predictability with respect to a certain prediction model.
1 code implementation • 6 Nov 2013 • Jan Melchior, Asja Fischer, Laurenz Wiskott
This work analyzes centered binary Restricted Boltzmann Machines (RBMs) and binary Deep Boltzmann Machines (DBMs), where centering is done by subtracting offset values from visible and hidden variables.
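Centering enters the model through the energy function: the offsets are subtracted from the visible and hidden variables in every term. A minimal sketch for the binary case (helper names are my own; setting both offsets to zero recovers the ordinary RBM energy):

```python
import numpy as np

def centered_energy(v, h, W, b, c, mu, lam):
    """Energy of a centered binary RBM: visible offset mu and hidden
    offset lam are subtracted from v and h in every term."""
    return (-(v - mu) @ W @ (h - lam)
            - (v - mu) @ b
            - (h - lam) @ c)

def p_h_given_v(v, W, c, mu):
    """P(h_j = 1 | v) for the centered RBM, derived from the energy:
    only the terms linear in h_j contribute."""
    return 1.0 / (1.0 + np.exp(-((v - mu) @ W + c)))

rng = np.random.default_rng(0)
n_v, n_h = 4, 3
W = rng.normal(scale=0.1, size=(n_v, n_h))
b, c = np.zeros(n_v), np.zeros(n_h)
mu, lam = np.full(n_v, 0.5), np.full(n_h, 0.5)   # e.g. data means as offsets
v = np.array([1.0, 0.0, 1.0, 1.0])
h = np.array([0.0, 1.0, 0.0])
```

A common choice is to set the offsets to the (running) means of the variables, which keeps the effective inputs to each layer mean-free during training.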