Search Results for author: Wolfgang Maass

Found 28 papers, 9 papers with code

How can neuromorphic hardware attain brain-like functional capabilities?

no code implementations • 25 Oct 2023 • Wolfgang Maass

However, current architectures and training methods for networks of spiking neurons in neuromorphic hardware (NMHW) are largely copied from artificial neural networks.

A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware

1 code implementation • 8 Jul 2021 • Philipp Plank, Arjun Rao, Andreas Wild, Wolfgang Maass

Spike-based neuromorphic hardware holds the promise to provide more energy efficient implementations of Deep Neural Networks (DNNs) than standard hardware such as GPUs.

Question Answering Time Series +2
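
This line of work builds on spiking neurons whose firing threshold adapts after each spike, which gives them LSTM-like working memory on slow timescales. A minimal sketch of such an adaptive leaky integrate-and-fire (ALIF) unit; all constants and the function name are illustrative, not taken from the paper:

```python
import numpy as np

def alif_step(v, a, x, tau_m=20.0, tau_a=200.0, v_th=1.0, beta=1.6, dt=1.0):
    """One step of an adaptive leaky integrate-and-fire (ALIF) neuron.

    v: membrane potential, a: threshold adaptation variable, x: input
    current. The effective threshold v_th + beta*a jumps after each
    spike and decays slowly (tau_a >> tau_m), so recent activity is
    remembered on the timescale of tau_a.
    """
    v = v * np.exp(-dt / tau_m) + x          # leaky integration
    spike = float(v >= v_th + beta * a)      # adaptive threshold test
    v -= spike * v_th                        # soft reset after a spike
    a = a * np.exp(-dt / tau_a) + spike      # slow threshold adaptation
    return v, a, spike

v, a = 0.0, 0.0
for t in range(100):
    v, a, s = alif_step(v, a, x=0.3)
```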

Pairing Conceptual Modeling with Machine Learning

no code implementations • 27 Jun 2021 • Wolfgang Maass, Veda C. Storey

We then examine how conceptual modeling can be applied to machine learning and propose a framework for incorporating conceptual modeling into data science projects.

BIG-bench Machine Learning Knowledge Graphs

Current State and Future Directions for Learning in Biological Recurrent Neural Networks: A Perspective Piece

no code implementations • 12 May 2021 • Luke Y. Prince, Roy Henha Eyono, Ellen Boven, Arna Ghosh, Joe Pemberton, Franz Scherr, Claudia Clopath, Rui Ponte Costa, Wolfgang Maass, Blake A. Richards, Cristina Savin, Katharina Anna Wilmes

We briefly review common assumptions about biological learning in light of findings from experimental neuroscience, and contrast them with the efficiency of gradient-based learning in recurrent neural networks.

Online Spatio-Temporal Learning in Deep Neural Networks

1 code implementation • 24 Jul 2020 • Thomas Bohnstingl, Stanisław Woźniak, Wolfgang Maass, Angeliki Pantazi, Evangelos Eleftheriou

For shallow networks, OSTL is gradient-equivalent to backpropagation through time (BPTT), enabling online training of SNNs with BPTT-equivalent gradients for the first time.

Language Modelling speech-recognition +1
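
For intuition on how a gradient can be accumulated forward in time instead of by backpropagating through stored history, consider a toy linear recurrent unit, where a forward-propagated eligibility trace yields exactly the BPTT gradient. This only illustrates the principle, not the OSTL algorithm itself; all sizes and the toy loss are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, T, alpha = 3, 4, 20, 0.9
W = rng.normal(size=(n_h, n_in))
xs = rng.normal(size=(T, n_in))
target = rng.normal(size=n_h)

h = np.zeros(n_h)
e = np.zeros(n_in)        # eligibility trace, shared across rows of W
grad = np.zeros_like(W)
for x in xs:
    h = alpha * h + W @ x              # simple linear recurrent unit
    e = alpha * e + x                  # forward-propagated dh/dW factor
    grad += np.outer(h - target, e)    # loss 0.5*||h_t - target||^2 per step
# 'grad' now equals the full BPTT gradient of the summed loss,
# computed online without storing the state history.
```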

Embodied Synaptic Plasticity with Online Reinforcement learning

1 code implementation • 3 Mar 2020 • Jacques Kaiser, Michael Hoff, Andreas Konle, J. Camilo Vasquez Tieck, David Kappel, Daniel Reichard, Anand Subramoney, Robert Legenstein, Arne Roennau, Wolfgang Maass, Rüdiger Dillmann

We demonstrate this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following.

Reinforcement Learning (RL)
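
A schematic Euler step of a reward-gated synaptic sampling rule in the spirit of SPORE; the constants, the Gaussian prior, and the interface are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def spore_step(theta, eligibility, reward, lr=1e-3, temperature=0.1,
               prior_mean=0.0, prior_var=1.0, dt=1.0,
               rng=np.random.default_rng()):
    """Schematic step of reward-gated synaptic sampling.

    The drift combines the gradient of a Gaussian log-prior with a
    reward-modulated eligibility term; matched Gaussian noise keeps the
    parameters sampling rather than converging, which produces the
    ongoing rewiring characteristic of synaptic sampling.
    """
    prior_grad = -(theta - prior_mean) / prior_var
    drift = prior_grad + reward * eligibility
    noise = np.sqrt(2 * lr * temperature * dt) * rng.normal(size=theta.shape)
    return theta + lr * dt * drift + noise
```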

Optimized spiking neurons classify images with high accuracy through temporal coding with two spikes

1 code implementation • 31 Jan 2020 • Christoph Stöckl, Wolfgang Maass

Spike-based neuromorphic hardware promises to reduce the energy consumption of image classification and other deep learning applications, particularly on mobile phones or other edge devices.

General Classification Image Classification

Recognizing Images with at most one Spike per Neuron

no code implementations • 30 Dec 2019 • Christoph Stöckl, Wolfgang Maass

We introduce a new conversion method where a gate in the ANN, which can be of essentially any type, is emulated by a small circuit of spiking neurons, with At Most One Spike (AMOS) per neuron.

Image Classification
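
The at-most-one-spike idea can be illustrated with successive binary approximation: each unit fires at most once and contributes a power-of-two weight to the reconstructed gate value. This is a sketch of the coding principle only; the paper's actual spiking circuits differ in detail:

```python
import numpy as np

def amos_encode(value, n_neurons=8):
    """Approximate an analog gate value in [0, 1) with at most one
    spike per neuron: neuron k fires iff the remaining residual
    exceeds its weight 2**-(k+1)."""
    spikes = np.zeros(n_neurons)
    residual = value
    for k in range(n_neurons):
        w = 2.0 ** -(k + 1)
        if residual >= w:
            spikes[k] = 1.0
            residual -= w
    return spikes

def amos_decode(spikes):
    weights = 2.0 ** -(np.arange(len(spikes)) + 1)
    return float(spikes @ weights)

# 8 one-shot neurons already give 2**-8 precision on the gate value.
assert abs(amos_decode(amos_encode(0.7)) - 0.7) < 2 ** -8
```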

Reservoirs learn to learn

no code implementations • 16 Sep 2019 • Anand Subramoney, Franz Scherr, Wolfgang Maass

We wondered whether the performance of liquid state machines can be improved if the recurrent weights are chosen with a purpose, rather than randomly.
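
A minimal sketch of the learning-to-learn setup this suggests: an inner loop that trains only a linear readout, and an outer loop that optimizes the recurrent weights themselves. Random search stands in here for the paper's more sophisticated outer-loop optimization; the sizes and the toy task are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_reservoir(W, xs):
    """Drive a tanh reservoir and collect its states."""
    h, states = np.zeros(W.shape[0]), []
    for x in xs:
        h = np.tanh(W @ h + x)
        states.append(h)
    return np.array(states)

def inner_loop_loss(W, xs, ys, ridge=1e-2):
    """Inner loop: fit only a linear readout via ridge regression."""
    S = run_reservoir(W, xs)
    w = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ ys)
    return np.mean((S @ w - ys) ** 2)

# Outer loop: crude random search over the recurrent weights, i.e.
# choosing them "with a purpose, rather than randomly".
n, T = 50, 200
xs = rng.normal(size=(T, n)); ys = np.sin(np.arange(T) / 10.0)
best_W = rng.normal(size=(n, n)) / np.sqrt(n)
best = inner_loop_loss(best_W, xs, ys)
for _ in range(20):
    cand = best_W + 0.05 * rng.normal(size=(n, n))
    loss = inner_loop_loss(cand, xs, ys)
    if loss < best:
        best_W, best = cand, loss
```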

Eligibility traces provide a data-inspired alternative to backpropagation through time

no code implementations • NeurIPS Workshop Neuro_AI 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass

Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not model accurately how the brain learns.

Speech Recognition
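
The core of the eligibility-trace alternative (e-prop) is to factorize the gradient into a top-down learning signal and a synapse-local trace. A schematic update, with shapes, the decay constant, and the function name chosen for illustration:

```python
import numpy as np

def eprop_update(W, pre_spikes, post_pseudo, learning_signal,
                 decay=0.9, lr=1e-3, e_trace=None):
    """One step of an e-prop style update.

    pre_spikes: (n_pre,) recent presynaptic activity
    post_pseudo: (n_post,) pseudo-derivative of each postsynaptic neuron
    learning_signal: (n_post,) error signal broadcast to each neuron
    The eligibility trace uses only quantities local to the synapse and
    is propagated forward in time; multiplying it by the broadcast
    learning signal replaces backpropagation through time.
    """
    if e_trace is None:
        e_trace = np.zeros_like(W)
    e_trace = decay * e_trace + np.outer(post_pseudo, pre_spikes)
    W = W - lr * learning_signal[:, None] * e_trace
    return W, e_trace
```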

Efficient Reward-Based Structural Plasticity on a SpiNNaker 2 Prototype

no code implementations • 20 Mar 2019 • Yexin Yan, David Kappel, Felix Neumaerker, Johannes Partzsch, Bernhard Vogginger, Sebastian Hoeppner, Steve Furber, Wolfgang Maass, Robert Legenstein, Christian Mayr

Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources.

Neuromorphic Hardware learns to learn

no code implementations • 15 Mar 2019 • Thomas Bohnstingl, Franz Scherr, Christian Pehle, Karlheinz Meier, Wolfgang Maass

In contrast, the hyperparameters and learning algorithms of the networks of neurons in the brain that neuromorphic hardware aims to emulate have been optimized through extensive evolutionary and developmental processes for specific ranges of computing and learning tasks.

Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets

3 code implementations • 25 Jan 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass

This lack of understanding is linked to a lack of learning algorithms for recurrent networks of spiking neurons (RSNNs) that are both functionally powerful and can be implemented by known biological mechanisms.

Deep Rewiring: Training very sparse deep networks

4 code implementations • ICLR 2018 • Guillaume Bellec, David Kappel, Wolfgang Maass, Robert Legenstein

Neuromorphic hardware tends to pose limits on the connectivity of the deep networks that one can run on it.
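
Deep Rewiring constrains each connection to a fixed sign and a nonnegative amplitude, walks the amplitudes with noisy gradient steps plus an L1 pull, and swaps any connection whose amplitude hits zero for a randomly chosen dormant one, so the sparsity level stays constant throughout training. A sketch under those assumptions, on a flat parameter vector with illustrative constants:

```python
import numpy as np

def deep_r_step(theta, sign, active, grad_w, lr=1e-2, l1=1e-4,
                temperature=1e-3, rng=np.random.default_rng()):
    """One Deep Rewiring step. Each potential connection has a fixed
    sign and a nonnegative amplitude theta; its weight is sign * theta
    while it is active (boolean mask)."""
    noise = np.sqrt(2 * lr * temperature) * rng.normal(size=theta.shape)
    # Noisy gradient step on active amplitudes, with an L1 sparsity pull.
    theta = theta - active * (lr * (sign * grad_w + l1) - noise)
    # Amplitudes that crossed zero make their connection dormant ...
    died = active & (theta < 0)
    active = active & ~died
    theta = np.maximum(theta, 0.0)
    # ... and as many dormant connections are re-activated at amplitude
    # zero, keeping the number of active connections constant.
    if died.any():
        revive = rng.choice(np.flatnonzero(~active),
                            size=int(died.sum()), replace=False)
        active[revive] = True
    return theta, active
```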

A dynamic connectome supports the emergence of stable computational function of neural circuits through reward-based learning

no code implementations • 13 Apr 2017 • David Kappel, Robert Legenstein, Stefan Habenschuss, Michael Hsieh, Wolfgang Maass

These data are inconsistent with common models of network plasticity, and raise the questions of how neural circuits can maintain a stable computational function in spite of these continuously ongoing processes, and what functional uses these ongoing processes might have.

CaMKII activation supports reward-based neural network optimization through Hamiltonian sampling

no code implementations • 1 Jun 2016 • Zhaofei Yu, David Kappel, Robert Legenstein, Sen Song, Feng Chen, Wolfgang Maass

Our theoretical analysis shows that stochastic search could in principle even attain optimal network configurations by emulating one of the best-known nonlinear optimization methods, simulated annealing.
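
For reference, the optimization method being emulated; a generic simulated-annealing loop over a toy energy function (not code from the paper):

```python
import numpy as np

def simulated_annealing(energy, x0, steps=10_000, t0=1.0,
                        rng=np.random.default_rng()):
    """Generic simulated annealing: always accept downhill moves,
    accept uphill moves with probability exp(-dE/T), and cool T over
    time. The paper's point is that stochastic network dynamics can
    emulate this kind of search over network configurations."""
    x, e = x0, energy(x0)
    for k in range(steps):
        T = t0 / (1 + k)                         # cooling schedule
        cand = x + 0.1 * rng.normal(size=np.shape(x))
        de = energy(cand) - e
        if de < 0 or rng.random() < np.exp(-de / T):
            x, e = cand, e + de
    return x, e

x_opt, e_opt = simulated_annealing(lambda x: float(np.sum(x ** 2)),
                                   x0=np.ones(5))
```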

Synaptic Sampling: A Bayesian Approach to Neural Network Plasticity and Rewiring

no code implementations • NeurIPS 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass

We reexamine in this article the conceptual and mathematical framework for understanding the organization of plasticity in spiking neural networks.

Network Plasticity as Bayesian Inference

1 code implementation • 20 Apr 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass

General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference.

Bayesian Inference Learning Theory
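
The mathematical core of the synaptic-sampling line of work above is Langevin dynamics whose stationary distribution is a posterior over network configurations. A minimal Euler-step sketch; the step size and interface are illustrative:

```python
import numpy as np

def synaptic_sampling_step(theta, grad_log_posterior, b=1e-3, dt=1.0,
                           rng=np.random.default_rng()):
    """Euler step of Langevin dynamics: drift along the gradient of
    the log-posterior plus matched Gaussian noise. The stationary
    distribution of these dynamics is the posterior itself, so the
    synapses keep sampling plausible network configurations instead of
    converging to a single one."""
    noise = np.sqrt(2 * b * dt) * rng.normal(size=theta.shape)
    return theta + b * dt * grad_log_posterior(theta) + noise
```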

A theoretical basis for efficient computations with noisy spiking neurons

no code implementations • 18 Dec 2014 • Zeno Jonke, Stefan Habenschuss, Wolfgang Maass

Furthermore, for the Traveling Salesman Problem one can demonstrate a surprising computational advantage of networks of spiking neurons over traditional artificial neural networks and Gibbs sampling.

Traveling Salesman Problem

Functional network reorganization in motor cortex can be explained by reward-modulated Hebbian learning

no code implementations • NeurIPS 2009 • Steven Chase, Andrew Schwartz, Wolfgang Maass, Robert A. Legenstein

It was recently shown that tuning properties of neurons in monkey motor cortex are adapted selectively in order to compensate for an erroneous interpretation of their activity.
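
A reward-modulated Hebbian rule of the kind invoked here gates the pre/post correlation by the deviation of the reward from a running baseline. A schematic three-factor update; names and constants are illustrative, not the paper's exact rule:

```python
import numpy as np

def rm_hebbian_update(W, pre, post, reward, reward_baseline, lr=1e-3):
    """Three-factor Hebbian rule: the outer product of post- and
    presynaptic activity is scaled by (reward - baseline), so only
    activity patterns that improved performance are reinforced, and
    patterns that hurt it are weakened."""
    return W + lr * (reward - reward_baseline) * np.outer(post, pre)
```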

Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks

no code implementations • NeurIPS 2009 • Stefan Klampfl, Wolfgang Maass

Many models for computations in recurrent networks of neurons assume that the network state moves from some initial state to some fixed point attractor or limit cycle that represents the output of the computation.

General Classification
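
For contrast with attractor-based readouts, linear Slow Feature Analysis extracts the directions of a trajectory that change most slowly over time. A compact sketch of the standard linear SFA computation, not the paper's spiking implementation:

```python
import numpy as np

def slow_features(X, n_features=2):
    """Linear Slow Feature Analysis: whiten the signal, then take the
    unit-variance directions whose temporal derivative has the least
    variance. X has shape (time, channels)."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    evals = np.clip(evals, 1e-12, None)
    Z = X @ (evecs / np.sqrt(evals))        # whitened signal
    dZ = np.diff(Z, axis=0)                 # temporal derivative
    _, slow = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ slow[:, :n_features]         # slowest features first
```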

STDP enables spiking neurons to detect hidden causes of their inputs

no code implementations • NeurIPS 2009 • Bernhard Nessler, Michael Pfeiffer, Wolfgang Maass

We show here that STDP, in conjunction with a stochastic soft winner-take-all (WTA) circuit, induces spiking neurons to generate through their synaptic weights implicit internal models for subclasses (or "causes") of the high-dimensional spike patterns of hundreds of pre-synaptic neurons.

Dimensionality Reduction
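
A simplified, rate-based stand-in for the spike-based mechanism: a stochastic soft winner-take-all picks one unit, and only the winner adapts, so each unit's weight vector drifts toward the mean of one input subclass. The paper's actual STDP rule differs; this only conveys the implicit-internal-model effect:

```python
import numpy as np

def wta_em_step(W, x, lr=0.05, rng=np.random.default_rng()):
    """Stochastic soft WTA with winner-only learning. Over many inputs
    each row of W converges to the mean of the inputs 'its' unit wins,
    an implicit internal model of one hidden cause."""
    u = W @ x                                   # membrane activations
    p = np.exp(u - u.max()); p /= p.sum()       # soft WTA distribution
    k = rng.choice(len(p), p=p)                 # stochastic winner
    W[k] += lr * (x - W[k])                     # winner-only update
    return W, k
```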

Hebbian Learning of Bayes Optimal Decisions

no code implementations • NeurIPS 2008 • Bernhard Nessler, Michael Pfeiffer, Wolfgang Maass

Uncertainty is omnipresent when we perceive or interact with our environment, and the Bayesian framework provides computational methods for dealing with it.

Bayesian Inference Decision Making +2

Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

no code implementations • NeurIPS 2007 • Lars Buesing, Wolfgang Maass

We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived.
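
For orientation, the quantity being optimized: the Information Bottleneck trade-off between compressing the input X and preserving information about a relevance variable Y. A sketch for discrete distributions under a T-X-Y Markov assumption; the paper's spiking learning rule itself is not reproduced here:

```python
import numpy as np

def mutual_information(p_joint):
    """I in nats for a discrete joint distribution (2-D array)."""
    pa = p_joint.sum(axis=1, keepdims=True)
    pb = p_joint.sum(axis=0, keepdims=True)
    nz = p_joint > 0
    return float(np.sum(p_joint[nz] * np.log(p_joint[nz] / (pa @ pb)[nz])))

def ib_value(p_t_given_x, p_x, p_y_given_x, beta=2.0):
    """Information Bottleneck trade-off -I(T;X) + beta*I(T;Y) for an
    encoder p(t|x), assuming the Markov chain T - X - Y."""
    p_tx = p_t_given_x * p_x[None, :]      # joint p(t, x)
    p_ty = p_tx @ p_y_given_x              # joint p(t, y)
    return -mutual_information(p_tx) + beta * mutual_information(p_ty)
```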
