Search Results for author: Mihai A. Petrovici

Found 35 papers, 10 papers with code

DelGrad: Exact gradients in spiking networks for learning transmission delays and weights

no code implementations • 30 Apr 2024 • Julian Göltz, Jimmy Weber, Laura Kriener, Peter Lake, Melika Payvand, Mihai A. Petrovici

To alleviate these issues, we propose an analytical approach for calculating exact loss gradients with respect to both synaptic weights and delays in an event-based fashion.

Lu.i -- A low-cost electronic neuron for education and outreach

1 code implementation • 25 Apr 2024 • Yannik Stradmann, Julian Göltz, Mihai A. Petrovici, Johannes Schemmel, Sebastian Billaudelle

With an increasing presence of science throughout all parts of society, there is a rising expectation for researchers to effectively communicate their work and, equally, for teachers to discuss contemporary findings in their classrooms.

Backpropagation through space, time, and the brain

no code implementations • 25 Mar 2024 • Benjamin Ellenberger, Paul Haider, Jakob Jordan, Kevin Max, Ismael Jaras, Laura Kriener, Federico Benitez, Mihai A. Petrovici

In particular, GLE exploits the ability of biological neurons to phase-shift their output rate with respect to their membrane potential, which is essential in both directions of information propagation.

Order from chaos: Interplay of development and learning in recurrent networks of structured neurons

no code implementations • 26 Feb 2024 • Laura Kriener, Kristin Völk, Ben von Hünerbein, Federico Benitez, Walter Senn, Mihai A. Petrovici

By applying a fully local, always-on plasticity rule, we are able to learn complex sequences in a recurrent network composed of two populations.

Confidence and second-order errors in cortical circuits

1 code implementation • 27 Sep 2023 • Arno Granier, Mihai A. Petrovici, Walter Senn, Katharina A. Wilmes

Minimization of cortical prediction errors has been considered a key computational goal of the cerebral cortex underlying perception, action and learning.

Gradient-based methods for spiking physical systems

no code implementations • 29 Aug 2023 • Julian Göltz, Sebastian Billaudelle, Laura Kriener, Luca Blessing, Christian Pehle, Eric Müller, Johannes Schemmel, Mihai A. Petrovici

Recent efforts have fostered significant progress towards deep learning in spiking networks, both theoretical and in silico.

Learning beyond sensations: how dreams organize neuronal representations

no code implementations • 3 Aug 2023 • Nicolas Deperrois, Mihai A. Petrovici, Walter Senn, Jakob Jordan

However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs.


A method for the ethical analysis of brain-inspired AI

no code implementations • 18 May 2023 • Michele Farisco, Gianluca Baldassarre, Emilio Cartoni, Antonia Leach, Mihai A. Petrovici, Achim Rosemann, Arleen Salles, Bernd Stahl, Sacha J. van Albada

The conclusion resulting from the application of this method is that, compared to traditional AI, brain-inspired AI raises new foundational ethical issues and some new practical ethical issues, and exacerbates some of the issues raised by traditional AI.

NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

1 code implementation • 10 Apr 2023 • Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Denis Kleyko, Noah Pacik-Nelson, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Yao-Hong Liu, Shih-Chii Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Alessandro Pierro, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Matthew Stewart, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi

The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings.


Learning efficient backprojections across cortical hierarchies in real time

1 code implementation • 20 Dec 2022 • Kevin Max, Laura Kriener, Garibaldi Pineda García, Thomas Nowotny, Ismael Jaras, Walter Senn, Mihai A. Petrovici

Models of sensory processing and learning in the cortex need to efficiently assign credit to synapses in all areas.

DELAUNAY: a dataset of abstract art for psychophysical and machine learning research

1 code implementation • 28 Jan 2022 • Camille Gontier, Jakob Jordan, Mihai A. Petrovici

This dataset provides a middle ground between natural images and artificial patterns and can thus be used in a variety of contexts, for example to investigate the sample efficiency of humans and artificial neural networks.


Variational learning of quantum ground states on spiking neuromorphic hardware

no code implementations • 30 Sep 2021 • Robert Klassert, Andreas Baumbach, Mihai A. Petrovici, Martin Gärttner

Recent research has demonstrated the usefulness of neural networks as variational ansatz functions for quantum many-body states.

Learning cortical representations through perturbed and adversarial dreaming

1 code implementation • 9 Sep 2021 • Nicolas Deperrois, Mihai A. Petrovici, Walter Senn, Jakob Jordan

We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs).


Conductance-based dendrites perform Bayes-optimal cue integration

no code implementations • 27 Apr 2021 • Jakob Jordan, João Sacramento, Willem A. M. Wybo, Mihai A. Petrovici, Walter Senn

We propose a novel, Bayesian view on the dynamics of conductance-based neurons and synapses which suggests that they are naturally equipped to optimally perform information integration.

The Yin-Yang dataset

1 code implementation • 16 Feb 2021 • Laura Kriener, Julian Göltz, Mihai A. Petrovici

The Yin-Yang dataset was developed for research on biologically plausible error backpropagation and deep learning in spiking neural networks.
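The dataset's geometry can be illustrated with a simplified sketch. The exact class boundaries are defined by the reference implementation; the version below is our own geometric approximation with hypothetical parameter names (`r_big`, `r_small`): points inside two small "eye" circles form a third class, and the rest of the disc is split along the classic yin-yang S-curve built from two half-discs.

```python
import math

def yin_yang_class(x, y, r_big=0.5, r_small=0.1):
    """Classify a point of [0, 1]^2 as yin (0), yang (1) or dot (2).

    Simplified geometry: a disc of radius r_big centred at (r_big, r_big);
    the S-curve consists of two half-discs of radius r_big / 2 stacked on
    the vertical axis, with a small "eye" circle at the centre of each.
    """
    dx, dy = x - r_big, y - r_big
    d_up = math.hypot(dx, dy - r_big / 2)    # distance to upper eye centre
    d_down = math.hypot(dx, dy + r_big / 2)  # distance to lower eye centre
    if d_up < r_small or d_down < r_small:
        return 2  # dot class
    if d_up < r_big / 2:
        return 0  # yin bulge
    if d_down < r_big / 2:
        return 1  # yang bulge
    return 0 if dx < 0 else 1  # remaining left/right split
```

Rejection-sampling points within the big disc and labelling them this way yields a small three-class task that is not linearly separable, in the spirit of the dataset.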

Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming

no code implementations • 8 Feb 2021 • Henrik D. Mettler, Maximilian Schmidt, Walter Senn, Mihai A. Petrovici, Jakob Jordan

We formulate the search for phenomenological models of synaptic plasticity as an optimization problem.

Natural-gradient learning for spiking neurons

no code implementations • 23 Nov 2020 • Elena Kreutzer, Walter M. Senn, Mihai A. Petrovici

In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights.
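The parametrization dependence can be made concrete with a small numerical sketch (ours, not the paper's): plain gradient descent on L(w) = (w − 2)² lands at different values of w depending on whether we descend in w itself or in θ with w = exp(θ), while preconditioning the θ-update with the metric g(θ) = (dw/dθ)² recovers the w-space step to first order.

```python
import math

def loss_grad_w(w):
    # dL/dw for the toy loss L(w) = (w - 2)^2
    return 2.0 * (w - 2.0)

eta, w0 = 0.1, 0.5

# 1) Plain gradient descent directly in w
w_plain = w0 - eta * loss_grad_w(w0)

# 2) Same loss and starting point, but parametrized as w = exp(theta);
#    the chain rule gives dL/dtheta = dL/dw * dw/dtheta = dL/dw * w
theta = math.log(w0)
theta_gd = theta - eta * loss_grad_w(w0) * w0
w_reparam = math.exp(theta_gd)  # lands at a different w than w_plain

# 3) "Natural-gradient"-style update: divide by the metric
#    g(theta) = (dw/dtheta)^2 = w^2, cancelling the parametrization
#    to first order in eta
theta_ng = theta - eta * (loss_grad_w(w0) * w0) / (w0 * w0)
w_natural = math.exp(theta_ng)
```

Here `w_plain` and `w_reparam` differ even though loss and start point agree, while `w_natural` is much closer to `w_plain`, illustrating why a metric-aware update is less sensitive to the chosen parametrization.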

Structural plasticity on an accelerated analog neuromorphic hardware system

no code implementations • 27 Dec 2019 • Sebastian Billaudelle, Benjamin Cramer, Mihai A. Petrovici, Korbinian Schreiber, David Kappel, Johannes Schemmel, Karlheinz Meier

In computational neuroscience, as well as in machine learning, neuromorphic devices promise an accelerated and scalable alternative to neural network simulations.


Demonstrating Advantages of Neuromorphic Computation: A Pilot Study

no code implementations • 8 Nov 2018 • Timo Wunderlich, Akos F. Kungl, Eric Müller, Andreas Hartel, Yannik Stradmann, Syed Ahmed Aamir, Andreas Grübl, Arthur Heimbrecht, Korbinian Schreiber, David Stöckel, Christian Pehle, Sebastian Billaudelle, Gerd Kiene, Christian Mauch, Johannes Schemmel, Karlheinz Meier, Mihai A. Petrovici

Neuromorphic devices represent an attempt to mimic aspects of the brain's architecture and dynamics with the aim of replicating its hallmark functional capabilities in terms of computational power, robust learning and energy efficiency.

Stochasticity from function -- why the Bayesian brain may need no noise

no code implementations • 21 Sep 2018 • Dominik Dold, Ilja Bytschok, Akos F. Kungl, Andreas Baumbach, Oliver Breitwieser, Walter Senn, Johannes Schemmel, Karlheinz Meier, Mihai A. Petrovici

An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing.
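The sampling-based view of probabilistic computing can be illustrated, under strong simplifying assumptions, by Gibbs sampling from a small Boltzmann distribution; the logistic conditional below plays the role that the neuronal activation function plays in spike-based sampling frameworks. Function and parameter names are ours, not the paper's.

```python
import math
import random

def gibbs_frequencies(W, b, n_steps, seed=42):
    """Gibbs-sample p(z) ∝ exp(0.5 zᵀWz + bᵀz) over binary states z.

    W must be symmetric with zero diagonal. Performs n_steps single-unit
    updates and returns the empirical frequency of each visited state.
    """
    rng = random.Random(seed)
    n = len(b)
    z = [0] * n
    counts = {}
    for _ in range(n_steps):
        k = rng.randrange(n)
        # membrane-potential-like drive of unit k given the others
        u = b[k] + sum(W[k][j] * z[j] for j in range(n))
        z[k] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-u)) else 0
        key = tuple(z)
        counts[key] = counts.get(key, 0) + 1
    return {state: c / n_steps for state, c in counts.items()}
```

For W = [[0, 1], [1, 0]] and b = [−0.5, −0.5], the analytical probabilities are p(0,0) = p(1,1) ≈ 0.311 and p(1,0) = p(0,1) ≈ 0.189, and the empirical frequencies converge to these values: the variability of the state trajectory is not noise but the encoding of a distribution.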


Spiking neurons with short-term synaptic plasticity form superior generative networks

no code implementations • 24 Sep 2017 • Luziwei Leng, Roman Martel, Oliver Breitwieser, Ilja Bytschok, Walter Senn, Johannes Schemmel, Karlheinz Meier, Mihai A. Petrovici

In this work, we use networks of leaky integrate-and-fire neurons that are trained to perform both discriminative and generative tasks in their forward and backward information processing paths, respectively.
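As a minimal illustration of the neuron model involved (not of the paper's training scheme), a single leaky integrate-and-fire neuron can be integrated with forward Euler; the parameter names and values below are generic textbook choices, not the paper's.

```python
def simulate_lif(i_ext, dt=1e-4, tau_m=0.02, v_rest=-0.065,
                 v_thresh=-0.05, v_reset=-0.065, r_m=1e7):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron.

    Membrane dynamics: tau_m * dV/dt = -(V - v_rest) + r_m * I(t).
    A spike is emitted (and V reset) whenever V crosses v_thresh.
    i_ext is a sequence of input currents, one per time step of dt
    seconds; returns the list of spike times in seconds.
    """
    v = v_rest
    spike_times = []
    for step, i in enumerate(i_ext):
        v += dt / tau_m * (-(v - v_rest) + r_m * i)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times
```

A constant suprathreshold current produces regular firing, while zero input leaves the neuron silent; networks of such units are what get trained in the discriminative and generative settings described above.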

Robustness from structure: Inference with hierarchical spiking networks on analog neuromorphic hardware

no code implementations • 12 Mar 2017 • Mihai A. Petrovici, Anna Schroeder, Oliver Breitwieser, Andreas Grübl, Johannes Schemmel, Karlheinz Meier

How spiking networks are able to perform probabilistic inference is an intriguing question, not only for understanding information processing in the brain, but also for transferring these computational principles to neuromorphic silicon circuits.

Stochastic inference with spiking neurons in the high-conductance state

no code implementations • 23 Oct 2016 • Mihai A. Petrovici, Johannes Bill, Ilja Bytschok, Johannes Schemmel, Karlheinz Meier

The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro.


The high-conductance state enables neural sampling in networks of LIF neurons

no code implementations • 5 Jan 2016 • Mihai A. Petrovici, Ilja Bytschok, Johannes Bill, Johannes Schemmel, Karlheinz Meier

The core idea of our approach is to separately consider two different "modes" of spiking dynamics: burst spiking and transient quiescence, in which the neuron does not spike for longer periods.


Stochastic inference with deterministic spiking neurons

no code implementations • 13 Nov 2013 • Mihai A. Petrovici, Johannes Bill, Ilja Bytschok, Johannes Schemmel, Karlheinz Meier

The seemingly stochastic transient dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference.

