no code implementations • 11 Mar 2025 • Sven Krausse, Emre Neftci, Friedrich T. Sommer, Alpha Renner
The entorhinal-hippocampal formation is the mammalian brain's navigation system, encoding both physical and abstract spaces via grid cells.
1 code implementation • 20 Jan 2025 • Jamie Lohoff, Anil Kaya, Florian Assmuth, Emre Neftci
We demonstrate the alignment of our gradients with those of gradient backpropagation on a synthetic task where e-prop gradients are exact, as well as on audio speech classification benchmarks.
no code implementations • 16 Dec 2024 • Wadjih Bencheikh, Jan Finkbeiner, Emre Neftci
Recurrent neural networks (RNNs) are valued for their computational efficiency and reduced memory requirements on tasks involving long sequence lengths but require high memory-processor bandwidth to train.
no code implementations • 7 Nov 2024 • Sanja Karilanova, Maxime Fabre, Emre Neftci, Ayça Özçelikkale
However, SNN model parameters are sensitive to temporal resolution, leading to significant performance drops when the temporal resolution of target data at the edge differs from that of the pre-deployment source data used for training, especially when fine-tuning is not possible at the edge.
1 code implementation • 11 Oct 2024 • Florian Feiler, Emre Neftci, Younes Bouhadjar
The ability to predict future events or patterns based on previous experience is crucial for many applications such as traffic control, weather forecasting, or supply chain management.
no code implementations • 11 Oct 2024 • Jan Finkbeiner, Emre Neftci
Autoregressive decoder-only transformers have become key components for scalable sequence processing and generation models.
1 code implementation • 28 Sep 2024 • Nathan Leroux, Paul-Philipp Manea, Chirag Sudarshan, Jan Finkbeiner, Sebastian Siegel, John Paul Strachan, Emre Neftci
However, GPU-stored projections must be loaded into SRAM for each new generation step, causing latency and energy bottlenecks.
no code implementations • 4 Sep 2024 • Jamie Lohoff, Jan Finkbeiner, Emre Neftci
Spiking Neural Network (SNN) simulators are essential tools for prototyping biologically inspired models and neuromorphic hardware architectures and for predicting their performance.
1 code implementation • 28 Aug 2024 • Kenneth Stewart, Michael Neumeier, Sumit Bam Shrestha, Garrick Orchard, Emre Neftci
In this work, we emulate the multiple stages of biological learning with digital neuromorphic technology that simulates the brain's neural and synaptic processes, using two stages of learning.
no code implementations • 7 Jun 2024 • Jamie Lohoff, Emre Neftci
In this paper, we present a novel method to optimize the number of necessary multiplications for Jacobian computation by leveraging deep reinforcement learning (RL) and a concept called cross-country elimination while still computing the exact Jacobian.
no code implementations • 2 May 2024 • Madison Cotteret, Hugh Greatorex, Alpha Renner, Junren Chen, Emre Neftci, Huaqiang Wu, Giacomo Indiveri, Martin Ziegler, Elisabetta Chicca
To address this, we describe a single-shot weight learning scheme to embed robust multi-timescale dynamics into attractor-based RSNNs, by exploiting the properties of high-dimensional distributed representations.
no code implementations • 15 Mar 2024 • Soikat Hasan Ahmed, Jan Finkbeiner, Emre Neftci
Event cameras offer high temporal resolution and dynamic range with minimal motion blur, making them promising for object detection tasks.
no code implementations • 7 Nov 2023 • Jan Finkbeiner, Thomas Gmeinder, Mark Pupilli, Alexander Titterton, Emre Neftci
To overcome this limitation, we explore sparse and recurrent model training on a massively parallel multiple instruction multiple data (MIMD) architecture with distributed local memory.
no code implementations • 5 Oct 2023 • Dhireesha Kudithipudi, Anurag Daram, Abdullah M. Zyarah, Fatima Tuz Zohora, James B. Aimone, Angel Yanguas-Gil, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem, Benjamin Epstein
Lifelong learning - an agent's ability to learn throughout its lifetime - is a hallmark of biological learning systems and a central challenge for artificial intelligence (AI).
1 code implementation • 23 May 2023 • Nick Alonso, Jeff Krichmar, Emre Neftci
Backpropagation (BP), the standard learning algorithm for artificial neural networks, is often considered biologically implausible.
1 code implementation • 10 Apr 2023 • Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Weijie Ke, Mina A Khoei, Denis Kleyko, Noah Pacik-Nelson, Alessandro Pierro, Philipp Stratmann, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Shih-Chii Liu, Yao-Hong Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan R. Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Kenneth Stewart, Matthew Stewart, Terrence C. Stewart, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi
To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems.
1 code implementation • 21 Mar 2023 • Nathan Leroux, Jan Finkbeiner, Emre Neftci
However, the self-attention mechanism often used in Transformers requires large time windows for each computation step and thus makes them less suitable for online signal processing compared to Recurrent Neural Networks (RNNs).
1 code implementation • 1 Jun 2022 • Nick Alonso, Beren Millidge, Jeff Krichmar, Emre Neftci
Our novel implementation considerably improves the stability of IL across learning rates, which is consistent with our theory, as a key property of implicit SGD is its stability.
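As a rough illustration of why implicit SGD is stable across learning rates (a minimal sketch on a scalar quadratic loss, not the paper's IL formulation; function names are hypothetical), compare the explicit and implicit updates on L(θ) = ½λθ², where the implicit step has the closed form θ/(1 + ηλ):

```python
def explicit_sgd_step(theta, lam, lr):
    # standard (explicit) SGD on L(theta) = 0.5 * lam * theta**2:
    # multiplies theta by (1 - lr*lam), which diverges when lr > 2/lam
    return theta - lr * lam * theta

def implicit_sgd_step(theta, lam, lr):
    # implicit SGD solves theta' = theta - lr * lam * theta' for theta',
    # giving theta / (1 + lr*lam): a contraction for every positive lr
    return theta / (1.0 + lr * lam)

lam, lr = 1.0, 3.0               # learning rate well past the explicit limit
theta_explicit = explicit_sgd_step(1.0, lam, lr)   # magnitude grows
theta_implicit = implicit_sgd_step(1.0, lam, lr)   # magnitude shrinks
```

The implicit step remains a contraction for any positive learning rate, which is the stability property the sentence above refers to.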
no code implementations • 18 May 2022 • Jinwei Xing, Takashi Nagata, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar
Although deep Reinforcement Learning (RL) has proven successful in a wide range of tasks, one challenge it faces is interpretability when applied to real-world problems.
1 code implementation • 26 Jan 2022 • Kenneth Stewart, Emre Neftci
In this work, we demonstrate gradient-based meta-learning in SNNs using the surrogate gradient method that approximates the spiking threshold function for gradient estimations.
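The surrogate gradient idea can be sketched as follows (a minimal NumPy illustration, not the paper's implementation; the "fast sigmoid" surrogate shape and the function names are assumptions):

```python
import numpy as np

def spike_forward(v, theta=1.0):
    """Forward pass: the non-differentiable spiking nonlinearity emits
    1 where the membrane potential crosses the threshold, else 0."""
    return (v >= theta).astype(float)

def spike_surrogate_grad(v, theta=1.0, beta=10.0):
    """Backward pass: a smooth 'fast sigmoid' bump centered on the
    threshold stands in for the true derivative of the step function,
    which is zero almost everywhere."""
    return 1.0 / (1.0 + beta * np.abs(v - theta)) ** 2

v = np.array([0.2, 0.9, 1.0, 1.5])
spikes = spike_forward(v)        # binary spikes in the forward pass
grads = spike_surrogate_grad(v)  # surrogate peaks at v == theta
```

In practice the surrogate replaces the step's derivative inside autodiff (e.g. a custom backward pass), letting gradients flow through spike times.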
no code implementations • 29 Sep 2021 • Kenneth Michael Stewart, Andreea Danielescu, Timothy Shea, Emre Neftci
Our novel approach consists of an event-based guided Variational Autoencoder (VAE) which encodes event-based data sensed by a Dynamic Vision Sensor (DVS) into a latent space representation suitable to compute the similarity of mid-air gesture data.
3 code implementations • 27 Sep 2021 • Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, Wei D. Lu
This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural networks.
1 code implementation • 30 Apr 2021 • Nick Alonso, Emre Neftci
This finding suggests that this gradient-based PC model may be useful for understanding how the brain solves the credit assignment problem.
1 code implementation • 29 Apr 2021 • Hin Wai Lui, Emre Neftci
To address this challenge, we present a simplified neuron model that reduces the number of state variables by 4-fold while still being compatible with gradient based training.
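A reduced neuron model of this kind can be sketched as a single-state leaky integrate-and-fire update (a generic LIF sketch, not the paper's specific model; parameter values and the soft-reset choice are assumptions):

```python
import numpy as np

def lif_step(v, x, alpha=0.9, theta=1.0):
    """One discrete-time step of a single-state leaky integrate-and-fire
    neuron: only the membrane potential v is carried between steps."""
    v = alpha * v + x                # leaky integration of input current
    s = (v >= theta).astype(float)   # threshold crossing emits a spike
    v = v - s * theta                # soft reset subtracts the threshold
    return v, s

v = np.zeros(1)
spikes = 0.0
for _ in range(5):                   # constant drive of 0.5 per step
    v, s = lif_step(v, 0.5)
    spikes += s.sum()                # two spikes over the five steps
```

Because the whole update is built from differentiable operations plus a single threshold, it composes directly with surrogate-gradient training.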
no code implementations • 31 Mar 2021 • Kenneth Stewart, Andreea Danielescu, Timothy Shea, Emre Neftci
We also implement the encoder component of the model on neuromorphic hardware and discuss the potential for our algorithm to enable real-time learning from real-world event data.
no code implementations • 20 Feb 2021 • Sourav Dutta, Georgios Detorakis, Abhishek Khanna, Benjamin Grisafe, Emre Neftci, Suman Datta
We experimentally show that the inherent stochastic switching of the selector element between the insulator and metallic state introduces a multiplicative stochastic noise within the synapses of NSM that samples the conductance states of the FeFET, both during learning and inference.
1 code implementation • 10 Feb 2021 • Jinwei Xing, Takashi Nagata, Kexin Chen, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar
To address this issue, we propose a two-stage RL agent that first learns a latent unified state representation (LUSR) consistent across multiple domains, and then performs RL training in one source domain based on LUSR in the second stage.
no code implementations • 3 Aug 2020 • Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci
We present the Surrogate-gradient Online Error-triggered Learning (SOEL) system for online few-shot learning on neuromorphic processors.
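The error-triggered part of such a scheme can be sketched as a delta-rule update that is only applied when the error magnitude crosses a threshold (a minimal sketch, not the SOEL implementation; names and the delta-rule form are assumptions):

```python
import numpy as np

def error_triggered_update(w, x, err, theta_err=0.1, lr=0.01):
    """Apply a delta-rule weight update only when the error magnitude
    exceeds a trigger threshold; sub-threshold errors cause no write,
    which saves memory traffic on neuromorphic hardware."""
    if np.abs(err) >= theta_err:
        w = w + lr * err * x
    return w

w = error_triggered_update(1.0, x=1.0, err=0.05)  # below threshold: no update
w_big = error_triggered_update(1.0, x=1.0, err=0.5)  # above threshold: update fires
```

Gating plasticity on error events is what makes the rule "online" and cheap: most timesteps trigger no synaptic write at all.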
1 code implementation • NeurIPS 2019 • Georgios Detorakis, Sourav Dutta, Abhishek Khanna, Matthew Jerry, Suman Datta, Emre Neftci
Multiplicative stochasticity such as Dropout improves the robustness and generalizability of deep neural networks.
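Dropout as multiplicative noise can be made concrete in a few lines (standard inverted dropout, shown here only to illustrate the "multiplicative stochasticity" framing; it is not the paper's hardware noise model):

```python
import numpy as np

def multiplicative_noise(a, p=0.5, rng=None):
    """Inverted dropout viewed as multiplicative Bernoulli noise: each
    activation is zeroed with probability p, and the survivors are
    scaled by 1/(1-p) so the expectation is unchanged."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = (rng.random(a.shape) >= p).astype(float)
    return a * mask / (1.0 - p)

rng = np.random.default_rng(0)
out = multiplicative_noise(np.ones(100_000), p=0.5, rng=rng)
```

The paper's point is that physically stochastic devices inject noise of this multiplicative form for free, during both learning and inference.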
no code implementations • 11 Oct 2019 • Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci
Recent work suggests that synaptic plasticity dynamics in biological models of neurons and neuromorphic hardware are compatible with gradient-based learning (Neftci et al., 2019).
no code implementations • 9 Apr 2019 • Jacques Kaiser, Alexander Friedrich, J. Camilo Vasquez Tieck, Daniel Reichard, Arne Roennau, Emre Neftci, Rüdiger Dillmann
In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements.
3 code implementations • 27 Nov 2018 • Jacques Kaiser, Hesham Mostafa, Emre Neftci
A relatively small body of work, however, discusses similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks.
1 code implementation • 19 Jun 2018 • Georgios Detorakis, Travis Bartley, Emre Neftci
It operates in two phases, the forward (or free) phase, where the data are fed to the network, and a backward (or clamped) phase, where the target signals are clamped to the output layer of the network and the feedback signals are transformed through the transpose synaptic weight matrices.
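The resulting weight update contrasts the correlations measured in the two phases, in the style of contrastive Hebbian learning (a minimal sketch of that general update, not the paper's exact rule; names are hypothetical):

```python
import numpy as np

def contrastive_update(W, x, h_free, h_clamped, lr=0.1):
    """Two-phase Hebbian-style update: strengthen input-hidden
    correlations observed in the clamped phase and weaken those from
    the free phase; the update vanishes when the two phases agree."""
    return W + lr * (np.outer(h_clamped, x) - np.outer(h_free, x))

W = np.zeros((2, 3))
x = np.array([1.0, 0.0, 1.0])
h = np.array([1.0, 0.0])
W_same = contrastive_update(W, x, h_free=h, h_clamped=h)       # no change
W_diff = contrastive_update(W, x, h_free=np.zeros(2), h_clamped=h)
```

When the free-phase activity already matches the clamped targets, the two outer products cancel and learning stops, which is the fixed-point condition of such two-phase schemes.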
no code implementations • 29 Sep 2017 • Georgios Detorakis, Sadique Sheik, Charles Augustine, Somnath Paul, Bruno U. Pedroni, Nikil Dutt, Jeffrey Krichmar, Gert Cauwenberghs, Emre Neftci
Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware.
1 code implementation • 16 Dec 2016 • Emre Neftci, Charles Augustine, Somnath Paul, Georgios Detorakis
Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations in neuromorphic computing hardware.
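The random-feedback ingredient of such a rule can be sketched as follows (a rate-based feedback-alignment-style sketch, not the event-driven eRBP rule itself; the ReLU-style gating and all names are assumptions):

```python
import numpy as np

def random_bp_hidden_update(W1, B, x, err, h, lr=0.01):
    """Error-modulated update for a hidden layer: the output error is
    projected back through a fixed random matrix B instead of the
    transposed forward weights, gated by a ReLU-style derivative of
    the hidden activity. Zero error triggers no plasticity."""
    delta_h = (B @ err) * (h > 0)
    return W1 - lr * np.outer(delta_h, x)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
B = rng.normal(size=(4, 2))      # fixed random feedback weights
x = rng.normal(size=3)
h = rng.normal(size=4)
W1_new = random_bp_hidden_update(W1, B, x, err=np.zeros(2), h=h)
```

Replacing the weight transpose with a fixed random matrix removes the weight-transport requirement, which is what makes the rule implementable with local synaptic plasticity on neuromorphic hardware.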
no code implementations • 27 Sep 2016 • S. Burc Eryilmaz, Emre Neftci, Siddharth Joshi, Sang-Bum Kim, Matthew BrightSky, Hsiang-Lan Lung, Chung Lam, Gert Cauwenberghs, H. -S. Philip Wong
Current large scale implementations of deep learning and data mining require thousands of processors, massive amounts of off-chip memory, and consume gigajoules of energy.
no code implementations • 11 Jul 2016 • Bruno U. Pedroni, Sadique Sheik, Siddharth Joshi, Georgios Detorakis, Somnath Paul, Charles Augustine, Emre Neftci, Gert Cauwenberghs
We present a novel method for realizing both causal and acausal weight updates using only forward lookup access of the synaptic connectivity table, permitting memory-efficient implementation.
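One common way to get both update directions with forward-only access is to keep decaying spike traces per neuron (a generic trace-based STDP sketch shown for intuition; it is not the paper's lookup-table method, and all parameters are assumptions):

```python
def stdp_step(w, pre, post, x_pre, x_post,
              a_plus=0.01, a_minus=0.012, decay=0.9):
    """Pair-based STDP with eligibility traces: each neuron carries a
    decaying trace of its own spikes, so the causal (pre-before-post)
    and acausal (post-before-pre) updates are both applied at spike
    time from current state, with no backward search through history."""
    x_pre = decay * x_pre + pre      # presynaptic spike trace
    x_post = decay * x_post + post   # postsynaptic spike trace
    w += a_plus * x_pre * post       # causal potentiation at post spike
    w -= a_minus * x_post * pre      # acausal depression at pre spike
    return w, x_pre, x_post

w, xp, xq = 0.0, 0.0, 0.0
w, xp, xq = stdp_step(w, 1.0, 0.0, xp, xq)  # pre fires first
w, xp, xq = stdp_step(w, 0.0, 1.0, xp, xq)  # post fires next: w increases
```

The causal pairing (pre then post) potentiates and the reversed order depresses, yet every update reads only the current traces and the forward synaptic table.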
no code implementations • 16 Jan 2016 • Peter U. Diehl, Bruno U. Pedroni, Andrew Cassidy, Paul Merolla, Emre Neftci, Guido Zarrella
We present an approach to constructing a neuromorphic device that responds to language input by producing neuron spikes in proportion to the strength of the appropriate positive or negative emotional response.
no code implementations • 16 Jan 2016 • Peter U. Diehl, Guido Zarrella, Andrew Cassidy, Bruno U. Pedroni, Emre Neftci
We find that short synaptic delays are sufficient to implement the dynamical (temporal) aspect of the RNN in the question classification task.
no code implementations • 23 Dec 2014 • Maruan Al-Shedivat, Emre Neftci, Gert Cauwenberghs
These mappings are encoded in a distribution over a (possibly infinite) collection of models.
no code implementations • 5 Nov 2013 • Emre Neftci, Srinjoy Das, Bruno Pedroni, Kenneth Kreutz-Delgado, Gert Cauwenberghs
However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate.
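For reference, the discrete CD update in question can be sketched as CD-1 for a binary RBM (a textbook sketch with biases omitted for brevity, not the paper's neural-sampling formulation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, lr=0.1, rng=None):
    """One Contrastive Divergence (CD-1) step for a binary RBM:
    sample hidden units given the data v0, reconstruct visibles,
    and nudge W toward data statistics and away from model statistics."""
    if rng is None:
        rng = np.random.default_rng(0)
    h0 = (rng.random(W.shape[0]) < sigmoid(W @ v0)).astype(float)
    v1 = (rng.random(W.shape[1]) < sigmoid(W.T @ h0)).astype(float)
    h1 = sigmoid(W @ v1)                      # mean-field final hidden step
    return W + lr * (np.outer(h0, v0) - np.outer(h1, v1))

rng = np.random.default_rng(0)
W = np.zeros((3, 4))
v0 = np.array([1.0, 0.0, 1.0, 0.0])
W_new = cd1_update(W, v0, rng=rng)
```

The discrete Gibbs sampling and exact outer-product arithmetic here are precisely the operations that do not map directly onto continuous-time spiking dynamics, motivating the paper's event-driven alternative.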