Search Results for author: Emre Neftci

Found 41 papers, 17 papers with code

A Grid Cell-Inspired Structured Vector Algebra for Cognitive Maps

no code implementations11 Mar 2025 Sven Krausse, Emre Neftci, Friedrich T. Sommer, Alpha Renner

The entorhinal-hippocampal formation is the mammalian brain's navigation system, encoding both physical and abstract spaces via grid cells.

A Truly Sparse and General Implementation of Gradient-Based Synaptic Plasticity

1 code implementation20 Jan 2025 Jamie Lohoff, Anil Kaya, Florian Assmuth, Emre Neftci

We demonstrate the alignment of our gradients with those computed by gradient backpropagation on a synthetic task where e-prop gradients are exact, as well as on audio speech classification benchmarks.
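The e-prop-style plasticity this entry refers to combines a purely local, per-synapse eligibility trace with a top-down learning signal. A minimal dense sketch is below; the trace constants, the pseudo-derivative shape, and the random learning signal are illustrative assumptions, not the paper's sparse implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3

w = rng.normal(0.0, 0.5, (n_out, n_in))
elig = np.zeros_like(w)          # one eligibility trace per synapse
alpha = 0.9                      # membrane / trace decay (assumed)
lr = 1e-2

def pseudo_derivative(v, thr=1.0, gamma=0.3):
    # Surrogate derivative of the spike function, peaked at threshold
    return gamma * np.maximum(0.0, 1.0 - np.abs(v - thr))

v = np.zeros(n_out)              # membrane potentials
for t in range(50):
    x = (rng.random(n_in) < 0.2).astype(float)      # Poisson-like input spikes
    v = alpha * v + w @ x
    spikes = (v >= 1.0).astype(float)
    # e-prop-style eligibility: local pre-activity times the pseudo-derivative,
    # low-pass filtered over time
    elig = alpha * elig + np.outer(pseudo_derivative(v), x)
    v -= spikes                                      # soft reset after spiking
    # A top-down learning signal (random here) gates the trace into an update
    learning_signal = rng.normal(0.0, 1.0, n_out)
    w += lr * learning_signal[:, None] * elig

print(w.shape)  # (3, 4)
```

In the dense form above every trace is updated each step; the point of a truly sparse implementation is to touch only synapses whose pre- or post-synaptic variables are currently active.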

Optimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory

no code implementations16 Dec 2024 Wadjih Bencheikh, Jan Finkbeiner, Emre Neftci

Recurrent neural networks (RNNs) are valued for their computational efficiency and reduced memory requirements on tasks involving long sequence lengths but require high memory-processor bandwidth to train.

Computational Efficiency
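Gradient checkpointing, the core idea of this entry, trades recomputation for memory: only every k-th hidden state is stored, and intermediate states are replayed from the nearest checkpoint when the backward pass needs them. A minimal sketch for a recurrent model follows; the cell, sizes, and checkpoint interval are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, dim = 12, 8
W = rng.normal(0.0, 0.3, (dim, dim))

def step(h, t):
    # One recurrent step (a tanh RNN cell with a fixed scalar input, for illustration)
    return np.tanh(W @ h + 0.1 * t)

def forward_with_checkpoints(h0, every=4):
    """Store only every `every`-th hidden state instead of all of them."""
    ckpts = {0: h0}
    h = h0
    for t in range(1, n_steps + 1):
        h = step(h, t)
        if t % every == 0:
            ckpts[t] = h
    return h, ckpts

def recompute_segment(ckpts, t, every=4):
    """Recover an unsaved state by replaying forward from the nearest checkpoint."""
    base = (t // every) * every
    h = ckpts[base]
    for s in range(base + 1, t + 1):
        h = step(h, s)
    return h

h_final, ckpts = forward_with_checkpoints(np.zeros(dim))
h7 = recompute_segment(ckpts, 7)   # replayed from the checkpoint at t = 4
```

Here 4 states are kept instead of 13, at the cost of up to `every - 1` extra forward steps per recomputed state; the paper's contribution is choosing such schedules optimally when checkpoints live in off-chip memory.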

Zero-Shot Temporal Resolution Domain Adaptation for Spiking Neural Networks

no code implementations7 Nov 2024 Sanja Karilanova, Maxime Fabre, Emre Neftci, Ayça Özçelikkale

However, SNN model parameters are sensitive to temporal resolution, leading to significant performance drops when the temporal resolution of the target data at the edge does not match that of the pre-deployment source data used for training, especially when fine-tuning is not possible at the edge.

Domain Adaptation Image Classification +2
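One concrete source of this sensitivity is that discrete-time LIF decay factors bake the training timestep into the weights' operating point. A common remedy, sketched here under assumed parameter names (not necessarily the paper's method), is to recompute the decay factors from the underlying continuous time constants for the new resolution:

```python
import math

def lif_decay(tau_mem: float, dt: float) -> float:
    """Discrete-time membrane decay factor for a leaky integrate-and-fire neuron."""
    return math.exp(-dt / tau_mem)

tau = 20e-3                           # 20 ms membrane time constant (assumed)
alpha_train = lif_decay(tau, 1e-3)    # trained at 1 ms temporal resolution
alpha_deploy = lif_decay(tau, 4e-3)   # deployed at 4 ms temporal resolution

# Reusing alpha_train at the coarser resolution under-decays the membrane;
# recomputing from tau keeps the continuous-time dynamics consistent.
print(round(alpha_train, 4), round(alpha_deploy, 4))  # 0.9512 0.8187
```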

Unsupervised Learning of Spatio-Temporal Patterns in Spiking Neuronal Networks

1 code implementation11 Oct 2024 Florian Feiler, Emre Neftci, Younes Bouhadjar

The ability to predict future events or patterns based on previous experience is crucial for many applications such as traffic control, weather forecasting, or supply chain management.

Management Weather Forecasting

On-Chip Learning via Transformer In-Context Learning

no code implementations11 Oct 2024 Jan Finkbeiner, Emre Neftci

Autoregressive decoder-only transformers have become key components for scalable sequence processing and generation models.

Decoder In-Context Learning

Analog In-Memory Computing Attention Mechanism for Fast and Energy-Efficient Large Language Models

1 code implementation28 Sep 2024 Nathan Leroux, Paul-Philipp Manea, Chirag Sudarshan, Jan Finkbeiner, Sebastian Siegel, John Paul Strachan, Emre Neftci

However, GPU-stored projections must be loaded into SRAM for each new generation step, causing latency and energy bottlenecks.

SNNAX -- Spiking Neural Networks in JAX

no code implementations4 Sep 2024 Jamie Lohoff, Jan Finkbeiner, Emre Neftci

Spiking Neural Network (SNN) simulators are essential tools for prototyping biologically inspired models and neuromorphic hardware architectures and for predicting their performance.

Emulating Brain-like Rapid Learning in Neuromorphic Edge Computing

1 code implementation28 Aug 2024 Kenneth Stewart, Michael Neumeier, Sumit Bam Shrestha, Garrick Orchard, Emre Neftci

In this work, we use digital neuromorphic technology that simulates the neural and synaptic processes of the brain to emulate learning in two stages, mirroring the multiple stages of biological learning.

Edge-computing One-Shot Learning +1

Optimizing Automatic Differentiation with Deep Reinforcement Learning

no code implementations7 Jun 2024 Jamie Lohoff, Emre Neftci

In this paper, we present a novel method to optimize the number of necessary multiplications for Jacobian computation by leveraging deep reinforcement learning (RL) and a concept called cross-country elimination while still computing the exact Jacobian.

Computational Efficiency Deep Reinforcement Learning +2
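The cost this entry optimizes can be seen already in a chain of Jacobians: the number of scalar multiplications depends on the elimination (multiplication) order. The sketch below compares the two extreme orders for an assumed 3-stage composition; cross-country elimination, as in the paper, additionally considers mixed orders that can beat both:

```python
def matmul_cost(a, b):
    # Multiplying an (m x k) by a (k x n) matrix costs m*k*n scalar multiplications
    (m, k1), (k2, n) = a, b
    assert k1 == k2
    return m * k1 * n

# Assumed Jacobian shapes (out, in) of a 3-stage composition f3 . f2 . f1
J1, J2, J3 = (100, 5), (100, 100), (2, 100)

# Reverse-mode-like order: (J3 @ J2) @ J1
reverse = matmul_cost(J3, J2) + matmul_cost((J3[0], J2[1]), J1)
# Forward-mode-like order: J3 @ (J2 @ J1)
forward = matmul_cost(J2, J1) + matmul_cost(J3, (J2[0], J1[1]))
print(reverse, forward)  # 21000 51000
```

With few outputs and many intermediates, the reverse-like order wins here; the RL agent in the paper searches the much larger space of vertex elimination orders on the computational graph.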

Distributed Representations Enable Robust Multi-Timescale Symbolic Computation in Neuromorphic Hardware

no code implementations2 May 2024 Madison Cotteret, Hugh Greatorex, Alpha Renner, Junren Chen, Emre Neftci, Huaqiang Wu, Giacomo Indiveri, Martin Ziegler, Elisabetta Chicca

To address this, we describe a single-shot weight learning scheme to embed robust multi-timescale dynamics into attractor-based RSNNs, by exploiting the properties of high-dimensional distributed representations.

A Hybrid SNN-ANN Network for Event-based Object Detection with Spatial and Temporal Attention

no code implementations15 Mar 2024 Soikat Hasan Ahmed, Jan Finkbeiner, Emre Neftci

Event cameras offer high temporal resolution and dynamic range with minimal motion blur, making them promising for object detection tasks.

object-detection Object Detection

Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models

no code implementations7 Nov 2023 Jan Finkbeiner, Thomas Gmeinder, Mark Pupilli, Alexander Titterton, Emre Neftci

To overcome this limitation, we explore sparse and recurrent model training on a massively parallel multiple instruction multiple data (MIMD) architecture with distributed local memory.

Efficient Neural Network

Design Principles for Lifelong Learning AI Accelerators

no code implementations5 Oct 2023 Dhireesha Kudithipudi, Anurag Daram, Abdullah M. Zyarah, Fatima Tuz Zohora, James B. Aimone, Angel Yanguas-Gil, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem, Benjamin Epstein

Lifelong learning - an agent's ability to learn throughout its lifetime - is a hallmark of biological learning systems and a central challenge for artificial intelligence (AI).

Lifelong learning

Understanding and Improving Optimization in Predictive Coding Networks

1 code implementation23 May 2023 Nick Alonso, Jeff Krichmar, Emre Neftci

Backpropagation (BP), the standard learning algorithm for artificial neural networks, is often considered biologically implausible.

NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

1 code implementation10 Apr 2023 Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Weijie Ke, Mina A Khoei, Denis Kleyko, Noah Pacik-Nelson, Alessandro Pierro, Philipp Stratmann, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Shih-Chii Liu, Yao-Hong Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan R. Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Kenneth Stewart, Matthew Stewart, Terrence C. Stewart, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi

To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems.

Benchmarking

Online Transformers with Spiking Neurons for Fast Prosthetic Hand Control

1 code implementation21 Mar 2023 Nathan Leroux, Jan Finkbeiner, Emre Neftci

However, the self-attention mechanism often used in Transformers requires large time windows for each computation step and thus makes them less suitable for online signal processing compared to Recurrent Neural Networks (RNNs).

Position regression

A Theoretical Framework for Inference Learning

1 code implementation1 Jun 2022 Nick Alonso, Beren Millidge, Jeff Krichmar, Emre Neftci

Our novel implementation considerably improves the stability of IL across learning rates, which is consistent with our theory, as a key property of implicit SGD is its stability.

Policy Distillation with Selective Input Gradient Regularization for Efficient Interpretability

no code implementations18 May 2022 Jinwei Xing, Takashi Nagata, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar

Although deep Reinforcement Learning (RL) has proven successful in a wide range of tasks, one challenge it faces is interpretability when applied to real-world problems.

Autonomous Driving Deep Reinforcement Learning +1

Meta-learning Spiking Neural Networks with Surrogate Gradient Descent

1 code implementation26 Jan 2022 Kenneth Stewart, Emre Neftci

In this work, we demonstrate gradient-based meta-learning in SNNs using the surrogate gradient method that approximates the spiking threshold function for gradient estimations.

Meta-Learning
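The surrogate gradient method named in this entry keeps the hard spike in the forward pass but substitutes a smooth function for the Heaviside derivative in the backward pass. A minimal sketch (the surrogate shape and scale `gamma` are common choices, assumed rather than taken from the paper):

```python
import numpy as np

def spike_forward(v, thr=1.0):
    # Non-differentiable spike nonlinearity used in the forward pass
    return (v >= thr).astype(float)

def spike_surrogate_grad(v, thr=1.0, gamma=0.5):
    # Smooth stand-in for the Heaviside derivative, used only in the backward pass
    return gamma / (1.0 + np.abs(v - thr)) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])
s = spike_forward(v)            # [0., 0., 1., 1.]
g = spike_surrogate_grad(v)     # finite everywhere, peaked around the threshold
```

Because the surrogate is well defined everywhere, the whole spiking network becomes trainable with standard gradient descent, which is what makes gradient-based meta-learning (outer-loop differentiation through inner-loop updates) applicable to SNNs.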

Encoding Event-Based Gesture Data With a Hybrid SNN Guided Variational Auto-encoder

no code implementations29 Sep 2021 Kenneth Michael Stewart, Andreea Danielescu, Timothy Shea, Emre Neftci

Our novel approach consists of an event-based guided Variational Autoencoder (VAE) which encodes event-based data sensed by a Dynamic Vision Sensor (DVS) into a latent space representation suitable to compute the similarity of mid-air gesture data.

Gesture Recognition Self-Supervised Learning

Training Spiking Neural Networks Using Lessons From Deep Learning

3 code implementations27 Sep 2021 Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, Wei D. Lu

This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural networks.

Deep Learning

Tightening the Biological Constraints on Gradient-Based Predictive Coding

1 code implementation30 Apr 2021 Nick Alonso, Emre Neftci

This finding suggests that this gradient-based PC model may be useful for understanding how the brain solves the credit assignment problem.

Hessian Aware Quantization of Spiking Neural Networks

1 code implementation29 Apr 2021 Hin Wai Lui, Emre Neftci

To address this challenge, we present a simplified neuron model that reduces the number of state variables by 4-fold while still being compatible with gradient-based training.

Quantization

Encoding Event-Based Data With a Hybrid SNN Guided Variational Auto-encoder in Neuromorphic Hardware

no code implementations31 Mar 2021 Kenneth Stewart, Andreea Danielescu, Timothy Shea, Emre Neftci

We also implement the encoder component of the model on neuromorphic hardware and discuss the potential for our algorithm to enable real-time learning from real-world event data.

Clustering Gesture Recognition

Neural Sampling Machine with Stochastic Synapse allows Brain-like Learning and Inference

no code implementations20 Feb 2021 Sourav Dutta, Georgios Detorakis, Abhishek Khanna, Benjamin Grisafe, Emre Neftci, Suman Datta

We experimentally show that the inherent stochastic switching of the selector element between the insulator and metallic state introduces a multiplicative stochastic noise within the synapses of NSM that samples the conductance states of the FeFET, both during learning and inference.

Bayesian Inference Decision Making +1

Domain Adaptation In Reinforcement Learning Via Latent Unified State Representation

1 code implementation10 Feb 2021 Jinwei Xing, Takashi Nagata, Kexin Chen, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar

To address this issue, we propose a two-stage RL agent: in the first stage it learns a latent unified state representation (LUSR) that is consistent across multiple domains, and in the second stage it performs RL training in one source domain based on the LUSR.

Autonomous Driving Deep Reinforcement Learning +6

Online Few-shot Gesture Learning on a Neuromorphic Processor

no code implementations3 Aug 2020 Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci

We present the Surrogate-gradient Online Error-triggered Learning (SOEL) system for online few-shot learning on neuromorphic processors.

Few-Shot Learning Gesture Recognition +1

On-chip Few-shot Learning with Surrogate Gradient Descent on a Neuromorphic Processor

no code implementations11 Oct 2019 Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci

Recent work suggests that synaptic plasticity dynamics in biological models of neurons and neuromorphic hardware are compatible with gradient-based learning (Neftci et al., 2019).

Few-Shot Learning Transfer Learning

Embodied Neuromorphic Vision with Event-Driven Random Backpropagation

no code implementations9 Apr 2019 Jacques Kaiser, Alexander Friedrich, J. Camilo Vasquez Tieck, Daniel Reichard, Arne Roennau, Emre Neftci, Rüdiger Dillmann

In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements.

Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)

3 code implementations27 Nov 2018 Jacques Kaiser, Hesham Mostafa, Emre Neftci

A relatively smaller body of work, however, discusses similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks.

Contrastive Hebbian Learning with Random Feedback Weights

1 code implementation19 Jun 2018 Georgios Detorakis, Travis Bartley, Emre Neftci

It operates in two phases, the forward (or free) phase, where the data are fed to the network, and a backward (or clamped) phase, where the target signals are clamped to the output layer of the network and the feedback signals are transformed through the transpose synaptic weight matrices.
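The two-phase rule described in this entry can be sketched as a contrastive Hebbian update with fixed random feedback weights in place of the transposed forward weights. Sizes, the feedback gain `gamma`, and the learning rate below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 5, 4, 3
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback (replaces W2.T)
eta, gamma = 0.1, 0.5

x = rng.random(n_in)
target = np.array([1.0, 0.0, 0.0])

# Free phase: data fed forward, network runs without the target
h_free = np.tanh(W1 @ x)
y_free = np.tanh(W2 @ h_free)

# Clamped phase: target clamped at the output, fed back through random weights B
h_clamp = np.tanh(W1 @ x + gamma * B @ target)
y_clamp = target

# Contrastive Hebbian updates: clamped minus free correlations
W1 += eta * (np.outer(h_clamp, x) - np.outer(h_free, x))
W2 += eta * (np.outer(y_clamp, h_clamp) - np.outer(y_free, h_free))
```

Replacing the transpose with a fixed random matrix removes the weight-transport requirement, which is the biological-plausibility point the paper makes.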

Neuromorphic Deep Learning Machines

1 code implementation16 Dec 2016 Emre Neftci, Charles Augustine, Somnath Paul, Georgios Detorakis

Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations in neuromorphic computing hardware.

Deep Learning

Training a Probabilistic Graphical Model with Resistive Switching Electronic Synapses

no code implementations27 Sep 2016 S. Burc Eryilmaz, Emre Neftci, Siddharth Joshi, Sang-Bum Kim, Matthew BrightSky, Hsiang-Lan Lung, Chung Lam, Gert Cauwenberghs, H. -S. Philip Wong

Current large scale implementations of deep learning and data mining require thousands of processors, massive amounts of off-chip memory, and consume gigajoules of energy.

Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity

no code implementations11 Jul 2016 Bruno U. Pedroni, Sadique Sheik, Siddharth Joshi, Georgios Detorakis, Somnath Paul, Charles Augustine, Emre Neftci, Gert Cauwenberghs

We present a novel method for realizing both causal and acausal weight updates using only forward lookup access of the synaptic connectivity table, permitting memory-efficient implementation.

TrueHappiness: Neuromorphic Emotion Recognition on TrueNorth

no code implementations16 Jan 2016 Peter U. Diehl, Bruno U. Pedroni, Andrew Cassidy, Paul Merolla, Emre Neftci, Guido Zarrella

We present an approach to constructing a neuromorphic device that responds to language input by producing neuron spikes in proportion to the strength of the appropriate positive or negative emotional response.

Emotion Recognition Sentiment Analysis

Learning Non-deterministic Representations with Energy-based Ensembles

no code implementations23 Dec 2014 Maruan Al-Shedivat, Emre Neftci, Gert Cauwenberghs

These mappings are encoded in a distribution over a (possibly infinite) collection of models.

One-Shot Learning

Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

no code implementations5 Nov 2013 Emre Neftci, Srinjoy Das, Bruno Pedroni, Kenneth Kreutz-Delgado, Gert Cauwenberghs

However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate.

Dimensionality Reduction
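The discrete, exact-arithmetic update the abstract contrasts with a spiking substrate is standard CD-1 for a Bernoulli RBM, sketched below (sizes and learning rate are illustrative; biases are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)
n_vis, n_hid = 6, 4
W = rng.normal(0, 0.1, (n_hid, n_vis))
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

v0 = (rng.random(n_vis) < 0.5).astype(float)   # a binary data vector

# Positive phase: hidden units driven by the data
ph0 = sigmoid(W @ v0)
h0 = (rng.random(n_hid) < ph0).astype(float)

# Negative phase: one step of Gibbs sampling (CD-1)
pv1 = sigmoid(W.T @ h0)
v1 = (rng.random(n_vis) < pv1).astype(float)
ph1 = sigmoid(W @ v1)

# Batched, exact CD weight update: positive minus negative correlations
W += lr * (np.outer(ph0, v0) - np.outer(ph1, v1))
```

The event-driven variant in the paper replaces these synchronous phases and exact products with continuous-time spiking dynamics and local plasticity.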
