Search Results for author: Emre Neftci

Found 30 papers, 12 papers with code

A Hybrid SNN-ANN Network for Event-based Object Detection with Spatial and Temporal Attention

no code implementations • 15 Mar 2024 • Soikat Hasan Ahmed, Jan Finkbeiner, Emre Neftci

Event cameras offer high temporal resolution and a high dynamic range with minimal motion blur, making them promising for object detection tasks.

Object Detection

Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models

no code implementations • 7 Nov 2023 • Jan Finkbeiner, Thomas Gmeinder, Mark Pupilli, Alexander Titterton, Emre Neftci

To overcome this limitation, we explore sparse and recurrent model training on a massively parallel multiple instruction multiple data (MIMD) architecture with distributed local memory.

Efficient Neural Network

Design Principles for Lifelong Learning AI Accelerators

no code implementations • 5 Oct 2023 • Dhireesha Kudithipudi, Anurag Daram, Abdullah M. Zyarah, Fatima Tuz Zohora, James B. Aimone, Angel Yanguas-Gil, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem, Benjamin Epstein

Lifelong learning - an agent's ability to learn throughout its lifetime - is a hallmark of biological learning systems and a central challenge for artificial intelligence (AI).

Understanding and Improving Optimization in Predictive Coding Networks

1 code implementation • 23 May 2023 • Nick Alonso, Jeff Krichmar, Emre Neftci

Backpropagation (BP), the standard learning algorithm for artificial neural networks, is often considered biologically implausible.

NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

1 code implementation • 10 Apr 2023 • Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Denis Kleyko, Noah Pacik-Nelson, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Yao-Hong Liu, Shih-Chii Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Alessandro Pierro, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Matthew Stewart, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi

The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings.

Benchmarking

Online Transformers with Spiking Neurons for Fast Prosthetic Hand Control

1 code implementation • 21 Mar 2023 • Nathan Leroux, Jan Finkbeiner, Emre Neftci

However, the self-attention mechanism often used in Transformers requires large time windows for each computation step, which makes them less suitable for online signal processing than Recurrent Neural Networks (RNNs).

Position regression
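
The online constraint the paper targets can be made concrete with a generic linear-attention recurrence, a minimal sketch of the broader online-attention idea rather than the paper's spiking formulation (the feature map and dimensions are illustrative): it processes one token per step with constant memory instead of re-reading a growing time window.

import numpy as np

d = 16  # feature dimension (illustrative)

def feature_map(x):
    # Simple positive feature map, as in generic linear attention.
    return np.maximum(x, 0.0) + 1.0

rng = np.random.default_rng(0)
S = np.zeros((d, d))   # running sum of k v^T
z = np.zeros(d)        # running sum of k

for t in range(100):                      # token stream
    x = rng.normal(size=d)                # stand-in for one input token
    q, k, v = x, x, x                     # untrained projections, for brevity
    phi_k = feature_map(k)
    S += np.outer(phi_k, v)               # O(1)-memory recurrent state update
    z += phi_k
    phi_q = feature_map(q)
    out = phi_q @ S / (phi_q @ z + 1e-8)  # attention output at this step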

A Theoretical Framework for Inference Learning

1 code implementation • 1 Jun 2022 • Nick Alonso, Beren Millidge, Jeff Krichmar, Emre Neftci

Our novel implementation considerably improves the stability of IL across learning rates, which is consistent with our theory, as a key property of implicit SGD is its stability.
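
The stability property can be illustrated on scalar least squares, where the implicit SGD update has a closed form; this is a minimal sketch of the general principle, not the paper's inference learning algorithm.

# Explicit vs. implicit SGD on the loss 0.5 * (x*theta - y)^2.
def explicit_step(theta, x, y, lr):
    return theta - lr * x * (x * theta - y)

def implicit_step(theta, x, y, lr):
    # Solves theta' = theta - lr * x * (x*theta' - y) for theta'.
    return (theta + lr * x * y) / (1.0 + lr * x * x)

x, y, lr = 2.0, 1.0, 10.0        # deliberately huge learning rate
te, ti = 0.0, 0.0
for _ in range(20):
    te = explicit_step(te, x, y, lr)   # diverges at this learning rate
    ti = implicit_step(ti, x, y, lr)   # smoothly approaches y/x = 0.5
print(te, ti)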

Policy Distillation with Selective Input Gradient Regularization for Efficient Interpretability

no code implementations • 18 May 2022 • Jinwei Xing, Takashi Nagata, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar

Although deep Reinforcement Learning (RL) has proven successful in a wide range of tasks, one challenge it faces is interpretability when applied to real-world problems.

Autonomous Driving · Reinforcement Learning (RL)

Meta-learning Spiking Neural Networks with Surrogate Gradient Descent

no code implementations • 26 Jan 2022 • Kenneth Stewart, Emre Neftci

In this work, we demonstrate gradient-based meta-learning in SNNs using the surrogate gradient method that approximates the spiking threshold function for gradient estimations.

Meta-Learning
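
The surrogate gradient idea itself is compact: the forward pass keeps the hard spiking threshold, while the backward pass substitutes a smooth derivative. A minimal PyTorch sketch with a fast-sigmoid surrogate (a common choice; the paper's exact surrogate may differ):

import torch

class SurrGradSpike(torch.autograd.Function):
    scale = 10.0  # surrogate sharpness (illustrative value)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()           # hard threshold: spike if v > 0

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative in place of the Dirac delta.
        sg = 1.0 / (SurrGradSpike.scale * v.abs() + 1.0) ** 2
        return grad_out * sg

spike_fn = SurrGradSpike.apply
v = torch.randn(8, requires_grad=True)   # membrane potential minus threshold
spike_fn(v).sum().backward()             # gradients flow via the surrogate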

Encoding Event-Based Gesture Data With a Hybrid SNN Guided Variational Auto-encoder

no code implementations • 29 Sep 2021 • Kenneth Michael Stewart, Andreea Danielescu, Timothy Shea, Emre Neftci

Our novel approach consists of an event-based guided Variational Autoencoder (VAE) which encodes event-based data sensed by a Dynamic Vision Sensor (DVS) into a latent space representation suitable for computing the similarity of mid-air gesture data.

Gesture Recognition · Self-Supervised Learning
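
A hedged sketch of the general recipe, with a plain feed-forward encoder standing in for the paper's hybrid SNN encoder (all dimensions hypothetical): encode two inputs into the VAE latent space and compare them there.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=1024, z_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)      # latent mean head
        self.logvar = nn.Linear(256, z_dim)  # latent log-variance head

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

enc = Encoder()
x1, x2 = torch.randn(1, 1024), torch.randn(1, 1024)  # stand-ins for event frames
(mu1, _), (mu2, _) = enc(x1), enc(x2)
similarity = F.cosine_similarity(mu1, mu2)  # latent-space gesture similarity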

Training Spiking Neural Networks Using Lessons From Deep Learning

3 code implementations • 27 Sep 2021 • Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, Wei D. Lu

This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural networks.
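
This tutorial is accompanied by the snnTorch library. A minimal sketch of simulating one leaky integrate-and-fire neuron over discrete time steps (API as in recent snnTorch releases; input values are stand-ins):

import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)        # leaky integrate-and-fire with decay beta
mem = lif.init_leaky()           # initialize membrane potential state
spk_rec = []
for step in range(100):
    cur = torch.rand(1)          # stand-in input current at this time step
    spk, mem = lif(cur, mem)     # one discrete-time update: spike + new state
    spk_rec.append(spk)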

Tightening the Biological Constraints on Gradient-Based Predictive Coding

1 code implementation • 30 Apr 2021 • Nick Alonso, Emre Neftci

This finding suggests that this gradient-based PC model may be useful for understanding how the brain solves the credit assignment problem.

Hessian Aware Quantization of Spiking Neural Networks

1 code implementation • 29 Apr 2021 • Hin Wai Lui, Emre Neftci

To address this challenge, we present a simplified neuron model that reduces the number of state variables four-fold while remaining compatible with gradient-based training.

Quantization
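
The "Hessian aware" ingredient in such methods is typically a Hessian trace estimate used as a per-layer sensitivity signal. A generic Hutchinson-style sketch in PyTorch, illustrating the standard technique (the paper's exact procedure may differ):

import torch

w = torch.randn(10, requires_grad=True)
x = torch.randn(10)
loss = torch.sin(w @ x)                        # stand-in differentiable loss

grad, = torch.autograd.grad(loss, w, create_graph=True)
trace_est, n_samples = 0.0, 64
for _ in range(n_samples):
    v = torch.randint(0, 2, w.shape).float() * 2 - 1    # Rademacher probe
    hv, = torch.autograd.grad(grad @ v, w, retain_graph=True)  # Hessian-vector product
    trace_est += (v @ hv).item() / n_samples   # E[v^T H v] equals tr(H)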

Encoding Event-Based Data With a Hybrid SNN Guided Variational Auto-encoder in Neuromorphic Hardware

no code implementations • 31 Mar 2021 • Kenneth Stewart, Andreea Danielescu, Timothy Shea, Emre Neftci

We also implement the encoder component of the model on neuromorphic hardware and discuss the potential for our algorithm to enable real-time learning from real-world event data.

Clustering · Gesture Recognition

Neural Sampling Machine with Stochastic Synapse allows Brain-like Learning and Inference

no code implementations • 20 Feb 2021 • Sourav Dutta, Georgios Detorakis, Abhishek Khanna, Benjamin Grisafe, Emre Neftci, Suman Datta

We experimentally show that the inherent stochastic switching of the selector element between the insulator and metallic state introduces a multiplicative stochastic noise within the synapses of NSM that samples the conductance states of the FeFET, both during learning and inference.

Bayesian Inference · Decision Making +1
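
In software terms, multiplicative synaptic noise of this kind can be mimicked by randomly gating each synapse per forward pass, so repeated passes act as Monte Carlo samples; a rough sketch (the paper realizes the noise physically in FeFET/selector devices):

import torch

def noisy_linear(x, w, p_on=0.7):
    mask = torch.bernoulli(torch.full_like(w, p_on))  # per-synapse on/off gate
    return x @ (w * mask).T

w = torch.randn(5, 10)
x = torch.randn(1, 10)
samples = torch.stack([noisy_linear(x, w) for _ in range(100)])
mean, var = samples.mean(0), samples.var(0)  # inference as sampling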

Domain Adaptation In Reinforcement Learning Via Latent Unified State Representation

1 code implementation • 10 Feb 2021 • Jinwei Xing, Takashi Nagata, Kexin Chen, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar

To address this issue, we propose a two-stage RL agent that first learns a latent unified state representation (LUSR) that is consistent across multiple domains, and then performs RL training in one source domain based on the LUSR in the second stage.

Autonomous Driving · Domain Adaptation +5
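
A toy skeleton of the two-stage structure (all modules, dimensions, and the REINFORCE-style objective are hypothetical stand-ins, not the paper's code): freeze the stage-1 encoder and train only a policy head on its latents.

import torch
import torch.nn as nn

enc = nn.Linear(16, 8)                    # stand-in for the stage-1 LUSR encoder
enc.requires_grad_(False)                 # frozen after stage 1
policy = nn.Linear(8, 4)                  # stage-2 policy head
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

obs = torch.randn(32, 16)                 # stand-in source-domain observations
returns = torch.randn(32)                 # stand-in returns (toy objective)
dist = torch.distributions.Categorical(logits=policy(enc(obs)))
actions = dist.sample()
loss = -(dist.log_prob(actions) * returns).mean()   # REINFORCE-style toy loss
opt.zero_grad(); loss.backward(); opt.step()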

Online Few-shot Gesture Learning on a Neuromorphic Processor

no code implementations • 3 Aug 2020 • Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci

We present the Surrogate-gradient Online Error-triggered Learning (SOEL) system for online few-shot learning on neuromorphic processors.

Few-Shot Learning · Gesture Recognition +1
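
A rough sketch of the error-triggered idea in an illustrative delta-rule form (not the published SOEL rule): accumulate the error and apply a weight update only when it crosses a threshold, making updates sparse in time.

import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 8)) * 0.1
theta_err = 0.5                      # error threshold (illustrative)
lr = 0.05

for t in range(200):
    x = rng.random(8)                # presynaptic activity (stand-in)
    y = w @ x
    target = np.ones(4)              # stand-in target activity
    err = target - y
    gated = np.where(np.abs(err) > theta_err, err, 0.0)  # trigger condition
    w += lr * np.outer(gated, x)     # update fires only for large errors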

On-chip Few-shot Learning with Surrogate Gradient Descent on a Neuromorphic Processor

no code implementations • 11 Oct 2019 • Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci

Recent work suggests that synaptic plasticity dynamics in biological models of neurons and neuromorphic hardware are compatible with gradient-based learning (Neftci et al., 2019).

Few-Shot Learning · Transfer Learning

Embodied Neuromorphic Vision with Event-Driven Random Backpropagation

no code implementations • 9 Apr 2019 • Jacques Kaiser, Alexander Friedrich, J. Camilo Vasquez Tieck, Daniel Reichard, Arne Roennau, Emre Neftci, Rüdiger Dillmann

In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements.

Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)

3 code implementations • 27 Nov 2018 • Jacques Kaiser, Hesham Mostafa, Emre Neftci

A relatively smaller body of work, however, discusses similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks.

Contrastive Hebbian Learning with Random Feedback Weights

1 code implementation • 19 Jun 2018 • Georgios Detorakis, Travis Bartley, Emre Neftci

It operates in two phases: a forward (or free) phase, in which the data are fed to the network, and a backward (or clamped) phase, in which the target signals are clamped to the output layer and the feedback signals are transformed through the transposed synaptic weight matrices.
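
A minimal sketch of this two-phase scheme with the paper's twist of fixed random feedback weights in place of the transposes (settling dynamics heavily simplified; all sizes illustrative):

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(size=(n_hid, n_in)) * 0.1    # input -> hidden
W2 = rng.normal(size=(n_out, n_hid)) * 0.1   # hidden -> output
B = rng.normal(size=(n_hid, n_out)) * 0.1    # fixed random feedback (not W2.T)
f = np.tanh
gamma, lr = 0.1, 0.01

def settle(x, y_clamp=None, n_steps=20):
    h, y = np.zeros(n_hid), np.zeros(n_out)
    for _ in range(n_steps):
        h = f(W1 @ x + gamma * (B @ y))      # feedback enters via random B
        y = y_clamp if y_clamp is not None else f(W2 @ h)
    return h, y

x, target = rng.random(n_in), rng.random(n_out)
h_free, y_free = settle(x)                   # free (forward) phase
h_clmp, y_clmp = settle(x, y_clamp=target)   # clamped (backward) phase

W1 += lr * (np.outer(h_clmp, x) - np.outer(h_free, x))   # contrastive updates
W2 += lr * (np.outer(y_clmp, h_clmp) - np.outer(y_free, h_free))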

Neuromorphic Deep Learning Machines

1 code implementation • 16 Dec 2016 • Emre Neftci, Charles Augustine, Somnath Paul, Georgios Detorakis

Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations in neuromorphic computing hardware.
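
The core of such a rule can be sketched without the spiking dynamics: broadcast the output error through fixed random weights and gate the hidden update with a boxcar surrogate derivative. This is an illustrative rate-based reduction of the eRBP idea, not the event-driven hardware rule itself.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(size=(n_hid, n_in)) * 0.1
W2 = rng.normal(size=(n_out, n_hid)) * 0.1
G = rng.normal(size=(n_hid, n_out)) * 0.1    # fixed random feedback weights
lr = 0.05

def boxcar(u, a=-0.5, b=0.5):
    # Boxcar surrogate derivative: plasticity is enabled near threshold.
    return ((u > a) & (u < b)).astype(float)

x = rng.random(n_in)
h_u = W1 @ x
h = np.tanh(h_u)
y = W2 @ h
err = y - np.ones(n_out)                     # output error (stand-in target)

W2 -= lr * np.outer(err, h)                  # delta rule at the output
W1 -= lr * np.outer((G @ err) * boxcar(h_u), x)  # random-feedback error, gated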

Training a Probabilistic Graphical Model with Resistive Switching Electronic Synapses

no code implementations • 27 Sep 2016 • S. Burc Eryilmaz, Emre Neftci, Siddharth Joshi, Sang-Bum Kim, Matthew BrightSky, Hsiang-Lan Lung, Chung Lam, Gert Cauwenberghs, H.-S. Philip Wong

Current large scale implementations of deep learning and data mining require thousands of processors, massive amounts of off-chip memory, and consume gigajoules of energy.

Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity

no code implementations • 11 Jul 2016 • Bruno U. Pedroni, Sadique Sheik, Siddharth Joshi, Georgios Detorakis, Somnath Paul, Charles Augustine, Emre Neftci, Gert Cauwenberghs

We present a novel method for realizing both causal and acausal weight updates using only forward lookup access of the synaptic connectivity table, permitting memory-efficient implementation.
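
A hedged sketch of the presynaptic-event-triggered pattern (the paper's buffering scheme is simplified here): at each presynaptic spike, a single pass over the forward connectivity table applies the acausal (LTD) update from the stored postsynaptic spike times and the deferred causal (LTP) update for postsynaptic spikes since the previous presynaptic spike.

import numpy as np

n_pre, n_post = 6, 4
rng = np.random.default_rng(0)
W = rng.random((n_pre, n_post)) * 0.5
fanout = {i: list(range(n_post)) for i in range(n_pre)}  # forward table
last_post = np.full(n_post, -np.inf)   # last postsynaptic spike times
A_plus, A_minus, tau = 0.01, 0.012, 20.0

def on_pre_spike(i, t, t_prev_pre):
    for j in fanout[i]:                 # forward lookup only, no reverse table
        dt_acausal = t - last_post[j]   # post fired before this pre spike: LTD
        W[i, j] -= A_minus * np.exp(-dt_acausal / tau)
        if last_post[j] > t_prev_pre:   # post after previous pre spike: deferred LTP
            dt_causal = last_post[j] - t_prev_pre
            W[i, j] += A_plus * np.exp(-dt_causal / tau)

last_post[2] = 5.0                      # pretend postsynaptic neuron 2 fired at t=5
on_pre_spike(i=0, t=12.0, t_prev_pre=3.0)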

TrueHappiness: Neuromorphic Emotion Recognition on TrueNorth

no code implementations • 16 Jan 2016 • Peter U. Diehl, Bruno U. Pedroni, Andrew Cassidy, Paul Merolla, Emre Neftci, Guido Zarrella

We present an approach to constructing a neuromorphic device that responds to language input by producing neuron spikes in proportion to the strength of the appropriate positive or negative emotional response.

Emotion Recognition · Sentiment Analysis

Learning Non-deterministic Representations with Energy-based Ensembles

no code implementations • 23 Dec 2014 • Maruan Al-Shedivat, Emre Neftci, Gert Cauwenberghs

These mappings are encoded in a distribution over a (possibly infinite) collection of models.

One-Shot Learning

Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

no code implementations • 5 Nov 2013 • Emre Neftci, Srinjoy Das, Bruno Pedroni, Kenneth Kreutz-Delgado, Gert Cauwenberghs

However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not map directly onto a dynamical neural substrate.

Dimensionality Reduction
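
For reference, the discrete-update baseline the abstract contrasts against is one step of CD-1 on a Bernoulli RBM, shown here in its standard textbook form (the paper's contribution replaces these updates with event-driven spiking dynamics):

import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 12, 6
W = rng.normal(size=(n_v, n_h)) * 0.1
b_v, b_h = np.zeros(n_v), np.zeros(n_h)
lr = 0.1

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def sample(p): return (rng.random(p.shape) < p).astype(float)

v0 = (rng.random(n_v) < 0.5).astype(float)       # data vector (stand-in)
ph0 = sigmoid(v0 @ W + b_h); h0 = sample(ph0)    # positive phase
pv1 = sigmoid(W @ h0 + b_v); v1 = sample(pv1)    # one-step reconstruction
ph1 = sigmoid(v1 @ W + b_h)                      # negative phase

W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # contrastive update
b_v += lr * (v0 - v1)
b_h += lr * (ph0 - ph1)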
