Search Results for author: Emre Neftci

Found 19 papers, 7 papers with code

Tightening the Biological Constraints on Gradient-Based Predictive Coding

1 code implementation · 30 Apr 2021 · Nick Alonso, Emre Neftci

This finding suggests that this gradient-based PC model may be useful for understanding how the brain solves the credit assignment problem.

Hessian Aware Quantization of Spiking Neural Networks

1 code implementation · 29 Apr 2021 · Hin Wai Lui, Emre Neftci

To address this challenge, we present a simplified neuron model that reduces the number of state variables 4-fold while remaining compatible with gradient-based training.


Gesture Similarity Analysis on Event Data Using a Hybrid Guided Variational Auto Encoder

no code implementations · 31 Mar 2021 · Kenneth Stewart, Andreea Danielescu, Lazar Supic, Timothy Shea, Emre Neftci

Furthermore, we argue that the resulting event-based encoder and pseudo-labeling system are suitable for implementation in neuromorphic hardware for online adaptation and learning of natural mid-air gestures.

Tasks: Gesture Recognition

Neural Sampling Machine with Stochastic Synapse allows Brain-like Learning and Inference

no code implementations · 20 Feb 2021 · Sourav Dutta, Georgios Detorakis, Abhishek Khanna, Benjamin Grisafe, Emre Neftci, Suman Datta

We experimentally show that the inherent stochastic switching of the selector element between the insulator and metallic state introduces a multiplicative stochastic noise within the synapses of NSM that samples the conductance states of the FeFET, both during learning and inference.

Tasks: Bayesian Inference, Decision Making (+1)
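The multiplicative synaptic noise described in the abstract can be pictured as an always-on, DropConnect-style Bernoulli gate on each synapse, active during both learning and inference. A minimal sketch, assuming independent Bernoulli gating and a sigmoid readout (the function name `nsm_forward` and the gating probability `p` are illustrative, not the paper's exact device model):

```python
import numpy as np

rng = np.random.default_rng(0)

def nsm_forward(x, W, p=0.5):
    """Forward pass with multiplicative stochastic synapses.

    Each synapse is gated by an independent Bernoulli(p) sample on
    every pass, mimicking the stochastic insulator/metal switching of
    the selector element; the same noise is present during learning
    and inference."""
    mask = rng.random(W.shape) < p        # stochastic synapse states
    pre = x @ (W * mask).T / p            # rescale to preserve the mean
    return 1.0 / (1.0 + np.exp(-pre))     # sigmoid readout

x = rng.standard_normal((4, 8))           # batch of 4 inputs
W = rng.standard_normal((3, 8)) * 0.1
y1, y2 = nsm_forward(x, W), nsm_forward(x, W)
# repeated passes differ because the synaptic noise is resampled
```

Because the noise is multiplicative rather than additive, silent synapses stay silent, and the network effectively samples from an ensemble of subnetworks at every pass.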

Domain Adaptation In Reinforcement Learning Via Latent Unified State Representation

1 code implementation · 10 Feb 2021 · Jinwei Xing, Takashi Nagata, Kexin Chen, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar

To address this issue, we propose a two-stage RL agent that first learns a latent unified state representation (LUSR) consistent across multiple domains, and then performs RL training in a single source domain on top of the LUSR.

Tasks: Autonomous Driving, Domain Adaptation (+2)
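The two-stage structure can be sketched compactly: stage 1 fits a representation on observations pooled from several domains, stage 2 trains a policy only on the frozen latents. This is a heavily simplified stand-in — the paper uses a cycle-consistent VAE-style model for the LUSR, whereas the PCA encoder, domain data, and linear policy below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stage 1: learn a latent representation shared across domains.
# A linear (PCA/SVD) encoder fit on observations pooled from two toy
# "domains" stands in for the LUSR model.
obs_domain_a = rng.standard_normal((200, 16)) + 1.0
obs_domain_b = rng.standard_normal((200, 16)) - 1.0
pooled = np.vstack([obs_domain_a, obs_domain_b])
pooled -= pooled.mean(axis=0)
_, _, Vt = np.linalg.svd(pooled, full_matrices=False)
encoder = Vt[:8].T                      # 16-d observation -> 8-d latent

def encode(obs):
    return obs @ encoder                # frozen after stage 1

# Stage 2: RL runs only in the source domain, on top of the frozen
# latent; a policy over latents can transfer to target domains because
# the representation is shared.
policy_w = rng.standard_normal((8, 3)) * 0.1

def act(obs):
    return int(np.argmax(encode(obs) @ policy_w))
```

The key design point the sketch preserves is the separation of concerns: domain invariance is handled entirely in stage 1, so the stage-2 policy never sees raw, domain-specific observations.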

Online Few-shot Gesture Learning on a Neuromorphic Processor

no code implementations · 3 Aug 2020 · Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci

We present the Surrogate-gradient Online Error-triggered Learning (SOEL) system for online few-shot learning on neuromorphic processors.

Tasks: Few-Shot Learning, Gesture Recognition (+1)
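The "error-triggered" part of SOEL — applying a plasticity update only when the output error crosses a threshold — can be sketched in a few lines. The threshold `theta`, the eligibility trace, and the outer-product update form below are assumptions for illustration, not the exact SOEL rule:

```python
import numpy as np

def soel_step(w, pre_trace, y, target, theta=0.1, lr=0.01):
    """One error-triggered plasticity step.

    Weights are updated only when the output error exceeds a threshold
    `theta`, so most time steps trigger no synaptic writes — attractive
    on neuromorphic hardware where weight updates are expensive."""
    err = y - target
    triggered = np.abs(err) > theta            # error-triggered gate
    dw = -lr * np.outer(err * triggered, pre_trace)
    return w + dw, triggered

rng = np.random.default_rng(1)
w = rng.standard_normal((2, 5)) * 0.1
pre = rng.random(5)                 # presynaptic eligibility trace
w2, trig = soel_step(w, pre,
                     y=np.array([0.5, 0.05]),
                     target=np.array([1.0, 0.0]))
# only the first output's error exceeds theta, so only its row updates
```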

On-chip Few-shot Learning with Surrogate Gradient Descent on a Neuromorphic Processor

no code implementations · 11 Oct 2019 · Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci

Recent work suggests that synaptic plasticity dynamics in biological models of neurons and neuromorphic hardware are compatible with gradient-based learning (Neftci et al., 2019).

Tasks: Few-Shot Learning, Transfer Learning

Embodied Neuromorphic Vision with Event-Driven Random Backpropagation

no code implementations · 9 Apr 2019 · Jacques Kaiser, Alexander Friedrich, J. Camilo Vasquez Tieck, Daniel Reichard, Arne Roennau, Emre Neftci, Rüdiger Dillmann

In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements.

Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)

3 code implementations · 27 Nov 2018 · Jacques Kaiser, Hesham Mostafa, Emre Neftci

A relatively smaller body of work, however, discusses similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks.

Contrastive Hebbian Learning with Random Feedback Weights

1 code implementation · 19 Jun 2018 · Georgios Detorakis, Travis Bartley, Emre Neftci

It operates in two phases: a forward (or free) phase, in which the data are fed to the network, and a backward (or clamped) phase, in which the target signals are clamped to the output layer and the feedback signals are transformed through the transpose synaptic weight matrices.
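The free/clamped structure can be sketched for a single hidden layer, with the paper's random feedback weights `B` in place of the transpose `W2.T` of standard contrastive Hebbian learning. This one-shot version omits the recurrent settling dynamics; `gamma`, the layer sizes, and the sigmoid units are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
sig = lambda u: 1.0 / (1.0 + np.exp(-u))

# one hidden layer; B is a fixed random feedback matrix that replaces
# the transpose W2.T of standard contrastive Hebbian learning
W1 = rng.standard_normal((16, 8)) * 0.1
W2 = rng.standard_normal((4, 16)) * 0.1
B  = rng.standard_normal((16, 4)) * 0.1    # random feedback weights

def chl_update(x, target, gamma=0.5, lr=0.05):
    # free phase: the data alone drive the network
    h_free = sig(W1 @ x)
    y_free = sig(W2 @ h_free)
    # clamped phase: the target is clamped at the output and fed back
    # to the hidden layer through the random matrix B
    h_clamp = sig(W1 @ x + gamma * B @ target)
    y_clamp = target
    # contrastive Hebbian rule: clamped minus free correlations
    dW1 = lr * (np.outer(h_clamp, x) - np.outer(h_free, x))
    dW2 = lr * (np.outer(y_clamp, h_clamp) - np.outer(y_free, h_free))
    return dW1, dW2

x = rng.standard_normal(8)
target = np.array([0.0, 1.0, 0.0, 0.0])
dW1, dW2 = chl_update(x, target)
```

When the clamped and free phases agree, the two correlation terms cancel and learning stops, which is the fixed-point condition of contrastive Hebbian learning.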

Neuromorphic Deep Learning Machines

1 code implementation · 16 Dec 2016 · Emre Neftci, Charles Augustine, Somnath Paul, Georgios Detorakis

Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations in neuromorphic computing hardware.
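The core of an error-modulated rule like eRBP is that the output error, projected through a fixed random matrix, locally modulates an otherwise Hebbian update. A minimal sketch, assuming a rate-coded abstraction of the spiking rule (the matrix `G`, the update form, and the names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def erbp_update(pre_spikes, err, G, lr=0.01):
    """Error-modulated plasticity sketch.

    The output error is projected through a fixed random matrix G onto
    each hidden neuron, and every synapse moves in proportion to
    (local error modulation) x (presynaptic activity) — no weight
    transport, no backward pass through W."""
    mod = G @ err                       # per-neuron error modulation
    return -lr * np.outer(mod, pre_spikes)

G = rng.standard_normal((16, 4)) * 0.1      # fixed random feedback
pre = (rng.random(8) < 0.3).astype(float)   # presynaptic spike vector
err = np.array([0.2, -0.1, 0.0, 0.05])      # output error
dW = erbp_update(pre, err, G)
# synapses of silent presynaptic neurons receive no update
```

The event-driven character of the original rule corresponds to evaluating this update only at spike times, which is what makes it cheap on neuromorphic hardware.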

Training a Probabilistic Graphical Model with Resistive Switching Electronic Synapses

no code implementations · 27 Sep 2016 · S. Burc Eryilmaz, Emre Neftci, Siddharth Joshi, Sang-Bum Kim, Matthew BrightSky, Hsiang-Lan Lung, Chung Lam, Gert Cauwenberghs, H. -S. Philip Wong

Current large-scale implementations of deep learning and data mining require thousands of processors and massive amounts of off-chip memory, and consume gigajoules of energy.

Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity

no code implementations · 11 Jul 2016 · Bruno U. Pedroni, Sadique Sheik, Siddharth Joshi, Georgios Detorakis, Somnath Paul, Charles Augustine, Emre Neftci, Gert Cauwenberghs

We present a novel method for realizing both causal and acausal weight updates using only forward lookup access of the synaptic connectivity table, permitting memory-efficient implementation.
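The idea of realizing both causal and acausal updates with forward lookup only can be sketched with nearest-neighbor STDP: postsynaptic spikes only update bookkeeping, and all weight writes happen at presynaptic events — the acausal (post-before-pre) depression immediately, and the causal (pre-before-post) potentiation deferred from the previous presynaptic spike. The toy connectivity table, time constant, and amplitudes below are illustrative assumptions:

```python
import numpy as np

# forward connectivity table only: presynaptic id -> postsynaptic targets
fwd_table = {0: [0, 1], 1: [1, 2]}
W = np.zeros((3, 2))                  # post x pre weights
last_pre = np.full(2, -np.inf)        # last presynaptic spike times
last_post = np.full(3, -np.inf)       # last postsynaptic spike times
TAU, A_PLUS, A_MINUS = 20.0, 0.10, 0.12

def on_post_spike(post_id, t):
    last_post[post_id] = t            # bookkeeping only, no weight access

def on_pre_spike(pre_id, t):
    """All weight updates happen here, via forward lookup only."""
    for post_id in fwd_table[pre_id]:
        if last_post[post_id] > last_pre[pre_id] and np.isfinite(last_pre[pre_id]):
            # post fired after the previous pre spike: deferred causal LTP
            dt = last_post[post_id] - last_pre[pre_id]
            W[post_id, pre_id] += A_PLUS * np.exp(-dt / TAU)
        if np.isfinite(last_post[post_id]):
            # post fired before this pre spike: acausal LTD
            dt = t - last_post[post_id]
            W[post_id, pre_id] -= A_MINUS * np.exp(-dt / TAU)
    last_pre[pre_id] = t
```

Because no reverse (post-to-pre) table is ever consulted, the connectivity memory can be stored once in forward order, which is the memory saving the abstract refers to.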

TrueHappiness: Neuromorphic Emotion Recognition on TrueNorth

no code implementations · 16 Jan 2016 · Peter U. Diehl, Bruno U. Pedroni, Andrew Cassidy, Paul Merolla, Emre Neftci, Guido Zarrella

We present an approach to constructing a neuromorphic device that responds to language input by producing neuron spikes in proportion to the strength of the appropriate positive or negative emotional response.

Tasks: Emotion Recognition, Sentiment Analysis

Learning Non-deterministic Representations with Energy-based Ensembles

no code implementations · 23 Dec 2014 · Maruan Al-Shedivat, Emre Neftci, Gert Cauwenberghs

These mappings are encoded in a distribution over a (possibly infinite) collection of models.

Tasks: One-Shot Learning

Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

no code implementations · 5 Nov 2013 · Emre Neftci, Srinjoy Das, Bruno Pedroni, Kenneth Kreutz-Delgado, Gert Cauwenberghs

However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not map directly onto a dynamical neural substrate.

Tasks: Dimensionality Reduction
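For reference, the discrete, exact-arithmetic baseline the abstract contrasts with — one CD-1 step for a binary RBM — looks like this. The sketch shows the standard algorithm, not the paper's event-driven variant; batch size, layer sizes, and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
sig = lambda u: 1.0 / (1.0 + np.exp(-u))

def cd1_update(v0, W, b_v, b_h, lr=0.05):
    """One step of standard Contrastive Divergence (CD-1) for an RBM.

    This is the discrete update that event-driven CD replaces with
    continuous spiking dynamics."""
    # positive phase: hidden probabilities and a binary sample
    ph0 = sig(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # one Gibbs step (negative phase)
    pv1 = sig(h0 @ W.T + b_v)
    ph1 = sig(pv1 @ W + b_h)
    # discrete CD weight update: data minus model correlations
    dW = lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    return W + dW

v = (rng.random((10, 6)) < 0.5).astype(float)   # batch of binary data
W0 = rng.standard_normal((6, 4)) * 0.1
W1 = cd1_update(v, W0, b_v=np.zeros(6), b_h=np.zeros(4))
```

The explicit phases and synchronous matrix updates here are exactly the features that have no direct analogue in a continuously evolving spiking substrate, which is the gap the paper addresses.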
