Search Results for author: Jason K. Eshraghian

Found 18 papers, 11 papers with code

To Spike or Not To Spike: A Digital Hardware Perspective on Deep Learning Acceleration

1 code implementation • 27 Jun 2023 • Fabrizio Ottati, Chang Gao, Qinyu Chen, Giovanni Brignone, Mario R. Casu, Jason K. Eshraghian, Luciano Lavagno

The power efficiency of the biological brain outperforms any large-scale deep learning (DL) model; neuromorphic computing therefore aims to mimic brain operations, such as spike-based information processing, to improve the efficiency of DL models.

Memristive Reservoirs Learn to Learn

no code implementations • 22 Jun 2023 • Ruomin Zhu, Jason K. Eshraghian, Zdenka Kuncic

Using the framework, we successfully identify the optimal hyperparameters for the reservoir.

PowerGAN: A Machine Learning Approach for Power Side-Channel Attack on Compute-in-Memory Accelerators

no code implementations • 13 Apr 2023 • Ziyu Wang, Yuting Wu, Yongmo Park, Sangmin Yoo, Xinxin Wang, Jason K. Eshraghian, Wei D. Lu

Analog compute-in-memory (CIM) systems are promising for deep neural network (DNN) inference acceleration due to their energy efficiency and high throughput.

Generative Adversarial Network

SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks

1 code implementation • 27 Feb 2023 • Rui-Jie Zhu, Qihang Zhao, Guoqi Li, Jason K. Eshraghian

As a result, their performance lags behind modern deep learning, and we are yet to see the effectiveness of SNNs in language generation.

Language Modelling, Text Generation

OpenSpike: An OpenRAM SNN Accelerator

1 code implementation • 2 Feb 2023 • Farhad Modaresi, Matthew Guthaus, Jason K. Eshraghian

This paper presents a spiking neural network (SNN) accelerator made using fully open-source EDA tools, process design kit (PDK), and memory macros synthesized using OpenRAM.

Intelligence Processing Units Accelerate Neuromorphic Learning

1 code implementation • 19 Nov 2022 • Pao-Sheng Vincent Sun, Alexander Titterton, Anjlee Gopiani, Tim Santos, Arindam Basu, Wei D. Lu, Jason K. Eshraghian

Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency when performing inference with deep learning workloads.

Spiking neural networks for nonlinear regression

1 code implementation • 6 Oct 2022 • Alexander Henkes, Jason K. Eshraghian, Henning Wessels

To overcome this problem, a framework for regression using spiking neural networks is proposed.

regression
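
One common way to read a continuous regression target out of a spiking network is to drive a non-spiking leaky integrator with hidden-layer spikes and treat its membrane potential as the output. The sketch below illustrates that idea only; the paper's actual architecture may differ, and all names and constants are illustrative.

```python
import numpy as np

def lif_step(v, i_in, beta=0.9, threshold=1.0):
    """One step of a leaky integrate-and-fire (LIF) layer."""
    v = beta * v + i_in                    # leaky integration
    spikes = (v >= threshold).astype(float)
    v = v - spikes * threshold             # reset by subtraction
    return v, spikes

rng = np.random.default_rng(0)
T, n_hidden = 100, 32
w_in = rng.normal(0, 0.5, n_hidden)        # input -> hidden weights
w_out = rng.normal(0, 0.5, n_hidden)       # hidden -> readout weights

x = 0.8                                    # scalar input, applied each step
v_h = np.zeros(n_hidden)
v_out = 0.0                                # non-spiking readout integrator
for t in range(T):
    v_h, s_h = lif_step(v_h, w_in * x)
    v_out = 0.95 * v_out + w_out @ s_h     # membrane potential = output

print("regression output (readout membrane potential):", v_out)
```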

Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays

no code implementations • 26 Jun 2022 • Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Wei D. Lu, Sung-Mo Kang

We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs).

SPICEprop: Backpropagating Errors Through Memristive Spiking Neural Networks

no code implementations • 2 Mar 2022 • Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Sung-Mo Kang

The natural spiking dynamics of the MIF neuron model are fully differentiable, eliminating the need for gradient approximations that are prevalent in the spiking neural network literature.
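
To see why fully differentiable spike dynamics matter: a hard threshold has zero gradient almost everywhere, which is what forces most SNN pipelines onto gradient approximations. In the sketch below, a sigmoid-shaped spike onset is only a stand-in for smooth neuron dynamics; the actual MIF equations derive from memristor device physics.

```python
import torch

# Hard threshold: the derivative is zero almost everywhere, so
# backpropagation gets no learning signal without a surrogate.
v = torch.tensor(0.8, requires_grad=True)
hard_spike = (v >= 1.0).float()            # d(hard_spike)/dv == 0

# A spike produced by smooth, differentiable dynamics passes real
# gradients to autograd. The sigmoid is an illustrative stand-in
# for the MIF neuron's differentiable spiking waveform.
soft_spike = torch.sigmoid(10.0 * (v - 1.0))
soft_spike.backward()
print("gradient through smooth spike:", v.grad.item())  # non-zero
```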

A Fully Memristive Spiking Neural Network with Unsupervised Learning

no code implementations • 2 Mar 2022 • Peng Zhou, Dong-Uk Choi, Jason K. Eshraghian, Sung-Mo Kang

We present a fully memristive spiking neural network (MSNN) consisting of physically realizable memristive neurons and memristive synapses to implement an unsupervised spike-timing-dependent plasticity (STDP) learning rule.

Multi-class Classification, Retrieval
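
For context, the textbook pair-based form of STDP potentiates a synapse when the presynaptic spike precedes the postsynaptic one and depresses it otherwise, with an exponential dependence on the timing difference. The paper realizes this rule in memristive hardware; the sketch below is only the software version, with illustrative constants.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight update as a function of
    delta_t = t_post - t_pre (milliseconds)."""
    if delta_t > 0:    # pre fired before post -> potentiate
        return a_plus * np.exp(-delta_t / tau)
    return -a_minus * np.exp(delta_t / tau)   # otherwise -> depress

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"t_post - t_pre = {dt:+4d} ms -> dw = {stdp_dw(dt):+.5f}")
```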

Navigating Local Minima in Quantized Spiking Neural Networks

1 code implementation • 15 Feb 2022 • Jason K. Eshraghian, Corey Lammie, Mostafa Rahimi Azghadi, Wei D. Lu

Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms.

Navigate

The fine line between dead neurons and sparsity in binarized spiking neural networks

1 code implementation • 28 Jan 2022 • Jason K. Eshraghian, Wei D. Lu

Spiking neural networks can compensate for quantization error by encoding information either in the temporal domain, or by processing discretized quantities in hidden states of higher precision.

Quantization
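
A minimal sketch of the second mechanism described above: weights are binarized to a single bit (with a scale factor), while the membrane potential, the network's hidden state, stays at full precision, so per-step quantization error integrates over time instead of being rounded away. All names and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
w_full = rng.normal(0, 0.2, 64)
w_bin = np.sign(w_full) * np.abs(w_full).mean()  # 1-bit weights + scale

v, beta, threshold = 0.0, 0.9, 1.0
x = rng.random(64)                    # input activity, repeated each step
spike_count = 0
for t in range(50):
    v = beta * v + w_bin @ x          # full-precision accumulation
    if v >= threshold:                # spike timing carries the residual
        spike_count += 1
        v -= threshold
print("spikes over 50 steps:", spike_count)
```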

Design Space Exploration of Dense and Sparse Mapping Schemes for RRAM Architectures

no code implementations • 18 Jan 2022 • Corey Lammie, Jason K. Eshraghian, Chenqi Li, Amirali Amirsoleimani, Roman Genov, Wei D. Lu, Mostafa Rahimi Azghadi

The impact of device- and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifests as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies based on algorithmic features.

Quantization
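
A common way to study such degradation in simulation is to perturb the ideal weight matrix with a device-variability model and measure the resulting output error. The log-normal conductance noise below is a simple stand-in, not the device models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
w_ideal = rng.normal(0, 0.1, (128, 64))   # target weight matrix
x = rng.random(64)

for sigma in (0.0, 0.05, 0.1, 0.2):
    # multiplicative log-normal variation per device (illustrative)
    w_dev = w_ideal * rng.lognormal(0.0, sigma, w_ideal.shape)
    err = np.linalg.norm((w_dev - w_ideal) @ x) / np.linalg.norm(w_ideal @ x)
    print(f"sigma = {sigma:.2f} -> relative output error {err:.3f}")
```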

Training Spiking Neural Networks Using Lessons From Deep Learning

3 code implementations • 27 Sep 2021 • Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, Wei D. Lu

This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural networks (SNNs).
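
A core technique covered in this tutorial is surrogate-gradient training: a hard Heaviside spike in the forward pass paired with a smooth stand-in derivative in the backward pass. Below is a minimal PyTorch sketch using the fast-sigmoid surrogate, one of several options in this literature; the slope constant is illustrative.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward; fast-sigmoid surrogate derivative backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()           # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        slope = 25.0                      # surrogate steepness (illustrative)
        return grad_out / (1.0 + slope * v.abs()) ** 2

spike = SurrogateSpike.apply
v = torch.linspace(-1.0, 1.0, 5, requires_grad=True)
spike(v).sum().backward()
print("surrogate gradients:", v.grad.tolist())
```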

Memristive Stochastic Computing for Deep Learning Parameter Optimization

no code implementations • 11 Mar 2021 • Corey Lammie, Jason K. Eshraghian, Wei D. Lu, Mostafa Rahimi Azghadi

Stochastic Computing (SC) is a computing paradigm that allows for the low-cost and low-power computation of various arithmetic operations using stochastic bit streams and digital logic.
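
The canonical SC example: a value p in [0, 1] is encoded as the probability of ones in a bit stream, and multiplication then reduces to a bitwise AND of two independent streams, since P(a AND b) = P(a)·P(b). A short sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def to_stream(p, n_bits=4096):
    """Encode p in [0, 1] as a unipolar stochastic bit stream."""
    return rng.random(n_bits) < p

a, b = 0.6, 0.3
product_stream = to_stream(a) & to_stream(b)   # one AND gate per bit
print("exact a*b      :", a * b)
print("stochastic a*b :", product_stream.mean())
```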

Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications

1 code implementation • 11 Jul 2020 • Mostafa Rahimi Azghadi, Corey Lammie, Jason K. Eshraghian, Melika Payvand, Elisa Donati, Bernabe Linares-Barranco, Giacomo Indiveri

The advent of dedicated Deep Learning (DL) accelerators and neuromorphic processors has brought on new opportunities for applying both Deep and Spiking Neural Network (SNN) algorithms to healthcare and biomedical applications at the edge.

Electromyography (EMG), Sensor Fusion

Adaptive Precision CNN Accelerator Using Radix-X Parallel Connected Memristor Crossbars

1 code implementation • 22 Jun 2019 • Jaeheum Lee, Jason K. Eshraghian, Kyoungrok Cho, Kamran Eshraghian

This novel algorithm-hardware solution is described as the radix-X Convolutional Neural Network Crossbar Array, and we demonstrate how to efficiently represent negative weights using a single column line, rather than doubling the number of columns.
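
For intuition on signed weights in crossbars: the usual differential scheme doubles the column count, whereas an offset scheme shifts all conductances to be non-negative and subtracts a single shared reference column. The sketch below shows the generic offset idea only; the radix-X mapping itself is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(0, 0.3, (8, 4))     # signed weights: 8 inputs, 4 outputs
x = rng.random(8)                  # non-negative input voltages

offset = -w.min()                  # shift so every conductance >= 0
g = w + offset                     # programmed (non-negative) conductances
g_ref = np.full(8, offset)         # one shared reference column

y_crossbar = x @ g - x @ g_ref     # subtract the reference current once
y_exact = x @ w
print("max abs error:", np.abs(y_crossbar - y_exact).max())
```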
