1 code implementation • 24 Nov 2023 • Jens E. Pedersen, Steven Abreu, Matthias Jobst, Gregor Lenz, Vittorio Fra, Felix C. Bauer, Dylan R. Muir, Peng Zhou, Bernhard Vogginger, Kade Heckel, Gianvito Urgese, Sadasivan Shankar, Terrence C. Stewart, Jason K. Eshraghian, Sadique Sheik
Despite a well-established mathematical foundation for neural dynamics, the implementation details vary greatly across different platforms.
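For context on where implementations diverge, here is a minimal sketch of one common discretization of leaky integrate-and-fire dynamics, assuming forward-Euler integration, a soft reset, and illustrative parameter values; the decay factor, reset rule, and integration scheme are exactly the details that vary across platforms.

```python
import numpy as np

def lif_step(u, i_in, beta=0.9, threshold=1.0):
    """One forward-Euler step of a leaky integrate-and-fire neuron.

    beta is the membrane decay factor; other frameworks may instead
    expose a time constant tau, a hard reset, or a refractory period,
    which is precisely where implementations diverge.
    """
    u = beta * u + i_in                     # leaky integration of input
    spike = (u >= threshold)                # fire on threshold crossing
    u = np.where(spike, u - threshold, u)   # soft reset by subtraction
    return spike.astype(float), u

# usage: drive one neuron with a constant current for 10 steps
u = np.zeros(1)
for t in range(10):
    s, u = lif_step(u, np.array([0.3]))
```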
1 code implementation • 27 Jun 2023 • Fabrizio Ottati, Chang Gao, Qinyu Chen, Giovanni Brignone, Mario R. Casu, Jason K. Eshraghian, Luciano Lavagno
The power efficiency of the biological brain far exceeds that of any large-scale deep learning (DL) model; neuromorphic computing therefore mimics brain operations, such as spike-based information processing, to improve the efficiency of DL models.
no code implementations • 22 Jun 2023 • Ruomin Zhu, Jason K. Eshraghian, Zdenka Kuncic
Using this framework, we identify optimal hyperparameters for the reservoir.
no code implementations • 13 Apr 2023 • Ziyu Wang, Yuting Wu, Yongmo Park, Sangmin Yoo, Xinxin Wang, Jason K. Eshraghian, Wei D. Lu
Analog compute-in-memory (CIM) systems are promising for deep neural network (DNN) inference acceleration due to their energy efficiency and high throughput.
1 code implementation • 27 Feb 2023 • Rui-Jie Zhu, Qihang Zhao, Guoqi Li, Jason K. Eshraghian
As a result, their performance lags behind modern deep learning, and the effectiveness of SNNs in language generation has yet to be demonstrated.
1 code implementation • 2 Feb 2023 • Farhad Modaresi, Matthew Guthaus, Jason K. Eshraghian
This paper presents a spiking neural network (SNN) accelerator made using fully open-source EDA tools, process design kit (PDK), and memory macros synthesized using OpenRAM.
1 code implementation • 19 Nov 2022 • Pao-Sheng Vincent Sun, Alexander Titterton, Anjlee Gopiani, Tim Santos, Arindam Basu, Wei D. Lu, Jason K. Eshraghian
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency when performing inference on deep learning workloads.
1 code implementation • 6 Oct 2022 • Alexander Henkes, Jason K. Eshraghian, Henning Wessels
To overcome this problem, a framework for regression using spiking neural networks is proposed.
no code implementations • 26 Jun 2022 • Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Wei D. Lu, Sung-Mo Kang
We present MEMprop, an approach that adopts gradient-based learning to train fully memristive spiking neural networks (MSNNs).
no code implementations • 2 Mar 2022 • Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Sung-Mo Kang
The natural spiking dynamics of the MIF neuron model are fully differentiable, eliminating the need for gradient approximations that are prevalent in the spiking neural network literature.
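For contrast, the "gradient approximations" referenced here are typically surrogate gradients, which replace the non-differentiable spike on the backward pass. Below is a minimal PyTorch sketch of that standard workaround (illustrative, not the paper's code; the fast-sigmoid surrogate and its slope value are common choices, not taken from the source).

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a surrogate gradient on the backward pass.

    The forward pass is non-differentiable; the backward pass swaps in
    a smooth approximation (here, a fast-sigmoid derivative).
    """
    @staticmethod
    def forward(ctx, mem):
        ctx.save_for_backward(mem)
        return (mem > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (mem,) = ctx.saved_tensors
        slope = 25.0  # sharpness of the surrogate; a tunable heuristic
        surrogate = 1.0 / (slope * mem.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply
```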
no code implementations • 2 Mar 2022 • Peng Zhou, Dong-Uk Choi, Jason K. Eshraghian, Sung-Mo Kang
We present a fully memristive spiking neural network (MSNN) consisting of physically realizable memristive neurons and memristive synapses to implement an unsupervised Spike-Timing-Dependent Plasticity (STDP) learning rule.
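As background, here is a minimal sketch of a generic pair-based STDP weight update, with illustrative time constants and learning rates that are not taken from the paper.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates; post-before-pre depresses.
    Parameter values are illustrative, not taken from the paper.
    """
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)    # potentiation
    else:
        dw = -a_minus * np.exp(dt / tau)   # depression
    return np.clip(w + dw, 0.0, 1.0)       # keep weight in [0, 1]

w = 0.5
w = stdp_update(w, dt=5.0)    # pre fires 5 ms before post -> strengthen
w = stdp_update(w, dt=-5.0)   # post fires first -> weaken
```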
1 code implementation • 15 Feb 2022 • Jason K. Eshraghian, Corey Lammie, Mostafa Rahimi Azghadi, Wei D. Lu
Spiking and Quantized Neural Networks (NNs) are becoming increasingly important for highly efficient implementations of Deep Learning (DL) algorithms.
1 code implementation • 28 Jan 2022 • Jason K. Eshraghian, Wei D. Lu
Spiking neural networks can compensate for quantization error by encoding information either in the temporal domain, or by processing discretized quantities in hidden states of higher precision.
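A minimal sketch of the second mechanism, assuming low-bit quantized weights driving a full-precision membrane state; all parameter values are illustrative.

```python
import torch

def quantize(w, bits=4):
    """Uniform symmetric quantization of weights to the given bit width."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale) * scale

# Binary spikes in, 4-bit weights, but the membrane potential (the
# hidden state) accumulates in full float32 precision across timesteps,
# which is one way the temporal dimension absorbs quantization error.
w = quantize(torch.randn(10, 10), bits=4)
mem = torch.zeros(10)
for t in range(20):
    spikes_in = (torch.rand(10) < 0.2).float()  # Bernoulli spike train
    mem = 0.9 * mem + w @ spikes_in             # high-precision state
    spikes_out = (mem > 1.0).float()
    mem = mem - spikes_out                      # soft reset
```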
no code implementations • 18 Jan 2022 • Corey Lammie, Jason K. Eshraghian, Chenqi Li, Amirali Amirsoleimani, Roman Genov, Wei D. Lu, Mostafa Rahimi Azghadi
The impact of device- and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifests as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies based on algorithmic features.
3 code implementations • 27 Sep 2021 • Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, Wei D. Lu
This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation, and neuroscience to biologically plausible spiking neural networks.
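This is the paper accompanying the snnTorch library; below is a minimal sketch of its typical usage pattern. API details can differ across versions, so treat the specifics as illustrative.

```python
import torch
import snntorch as snn

fc = torch.nn.Linear(784, 10)   # standard dense layer
lif = snn.Leaky(beta=0.9)       # leaky integrate-and-fire neuron layer

mem = lif.init_leaky()          # initialize the membrane potential
x = torch.rand(1, 784)          # a single flattened input (e.g. an image)

# Unroll the network over timesteps, collecting output spikes.
spk_rec = []
for step in range(25):
    cur = fc(x)
    spk, mem = lif(cur, mem)    # returns output spikes and updated state
    spk_rec.append(spk)
```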
no code implementations • 11 Mar 2021 • Corey Lammie, Jason K. Eshraghian, Wei D. Lu, Mostafa Rahimi Azghadi
Stochastic Computing (SC) is a computing paradigm that allows for the low-cost and low-power computation of various arithmetic operations using stochastic bit streams and digital logic.
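As a concrete example of the paradigm, in unipolar SC a single AND gate multiplies two values encoded as independent Bernoulli bitstreams, since P(a AND b) = P(a)P(b). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_bitstream(p, n=10_000):
    """Encode a probability p in [0, 1] as a unipolar stochastic bitstream."""
    return rng.random(n) < p

# A single AND gate multiplies two independent streams.
a = to_bitstream(0.5)
b = to_bitstream(0.4)
product = np.logical_and(a, b)
print(product.mean())  # ~0.20, approximating 0.5 * 0.4
```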
1 code implementation • 11 Jul 2020 • Mostafa Rahimi Azghadi, Corey Lammie, Jason K. Eshraghian, Melika Payvand, Elisa Donati, Bernabe Linares-Barranco, Giacomo Indiveri
The advent of dedicated Deep Learning (DL) accelerators and neuromorphic processors has brought on new opportunities for applying both Deep and Spiking Neural Network (SNN) algorithms to healthcare and biomedical applications at the edge.
1 code implementation • 22 Jun 2019 • Jaeheum Lee, Jason K. Eshraghian, Kyoungrok Cho, Kamran Eshraghian
We describe this algorithm-hardware co-design as the radix-X Convolutional Neural Network Crossbar Array, and demonstrate how to represent negative weights efficiently using a single column line, rather than doubling the number of columns.
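For intuition, one generic way to fold negative weights into a non-negative crossbar with a single extra column is a constant-offset reference column; the sketch below illustrates that idea and is not necessarily the paper's radix-X encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(4, 8))   # signed weights, 4 outputs

# Conductances must be non-negative: shift every weight by a constant
# offset and add ONE reference column carrying that offset, instead of
# a full duplicate negative-weight array.
offset = np.abs(W).max()
G = W + offset                        # all entries now >= 0

x = rng.uniform(0, 1, size=8)         # input voltages
y_crossbar = G @ x                    # what the crossbar computes
y_ref = offset * x.sum()              # the single reference column
y = y_crossbar - y_ref                # recovers the signed result

assert np.allclose(y, W @ x)
```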