no code implementations • 26 Jun 2022 • Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Wei D. Lu, Sung-Mo Kang
We present MEMprop, which adopts gradient-based learning to train fully memristive spiking neural networks (MSNNs).
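As a rough illustration of gradient-based SNN training in general (not the MEMprop device model itself), the following hypothetical PyTorch sketch backpropagates through a leaky integrate-and-fire neuron using a surrogate spike gradient; the layer sizes, surrogate shape, and training target are illustrative assumptions.

```python
# Hypothetical sketch: backpropagating through a leaky integrate-and-fire (LIF)
# neuron with a surrogate spike gradient in plain PyTorch. This is a generic
# illustration of gradient-based SNN training, not the MEMprop method.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, mem):
        ctx.save_for_backward(mem)
        return (mem > 0).float()                          # hard threshold

    @staticmethod
    def backward(ctx, grad_out):
        mem, = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * mem.abs()) ** 2   # fast-sigmoid surrogate
        return grad_out * surrogate

def lif_forward(x_seq, w, beta=0.9, threshold=1.0):
    """Run a current-based LIF layer over a [T, batch, in] input sequence."""
    mem = torch.zeros(x_seq.shape[1], w.shape[1])
    spikes = []
    for x in x_seq:
        mem = beta * mem + x @ w                          # leaky integration
        spk = SurrogateSpike.apply(mem - threshold)
        mem = mem - spk * threshold                       # soft reset on spike
        spikes.append(spk)
    return torch.stack(spikes)

# Toy usage: fit the mean firing rate to a target with plain gradient descent.
torch.manual_seed(0)
w = torch.randn(8, 4, requires_grad=True)
x_seq = torch.rand(20, 1, 8)                              # 20 steps, batch of 1
for _ in range(50):
    loss = (lif_forward(x_seq, w).mean() - 0.2) ** 2
    loss.backward()
    with torch.no_grad():
        w -= 0.5 * w.grad
        w.grad.zero_()
```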
1 code implementation • 15 Feb 2022 • Jason K. Eshraghian, Corey Lammie, Mostafa Rahimi Azghadi, Wei D. Lu
Spiking and Quantized Neural Networks (NNs) are becoming increasingly important for hyper-efficient implementations of Deep Learning (DL) algorithms.
1 code implementation • 28 Jan 2022 • Jason K. Eshraghian, Wei D. Lu
Spiking neural networks can compensate for quantization error either by encoding information in the temporal domain or by processing discretized quantities in hidden states of higher precision.
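A minimal toy sketch of the second mechanism, assuming a simple LIF model and a uniform quantizer (not the paper's experimental setup): the higher-precision membrane state integrates coarsely quantized inputs, so per-step quantization error largely averages out before it affects the spike count.

```python
# Toy illustration: a higher-precision membrane potential absorbs quantization
# error from low-bit inputs. Bit width, decay, and threshold are assumptions.
import numpy as np

def quantize(x, bits=3):
    """Uniform quantization of values in [0, 1] to 2**bits - 1 levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

rng = np.random.default_rng(0)
x = rng.random(1000)                        # full-precision input stream
xq = quantize(x)                            # coarsely quantized input stream

beta, threshold = 0.95, 5.0
mem_fp = mem_q = 0.0
spikes_fp = spikes_q = 0
for xf, xl in zip(x, xq):
    mem_fp = beta * mem_fp + xf             # float membrane, float input
    mem_q = beta * mem_q + xl               # float membrane, quantized input
    if mem_fp >= threshold:
        mem_fp -= threshold; spikes_fp += 1
    if mem_q >= threshold:
        mem_q -= threshold; spikes_q += 1

# Spike counts stay close because the quantization error is integrated
# (and mostly averaged out) in the higher-precision hidden state.
print(spikes_fp, spikes_q)
```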
no code implementations • 18 Jan 2022 • Corey Lammie, Jason K. Eshraghian, Chenqi Li, Amirali Amirsoleimani, Roman Genov, Wei D. Lu, Mostafa Rahimi Azghadi
The impact of device- and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifests as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies with algorithmic features.
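One common way to study such degradation is to inject device variation into the programmed weights at inference time. The sketch below models this as multiplicative lognormal noise on an ideal weight matrix; the variation model and magnitudes are assumptions for illustration, not the paper's calibrated device models.

```python
# Hedged illustration: model RRAM device-to-device variation as multiplicative
# noise on programmed weights and measure the resulting output error.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((128, 64))          # ideal trained weight matrix
x = rng.standard_normal(128)                # one input activation vector

def program_to_rram(W, sigma):
    """Apply lognormal device variation to the programmed conductances."""
    return W * rng.lognormal(mean=0.0, sigma=sigma, size=W.shape)

y_ideal = x @ W
for sigma in (0.05, 0.1, 0.2):
    y_noisy = x @ program_to_rram(W, sigma)
    rel_err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
    print(f"sigma={sigma:.2f}  relative output error={rel_err:.3f}")
```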
2 code implementations • 27 Sep 2021 • Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, Wei D. Lu
The brain is the perfect place to look for inspiration to develop more efficient neural networks.
no code implementations • 14 May 2021 • John Moon, Wei D. Lu
Analogous to stacking layers in deep neural networks, connecting sub-reservoirs in series is an efficient way to enhance the nonlinearity of the data transformation to high-dimensional space and to expand the diversity of temporal information captured by the reservoir.
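A toy software analogue of this idea, assuming echo-state-network sub-reservoirs (the sizes, spectral radius, and two-stage depth are illustrative assumptions, not the paper's memristive implementation):

```python
# Sketch of two echo-state sub-reservoirs in series: the second stage
# re-expands the first stage's state into a new high-dimensional space.
import numpy as np

rng = np.random.default_rng(2)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.standard_normal((n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(u_seq, W_in, W):
    states, x = [], np.zeros(W.shape[0])
    for u in u_seq:
        x = np.tanh(W_in @ u + W @ x)       # standard ESN state update
        states.append(x)
    return np.array(states)

u_seq = rng.standard_normal((200, 1))       # scalar input sequence
W_in1, W1 = make_reservoir(1, 50)
W_in2, W2 = make_reservoir(50, 50)

states1 = run_reservoir(u_seq, W_in1, W1)   # first sub-reservoir
states2 = run_reservoir(states1, W_in2, W2) # second sub-reservoir, fed in series
print(states1.shape, states2.shape)         # (200, 50) (200, 50)
```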
no code implementations • 11 Mar 2021 • Corey Lammie, Jason K. Eshraghian, Wei D. Lu, Mostafa Rahimi Azghadi
Stochastic Computing (SC) is a computing paradigm that allows for the low-cost and low-power computation of various arithmetic operations using stochastic bit streams and digital logic.
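The textbook instance of this paradigm is multiplication: with unipolar stochastic bit streams, a single AND gate multiplies two probabilities. A short sketch (stream length and operand values are arbitrary; accuracy improves with longer streams):

```python
# Classic stochastic-computing example: multiply two values encoded as
# unipolar bit streams using a bitwise AND.
import numpy as np

rng = np.random.default_rng(3)

def to_stream(p, length=4096):
    """Encode a value p in [0, 1] as a random bit stream with P(bit=1) = p."""
    return rng.random(length) < p

a, b = 0.6, 0.25
stream_a, stream_b = to_stream(a), to_stream(b)

product_stream = stream_a & stream_b        # AND gate implements multiplication
print(product_stream.mean())                # ~= a * b = 0.15
```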
no code implementations • 9 Dec 2016 • Mohammed A. Zidan, YeonJoo Jeong, Jong Hong Shin, Chao Du, Zhengya Zhang, Wei D. Lu
The proposed computing architecture is based on a uniform, physical, resistive, memory-centric fabric that can be optimally reconfigured and utilized to perform different computing and data storage tasks in a massively parallel fashion.
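A minimal sketch of the in-memory computing primitive behind such resistive fabrics, assuming an idealized crossbar (conductance values below are arbitrary): applied row voltages produce per-cell currents via Ohm's law, and Kirchhoff's current law sums them along each column, yielding a matrix-vector product in a single analog step.

```python
# Idealized crossbar matrix-vector multiply: column current = sum over rows
# of V_row * G_cell. Values are illustrative, not from the paper.
import numpy as np

G = np.array([[1.0e-6, 5.0e-6, 2.0e-6],     # conductances in Siemens,
              [3.0e-6, 1.0e-6, 4.0e-6]])    # one device per row-column crossing
V = np.array([0.2, 0.5])                    # row voltages in Volts

I_columns = V @ G                           # column currents (Amps) = V . G
print(I_columns)
```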