Search Results for author: Wei D. Lu

Found 12 papers, 4 papers with code

Training Spiking Neural Networks Using Lessons From Deep Learning

3 code implementations • 27 Sep 2021 • Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, Wei D. Lu

This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural networks.
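
The core trick the tutorial builds on is replacing the non-differentiable spike with a surrogate gradient so that backpropagation-through-time can train an SNN. Below is a minimal plain-PyTorch sketch of a leaky integrate-and-fire (LIF) neuron with a fast-sigmoid surrogate; it illustrates the general technique rather than the paper's accompanying library, and the constants (decay, threshold, surrogate slope) are illustrative.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient in the backward pass."""
    @staticmethod
    def forward(ctx, mem_shifted):
        ctx.save_for_backward(mem_shifted)
        return (mem_shifted > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        slope = 25.0  # surrogate steepness (illustrative)
        return grad_output / (1.0 + slope * x.abs()) ** 2

def lif_step(x, mem, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire update: decay, integrate, spike, soft reset."""
    mem = beta * mem + x
    spk = SpikeFn.apply(mem - threshold)
    mem = mem - spk * threshold  # soft reset by subtraction
    return spk, mem

# Unroll over time and backpropagate through the surrogate gradient.
T, batch, features = 50, 8, 10
inputs = torch.rand(T, batch, features, requires_grad=True)
mem = torch.zeros(batch, features)
spikes = []
for t in range(T):
    spk, mem = lif_step(inputs[t], mem)
    spikes.append(spk)
torch.stack(spikes).mean().backward()  # gradients reach `inputs` through the surrogate
```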

Navigating Local Minima in Quantized Spiking Neural Networks

1 code implementation • 15 Feb 2022 • Jason K. Eshraghian, Corey Lammie, Mostafa Rahimi Azghadi, Wei D. Lu

Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms.

The fine line between dead neurons and sparsity in binarized spiking neural networks

1 code implementation • 28 Jan 2022 • Jason K. Eshraghian, Wei D. Lu

Spiking neural networks can compensate for quantization error by encoding information either in the temporal domain, or by processing discretized quantities in hidden states of higher precision.
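
A minimal sketch of the second option, assuming a layer with 1-bit weights (binarized through a straight-through estimator) whose membrane potential is kept in full precision as the higher-precision hidden state; layer sizes and constants are arbitrary and this is not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign-binarize weights in the forward pass; straight-through estimator in the backward pass."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        return grad_output * (w.abs() <= 1).float()  # clip gradients outside [-1, 1]

class BinarySpikingLayer(nn.Module):
    """1-bit (+1/-1) weights and 1-bit output spikes, but a full-precision membrane potential."""
    def __init__(self, in_features, out_features, beta=0.9, threshold=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.beta, self.threshold = beta, threshold

    def forward(self, x, mem):
        cur = F.linear(x, BinarizeSTE.apply(self.weight))  # binary synaptic weights
        mem = self.beta * mem + cur                        # higher-precision hidden state
        spk = (mem > self.threshold).float()               # binary output spikes
        mem = mem - spk * self.threshold
        return spk, mem

layer = BinarySpikingLayer(784, 128)
spk, mem = layer(torch.rand(32, 784), torch.zeros(32, 128))
```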

Intelligence Processing Units Accelerate Neuromorphic Learning

1 code implementation • 19 Nov 2022 • Pao-Sheng Vincent Sun, Alexander Titterton, Anjlee Gopiani, Tim Santos, Arindam Basu, Wei D. Lu, Jason K. Eshraghian

Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency when performing inference with deep learning workloads.

Field-Programmable Crossbar Array (FPCA) for Reconfigurable Computing

no code implementations • 9 Dec 2016 • Mohammed A. Zidan, YeonJoo Jeong, Jong Hong Shin, Chao Du, Zhengya Zhang, Wei D. Lu

The proposed computing architecture is based on a uniform, physical, resistive, memory-centric fabric that can be optimally reconfigured and utilized to perform different computing and data storage tasks in a massively parallel approach.
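
The memory-centric computation rests on a standard crossbar property: with inputs applied as row voltages and weights stored as device conductances, each column current is a dot product, so the whole array performs a matrix-vector multiply in one read. A toy NumPy illustration of that property (conductance and voltage values are arbitrary), not the FPCA architecture itself:

```python
import numpy as np

# Toy crossbar: rows driven by input voltages, columns read out as summed currents.
rows, cols = 4, 3
G = np.random.uniform(1e-6, 1e-4, size=(rows, cols))  # device conductances (siemens)
v_in = np.array([0.2, 0.0, 0.1, 0.3])                 # row voltages (volts)

# Ohm's law per device plus Kirchhoff's current law per column:
# each column current is the dot product of the voltage vector with that column's conductances.
i_out = v_in @ G                                       # column currents (amperes)
print(i_out)                                           # one analog read = one matrix-vector multiply
```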

Memristive Stochastic Computing for Deep Learning Parameter Optimization

no code implementations • 11 Mar 2021 • Corey Lammie, Jason K. Eshraghian, Wei D. Lu, Mostafa Rahimi Azghadi

Stochastic Computing (SC) is a computing paradigm that allows for the low-cost and low-power computation of various arithmetic operations using stochastic bit streams and digital logic.
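
As a concrete reminder of how SC works, a value p in [0, 1] can be encoded as the probability that a bit in a stream equals 1, and multiplication of two independent streams then reduces to a bitwise AND. The short sketch below demonstrates this; the values and stream length are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, length=4096):
    """Encode a probability p in [0, 1] as a unary stochastic bit stream."""
    return (rng.random(length) < p).astype(np.uint8)

a, b = 0.6, 0.3
product_stream = to_stream(a) & to_stream(b)  # AND of independent streams multiplies their values
print(product_stream.mean())                  # ~0.18 = a * b, up to sampling noise
```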

Hierarchical Architectures in Reservoir Computing Systems

no code implementations • 14 May 2021 • John Moon, Wei D. Lu

Analogous to deep neural networks, stacking sub-reservoirs in series is an efficient way to enhance the nonlinearity of data transformation to high-dimensional space and expand the diversity of temporal information captured by the reservoir.
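
A hedged sketch of the series-stacking idea with two small echo state sub-reservoirs, where the second reservoir is driven by the first one's states instead of the raw input; sizes, spectral radius, and weight scales are illustrative, and the trained linear readout is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    """Fixed random input and recurrent weights, rescaled to the target spectral radius."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(u_seq, W_in, W):
    """Drive the reservoir with a sequence and return its state trajectory."""
    x = np.zeros(W.shape[0])
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.array(states)

# Two sub-reservoirs in series: the second is driven by the first one's states.
u_seq = rng.standard_normal((200, 1))
W_in1, W1 = make_reservoir(1, 50)
states1 = run_reservoir(u_seq, W_in1, W1)

W_in2, W2 = make_reservoir(50, 50)
states2 = run_reservoir(states1, W_in2, W2)

# Only a linear readout on the concatenated states would be trained (e.g., ridge regression).
features = np.hstack([states1, states2])
print(features.shape)  # (200, 100)
```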

Design Space Exploration of Dense and Sparse Mapping Schemes for RRAM Architectures

no code implementations • 18 Jan 2022 • Corey Lammie, Jason K. Eshraghian, Chenqi Li, Amirali Amirsoleimani, Roman Genov, Wei D. Lu, Mostafa Rahimi Azghadi

The impact of device and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifests as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies based on algorithmic features.
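
One simple way such device-level effects are often modeled (a generic illustration, not the paper's simulation framework) is to map each signed weight onto a differential pair of conductances and perturb them with programming noise before recovering the effective weights the DL algorithm actually sees:

```python
import numpy as np

rng = np.random.default_rng(2)

def map_to_conductances(W, g_min=1e-6, g_max=1e-4):
    """Map signed weights onto a differential conductance pair (G_pos, G_neg)."""
    scale = (g_max - g_min) / np.abs(W).max()
    G_pos = g_min + scale * np.clip(W, 0, None)
    G_neg = g_min + scale * np.clip(-W, 0, None)
    return G_pos, G_neg, scale

W = rng.standard_normal((64, 32)) * 0.1
G_pos, G_neg, scale = map_to_conductances(W)

# Device-level non-ideality modeled as lognormal programming noise on every conductance.
noisy = lambda G: G * rng.lognormal(mean=0.0, sigma=0.1, size=G.shape)
W_eff = (noisy(G_pos) - noisy(G_neg)) / scale

print(np.abs(W_eff - W).mean())  # mean weight perturbation the DL algorithm would see
```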

Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays

no code implementations • 26 Jun 2022 • Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Wei D. Lu, Sung-Mo Kang

We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs).

RN-Net: Reservoir Nodes-Enabled Neuromorphic Vision Sensing Network

no code implementations • 19 Mar 2023 • Sangmin Yoo, Eric Yeu-Jer Lee, Ziyu Wang, Xinxin Wang, Wei D. Lu

Event-based cameras are inspired by the sparse and asynchronous spike representation of the biological visual system.
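
For context on that representation, each event is typically a sparse (x, y, timestamp, polarity) tuple rather than a dense frame; the toy snippet below bins a synthetic event stream into a dense spike tensor purely to illustrate the data format, and is unrelated to RN-Net's reservoir nodes.

```python
import numpy as np

# Hypothetical event stream: (x, y, timestamp_us, polarity) tuples from a 32x32 sensor.
rng = np.random.default_rng(4)
n_events, H, W = 500, 32, 32
events = np.stack([
    rng.integers(0, W, n_events),                  # x coordinate
    rng.integers(0, H, n_events),                  # y coordinate
    np.sort(rng.integers(0, 100_000, n_events)),   # timestamps in microseconds
    rng.integers(0, 2, n_events),                  # polarity (0 = OFF, 1 = ON)
], axis=1)

# Bin the asynchronous events into T time slices of a dense (T, 2, H, W) spike tensor.
T = 10
frames = np.zeros((T, 2, H, W), dtype=np.uint8)
t_bin = (events[:, 2] * T // 100_000).clip(max=T - 1)
frames[t_bin, events[:, 3], events[:, 1], events[:, 0]] = 1
print(frames.sum(axis=(1, 2, 3)))  # spike count per time slice
```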

PowerGAN: A Machine Learning Approach for Power Side-Channel Attack on Compute-in-Memory Accelerators

no code implementations • 13 Apr 2023 • Ziyu Wang, Yuting Wu, Yongmo Park, Sangmin Yoo, Xinxin Wang, Jason K. Eshraghian, Wei D. Lu

Analog compute-in-memory (CIM) systems are promising for deep neural network (DNN) inference acceleration due to their energy efficiency and high throughput.

Bulk-Switching Memristor-based Compute-In-Memory Module for Deep Neural Network Training

no code implementations • 23 May 2023 • Yuting Wu, Qiwen Wang, Ziyu Wang, Xinxin Wang, Buvna Ayyagari, Siddarth Krishnan, Michael Chudzik, Wei D. Lu

The efficacy of training larger models is evaluated using realistic hardware parameters, and the results show that analog CIM modules can enable efficient mixed-precision DNN training with accuracy comparable to that of full-precision software-trained models.
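
A common mixed-precision pattern in memristor-based training (shown here as a generic sketch, not this paper's specific scheme) accumulates weight updates in high-precision digital memory and programs the analog devices only in whole multiples of their smallest achievable conductance change:

```python
import numpy as np

rng = np.random.default_rng(3)

DELTA_W_MIN = 0.01  # hypothetical smallest weight change one device-programming pulse can make

def mixed_precision_step(w_analog, grad_accum, grad, lr=0.05):
    """Accumulate updates digitally; program the analog devices only in multiples of DELTA_W_MIN."""
    grad_accum -= lr * grad
    n_steps = np.trunc(grad_accum / DELTA_W_MIN)  # whole device-level increments to apply
    w_analog += n_steps * DELTA_W_MIN             # coarse analog update
    grad_accum -= n_steps * DELTA_W_MIN           # keep the sub-pulse remainder in digital memory

w_analog = rng.standard_normal(100) * 0.1  # weights stored on analog devices
grad_accum = np.zeros_like(w_analog)       # high-precision digital accumulator

for _ in range(10):
    mixed_precision_step(w_analog, grad_accum, rng.standard_normal(100))
print(w_analog[:3], grad_accum[:3])
```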
