Search Results for author: Ruokai Yin

Found 9 papers, 4 papers with code

TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training

no code implementations • 15 Jan 2024 • DongHyun Lee, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks, owing to their sparse binary activations.

Tensor Decomposition

Rethinking Skip Connections in Spiking Neural Networks with Time-To-First-Spike Coding

no code implementations • 1 Dec 2023 • Youngeun Kim, Adar Kahana, Ruokai Yin, Yuhang Li, Panos Stinis, George Em Karniadakis, Priyadarshini Panda

In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding.

Are SNNs Truly Energy-efficient? – A Hardware Perspective

no code implementations • 6 Sep 2023 • Abhiroop Bhattacharjee, Ruokai Yin, Abhishek Moitra, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities, utilizing bio-inspired activation functions and sparse binary spike-data representations.

Benchmarking

Sharing Leaky-Integrate-and-Fire Neurons for Memory-Efficient Spiking Neural Networks

no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation.

Human Activity Recognition

MINT: Multiplier-less INTeger Quantization for Energy Efficient Spiking Neural Networks

1 code implementation • 16 May 2023 • Ruokai Yin, Yuhang Li, Abhishek Moitra, Priyadarshini Panda

We propose Multiplier-less INTeger (MINT) quantization, a uniform quantization scheme that efficiently compresses weights and membrane potentials in spiking neural networks (SNNs).

Quantization
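
The entry above summarizes MINT only at a high level. As a rough illustration of what uniform integer quantization of weights and membrane potentials involves, here is a minimal NumPy sketch; the function names, 4-bit setting, and per-tensor scaling are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch only: generic uniform integer quantization with a single
# per-tensor scale, in the spirit of compressing weights and membrane
# potentials to low-bit integers. Not the MINT codebase.
import numpy as np

def uniform_int_quantize(x: np.ndarray, num_bits: int = 4):
    """Map float values to signed integers in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax + 1e-12   # shared per-tensor scale
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(128, 128).astype(np.float32)   # toy weight tensor
    q, s = uniform_int_quantize(w, num_bits=4)
    err = np.abs(w - dequantize(q, s)).mean()
    print(f"mean absolute quantization error: {err:.4f}")
```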

Workload-Balanced Pruning for Sparse Spiking Neural Networks

no code implementations • 13 Feb 2023 • Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda

Though existing pruning methods can provide extremely high weight sparsity for deep SNNs, this high sparsity introduces a workload imbalance problem.

Exploring Lottery Ticket Hypothesis in Spiking Neural Networks

1 code implementation • 4 Jul 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda

To scale up a pruning technique towards deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve comparable performance to the dense networks.
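
For readers unfamiliar with the Lottery Ticket Hypothesis mentioned in this entry, the following is a minimal, generic sketch of magnitude pruning with a rewind to the original initialization, the standard recipe for finding winning tickets; the names and numbers are illustrative assumptions, not the SNN-specific procedure from this paper.

```python
# Illustrative sketch only: one round of magnitude pruning followed by a reset
# to the initial weights, i.e., the basic search for a "winning ticket".
import numpy as np

def magnitude_prune_mask(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a binary mask keeping the largest-magnitude weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return (np.abs(weights) >= threshold).astype(weights.dtype)

# Toy example: prune 80% of a trained weight matrix, then rewind to init.
init_weights = np.random.randn(256, 256).astype(np.float32)
trained_weights = init_weights + 0.1 * np.random.randn(256, 256).astype(np.float32)

mask = magnitude_prune_mask(trained_weights, sparsity=0.8)
winning_ticket = mask * init_weights   # rewound subnetwork, retrained from scratch
print(f"remaining weights: {mask.mean():.1%}")
```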

SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks

1 code implementation • 11 Apr 2022 • Ruokai Yin, Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

Based on SATA, we show quantitative analyses of the energy efficiency of SNN training and compare the training cost of SNNs and ANNs.

Total Energy
