Search Results for author: Gopalakrishnan Srinivasan

Found 12 papers, 4 papers with code

Complexity-aware Adaptive Training and Inference for Edge-Cloud Distributed AI Systems

no code implementations • 14 Sep 2021 • Yinghan Long, Indranil Chakraborty, Gopalakrishnan Srinivasan, Kaushik Roy

Only data with high probabilities of belonging to hard classes would be sent to the extension block for prediction.
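The routing rule in this summary lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of confidence-based routing between a base block and an extension block; `base_model`, `extension_block`, `hard_classes`, and the threshold `tau` are illustrative names, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def adaptive_predict(x, base_model, extension_block, hard_classes, tau=0.5):
    """Route only likely-hard inputs to the extension block (sketch)."""
    probs = F.softmax(base_model(x), dim=-1)         # base-block prediction
    hard_mass = probs[:, hard_classes].sum(dim=-1)   # prob. of any hard class
    route = hard_mass > tau                          # high-probability inputs only
    preds = probs.argmax(dim=-1)
    if route.any():
        ext_probs = F.softmax(extension_block(x[route]), dim=-1)
        preds[route] = ext_probs.argmax(dim=-1)
    return preds, route
```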

Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation

1 code implementation • ICLR 2020 • Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, Kaushik Roy

We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing-dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing.

Image Classification
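A minimal sketch of the two-stage hybrid recipe, assuming a PyTorch-style workflow: the weights and thresholds come from an ANN-to-SNN conversion, and spike-based backpropagation then fine-tunes the network through a surrogate gradient. The surrogate shape and training loop below are illustrative assumptions, not the authors' released code.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Spike nondifferentiability handled with a surrogate gradient (sketch)."""
    @staticmethod
    def forward(ctx, v, v_th):
        ctx.save_for_backward(v - v_th)
        return (v >= v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (delta,) = ctx.saved_tensors
        surrogate = (1.0 - delta.abs()).clamp(min=0.0)  # triangular surrogate
        return grad_out * surrogate, None

def hybrid_train(converted_snn, loader, optimizer, timesteps=100):
    # Stage 1: `converted_snn` already carries the converted weights/thresholds.
    # Stage 2: spike-based backpropagation from that initialization.
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in loader:
        out = converted_snn(x, timesteps=timesteps)  # accumulated output
        loss = loss_fn(out, y)
        optimizer.zero_grad()
        loss.backward()              # gradients flow through SurrogateSpike
        optimizer.step()
```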

Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks

no code implementations • 5 Mar 2020 • Sourjya Roy, Priyadarshini Panda, Gopalakrishnan Srinivasan, Anand Raghunathan

Our results for VGG-16 trained on CIFAR-10 show that L1 normalization provides the best performance among all the techniques explored in this work, with less than a 1% drop in accuracy after pruning 80% of the filters compared to the original network.
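As a sketch of the L1-norm criterion named above, the helper below scores each convolutional filter by the L1 norm of its weights and zeroes out the lowest-scoring fraction. The 80% ratio follows the summary; the function name and the point in training at which it is applied are assumptions.

```python
import torch

def prune_filters_l1(conv: torch.nn.Conv2d, ratio: float = 0.8):
    """Zero out the `ratio` fraction of filters with the smallest L1 norm."""
    with torch.no_grad():
        scores = conv.weight.abs().sum(dim=(1, 2, 3))  # one score per filter
        n_prune = int(ratio * scores.numel())
        idx = scores.argsort()[:n_prune]               # lowest-L1 filters
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0
    return idx
```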

Explicitly Trained Spiking Sparsity in Spiking Neural Networks with Backpropagation

no code implementations • 2 Mar 2020 • Jason M. Allred, Steven J. Spencer, Gopalakrishnan Srinivasan, Kaushik Roy

Spiking Neural Networks (SNNs) are being explored for their potential energy efficiency resulting from sparse, event-driven computations.
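The title indicates that spiking sparsity is trained explicitly through backpropagation; a minimal sketch, assuming a spike-activity penalty added to the task loss. The penalty form and the weight `lam` are assumptions for illustration, not taken from the abstract.

```python
import torch
import torch.nn.functional as F

def loss_with_spike_sparsity(logits, targets, layer_spike_counts, lam=1e-4):
    """Task loss plus an explicit penalty on total spiking activity (sketch)."""
    task_loss = F.cross_entropy(logits, targets)
    # Fewer spikes -> sparser, more event-driven (and cheaper) computation.
    sparsity_loss = sum(s.float().sum() for s in layer_spike_counts)
    return task_loss + lam * sparsity_loss
```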

RMP-SNN: Residual Membrane Potential Neuron for Enabling Deeper High-Accuracy and Low-Latency Spiking Neural Network

1 code implementation • CVPR 2020 • Bing Han, Gopalakrishnan Srinivasan, Kaushik Roy

We find that the performance degradation in the converted SNN stems from using "hard reset" spiking neurons, which are driven to a fixed reset potential once their membrane potential exceeds the firing threshold, leading to information loss during SNN inference.
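The contrast drawn above can be made concrete. Below is a sketch of one integrate-and-fire step under a hard reset versus a residual membrane potential ("soft reset") neuron that subtracts the threshold and retains the leftover charge; tensor shapes and names are illustrative.

```python
import torch

def hard_reset_step(v, inp, v_th=1.0, v_reset=0.0):
    v = v + inp
    spike = (v >= v_th).float()
    # Any charge above threshold is discarded -> information loss.
    v = torch.where(spike.bool(), torch.full_like(v, v_reset), v)
    return v, spike

def rmp_step(v, inp, v_th=1.0):
    v = v + inp
    spike = (v >= v_th).float()
    v = v - spike * v_th   # keep the residual membrane potential
    return v, spike
```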

Reinforcement Learning with Low-Complexity Liquid State Machines

1 code implementation • 4 Jun 2019 • Wachirawit Ponghiran, Gopalakrishnan Srinivasan, Kaushik Roy

We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters.

Atari Games • Q-Learning • +2
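A minimal NumPy sketch of the idea: a fixed, randomly and sparsely connected reservoir of spiking neurons (the liquid), with only a linear Q-value readout left as trainable parameters. Sizes, sparsity levels, and the leaky integrate-and-fire details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_in, n_act = 500, 80, 4
W_in = (rng.random((N, n_in)) < 0.10) * rng.normal(0, 1.0, (N, n_in))  # fixed
W_rec = (rng.random((N, N)) < 0.05) * rng.normal(0, 0.5, (N, N))       # fixed
W_out = np.zeros((n_act, N))                      # the only trained weights

def liquid_step(v, spikes, x, v_th=1.0, leak=0.9):
    """One step of the random recurrent reservoir (sketch)."""
    v = leak * v + W_in @ x + W_rec @ spikes
    new_spikes = (v >= v_th).astype(float)
    v = v * (1.0 - new_spikes)                    # reset fired neurons
    return v, new_spikes

def q_values(readout_trace):
    return W_out @ readout_trace                  # trained, e.g., by Q-learning
```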

ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing

no code implementations • 11 Feb 2019 • Gopalakrishnan Srinivasan, Kaushik Roy

In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs.

Computational Efficiency • Dimensionality Reduction
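The residual connections mentioned in the summary follow the familiar skip-path pattern; a sketch of one block is below. The binary/stochastic kernels and spiking dynamics that define ReStoCNet are omitted, and the module is an illustrative stand-in, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ResidualSpikingBlock(nn.Module):
    """Two stacked conv layers with a skip connection (sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, s):                  # `s`: feature/spike map, one step
        out = self.conv2(torch.relu(self.conv1(s)))
        return torch.relu(out + s)         # skip path aids deep feature learning
```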

Xcel-RAM: Accelerating Binary Neural Networks in High-Throughput SRAM Compute Arrays

no code implementations • 1 Jul 2018 • Amogh Agrawal, Akhilesh Jaiswal, Deboleena Roy, Bing Han, Gopalakrishnan Srinivasan, Aayush Ankit, Kaushik Roy

In this paper, we demonstrate how deep binary networks can be accelerated in modified von Neumann machines by enabling binary convolutions within the SRAM array.

Emerging Technologies
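The binary convolutions computed inside the SRAM array reduce, arithmetically, to an XNOR followed by a popcount. A minimal software rendering of that arithmetic over {+1, -1} values packed as bits is shown below; the array geometry and peripheral circuits of Xcel-RAM are not modeled.

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two length-n vectors over {+1,-1}, packed as bits."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)   # 1 where signs agree
    popcount = bin(xnor).count("1")
    return 2 * popcount - n

# a = [+1, -1, +1] (LSB first: 0b101), w = [-1, +1, +1] (0b110)
assert binary_dot(0b101, 0b110, 3) == -1         # (+1)(-1)+(-1)(+1)+(+1)(+1)
```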

Convolutional Spike Timing Dependent Plasticity based Feature Learning in Spiking Neural Networks

no code implementations • 10 Mar 2017 • Priyadarshini Panda, Gopalakrishnan Srinivasan, Kaushik Roy

Brain-inspired learning models attempt to mimic the cortical architecture and computations performed in the neurons and synapses constituting the human brain to achieve its efficiency in cognitive tasks.

Object Recognition

Proposal for a Leaky-Integrate-Fire Spiking Neuron based on Magneto-Electric Switching of Ferro-magnets

1 code implementation • 29 Sep 2016 • Akhilesh Jaiswal, Sourjya Roy, Gopalakrishnan Srinivasan, Kaushik Roy

The efficiency of the human brain in performing classification tasks has attracted considerable research interest in brain-inspired neuromorphic computing.

Significance Driven Hybrid 8T-6T SRAM for Energy-Efficient Synaptic Storage in Artificial Neural Networks

no code implementations • 27 Feb 2016 • Gopalakrishnan Srinivasan, Parami Wijesinghe, Syed Shakib Sarwar, Akhilesh Jaiswal, Kaushik Roy

Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled by 200mV from the nominal operating voltage (950mV) for practically no loss (less than 0.5%) in accuracy (22nm predictive technology).

General Classification
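The voltage-scaling result above reflects the significance-driven storage split implied by the title: the most significant bits of each synaptic weight sit in robust 8T cells, while the remaining bits sit in 6T cells that may flip at scaled voltage. The sketch below simulates that split; the 4/4 bit partition and the flip probability are assumptions for illustration.

```python
import numpy as np

def read_weights(w_q8, p_flip=1e-2, n_8t=4, rng=np.random.default_rng(0)):
    """Return 8-bit weights (np.uint8) read back at scaled voltage (sketch)."""
    out = w_q8.copy()
    for b in range(8 - n_8t):          # only the 6T-stored LSBs can flip
        flip = rng.random(w_q8.shape) < p_flip
        out ^= (flip.astype(np.uint8) << b)
    return out                         # MSBs in 8T cells stay intact
```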
