Search Results for author: Abhishek Moitra

Found 23 papers, 7 papers with code

ClipFormer: Key-Value Clipping of Transformers on Memristive Crossbars for Write Noise Mitigation

no code implementations • 4 Feb 2024 • Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

Specifically, we find pre-trained Vision Transformers (ViTs) to be vulnerable on crossbars due to the impact of write noise on the dynamically-generated Key (K) and Value (V) matrices in the attention layers, an effect not accounted for in prior studies.
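
As a rough illustration of the clipping idea in the title, the sketch below symmetrically clips the dynamically generated K and V matrices before they are written to a crossbar. This is a minimal sketch: the clip_ratio and tensor shapes are illustrative assumptions, not the paper's exact transform.

```python
import torch

def clip_kv(x: torch.Tensor, clip_ratio: float = 0.9) -> torch.Tensor:
    """Symmetrically clip a K or V matrix to a fraction of its peak magnitude,
    shrinking the dynamic range that crossbar write noise can corrupt."""
    bound = float(clip_ratio * x.abs().max())
    return x.clamp(-bound, bound)

# Toy attention shapes: (batch, heads, tokens, head_dim)
k, v = torch.randn(1, 8, 196, 64), torch.randn(1, 8, 196, 64)
k_clipped, v_clipped = clip_kv(k), clip_kv(v)
```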

TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training

no code implementations • 15 Jan 2024 • DongHyun Lee, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks, owing to their sparse, binary activations.

Tensor Decomposition
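
For readers unfamiliar with the underlying tool, here is a minimal sketch of Tensor-Train decomposition via sequential truncated SVDs (the standard TT-SVD procedure); the tensor shape and rank are illustrative and not tied to the paper's SNN layer factorization.

```python
import numpy as np

def tt_svd(tensor: np.ndarray, max_rank: int):
    """Factor a d-way tensor into a chain of 3-way TT cores via truncated SVDs."""
    shape = tensor.shape
    cores, rank, mat = [], 1, tensor
    for n in shape[:-1]:
        mat = mat.reshape(rank * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_next = min(max_rank, s.size)
        cores.append(u[:, :r_next].reshape(rank, n, r_next))
        mat = s[:r_next, None] * vt[:r_next]   # carry the remainder forward
        rank = r_next
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

w = np.random.randn(8, 8, 8, 8)                # e.g. a reshaped weight tensor
print([c.shape for c in tt_svd(w, max_rank=4)])  # 4 small cores replace one tensor
```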

Are SNNs Truly Energy-efficient? – A Hardware Perspective

no code implementations • 6 Sep 2023 • Abhiroop Bhattacharjee, Ruokai Yin, Abhishek Moitra, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities, utilizing bio-inspired activation functions and sparse binary spike-data representations.

Benchmarking

RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems

no code implementations • 5 Sep 2023 • Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

However, in prior detection works, the detector is attached to the classifier model, and the two work in tandem to perform adversarial detection; this incurs a high computational overhead that is unaffordable at the low-power edge.

Adversarial Robustness • Quantization

Examining the Role and Limits of Batchnorm Optimization to Mitigate Diverse Hardware-noise in In-memory Computing

no code implementations • 28 May 2023 • Abhiroop Bhattacharjee, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda

In-Memory Computing (IMC) platforms such as analog crossbars are gaining traction because they enable the acceleration of low-precision Deep Neural Networks (DNNs) with high area and compute efficiency.

Quantization
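
A minimal sketch of how such crossbar non-idealities are commonly modeled in software, with weights perturbed by multiplicative device noise; the Gaussian noise model and sigma value are assumptions for illustration only, not the paper's noise characterization.

```python
import numpy as np

def noisy_crossbar_mvm(w: np.ndarray, x: np.ndarray, sigma: float = 0.05):
    """Matrix-vector product with multiplicative per-device conductance noise."""
    g = w * (1.0 + sigma * np.random.randn(*w.shape))
    return g @ x

w = 0.1 * np.random.randn(64, 128)   # weights mapped onto a 64x128 crossbar
x = np.random.randn(128)
print(np.linalg.norm(noisy_crossbar_mvm(w, x) - w @ x))  # deviation due to noise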

Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing

1 code implementation • 27 May 2023 • Yuhang Li, Abhishek Moitra, Tamar Geller, Priyadarshini Panda

Although the efficiency of SNNs can be realized on the In-Memory Computing (IMC) architecture, we show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware.
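
Since cost scales linearly with timesteps, the natural lever is to spend fewer timesteps on easy inputs. The sketch below shows a generic confidence-threshold early exit over timesteps; the stopping rule and the stand-in snn_step function are hypothetical illustrations, not the paper's input-aware policy.

```python
import torch

def dynamic_timestep_inference(snn_step, x, max_T=8, threshold=0.9):
    """Accumulate per-timestep outputs; stop once the prediction is confident."""
    acc = torch.zeros(10)                       # 10 output classes (illustrative)
    for t in range(1, max_T + 1):
        acc = acc + snn_step(x)                 # one SNN timestep (hypothetical)
        if torch.softmax(acc / t, dim=-1).max() >= threshold:
            break                               # easy inputs exit early
    return acc / t, t

snn_step = lambda x: torch.randn(10)            # toy stand-in for a real SNN step
logits, used_T = dynamic_timestep_inference(snn_step, torch.randn(3, 32, 32))
print(used_T)
```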

Do We Really Need a Large Number of Visual Prompts?

no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Priyadarshini Panda

Due to increasing interest in adapting models on resource-constrained edge devices, parameter-efficient transfer learning has been widely explored.

Transfer Learning • Visual Prompt Tuning
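
For context, visual prompt tuning prepends a small set of learnable tokens to a frozen ViT's patch embeddings and trains only those prompts (plus a head). The sketch below shows that mechanism with an illustrative prompt count; the paper's question is precisely how many such prompts are actually needed.

```python
import torch
import torch.nn as nn

class PromptedEmbedding(nn.Module):
    """Prepend learnable prompt tokens to frozen patch embeddings."""
    def __init__(self, num_prompts: int = 10, dim: int = 768):
        super().__init__()
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, dim))

    def forward(self, patch_tokens):                  # (B, N, dim)
        b = patch_tokens.size(0)
        return torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)

tokens = torch.randn(2, 196, 768)                     # ViT-B/16-style patch tokens
print(PromptedEmbedding()(tokens).shape)              # torch.Size([2, 206, 768])
```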

Sharing Leaky-Integrate-and-Fire Neurons for Memory-Efficient Spiking Neural Networks

no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation.

Human Activity Recognition
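
The binary, event-driven behavior comes from the Leaky-Integrate-and-Fire (LIF) neuron; below is a minimal single-timestep LIF update with illustrative leak and threshold values. The paper's contribution, sharing these neuron states for memory efficiency, is not shown here.

```python
import torch

def lif_step(v, x, leak=0.9, threshold=1.0):
    """One LIF timestep: leak the membrane, integrate input, spike, soft-reset."""
    v = leak * v + x
    spike = (v >= threshold).float()    # binary spike output
    v = v - spike * threshold           # soft reset after firing
    return v, spike

v = torch.zeros(128)                    # membrane potentials for 128 neurons
for t in range(4):                      # a few timesteps of random input current
    v, s = lif_step(v, torch.randn(128))
```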

MINT: Multiplier-less INTeger Quantization for Energy Efficient Spiking Neural Networks

1 code implementation • 16 May 2023 • Ruokai Yin, Yuhang Li, Abhishek Moitra, Priyadarshini Panda

We propose Multiplier-less INTeger (MINT) quantization, a uniform quantization scheme that efficiently compresses weights and membrane potentials in spiking neural networks (SNNs).

Quantization
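
A minimal sketch of the uniform integer quantization MINT builds on: a tensor is stored as low-bit integers under a single shared scale. The bit-width and rounding choices here are simplifying assumptions; the paper's multiplier-less scheme additionally shares the scale between weights and membrane potentials, which is not shown.

```python
import torch

def uniform_quantize(x: torch.Tensor, bits: int = 4):
    """Store a tensor as low-bit integers plus one shared scaling factor."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max() / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q.to(torch.int8), float(scale)

w_q, w_scale = uniform_quantize(torch.randn(256, 256))
print(w_q.dtype, w_scale)   # int8 storage; dequantize as w_q * w_scale
```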

XPert: Peripheral Circuit & Neural Architecture Co-search for Area and Energy-efficient Xbar-based Computing

1 code implementation • 30 Mar 2023 • Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

The hardware-efficiency and accuracy of Deep Neural Networks (DNNs) implemented on In-memory Computing (IMC) architectures primarily depend on the DNN architecture and the peripheral circuit parameters.

XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars

no code implementations • 15 Feb 2023 • Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

This work proposes a two-phase algorithm-hardware co-optimization approach called XploreNAS that searches for hardware-efficient & adversarially robust neural architectures for non-ideal crossbar platforms.

Adversarial Robustness • Neural Architecture Search

Workload-Balanced Pruning for Sparse Spiking Neural Networks

no code implementations • 13 Feb 2023 • Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda

Though existing pruning methods can achieve extremely high weight sparsity for deep SNNs, this high sparsity introduces a workload-imbalance problem, as illustrated below.
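
To make the imbalance concrete, the sketch below counts the nonzero (i.e., actually computed) weights assigned to each processing element after unstructured pruning; the 4-PE row split is an illustrative mapping, not the paper's architecture.

```python
import numpy as np

w = np.random.randn(64, 64)
w[np.abs(w) < 1.2] = 0.0                          # heavy unstructured sparsity
per_pe = [np.count_nonzero(rows) for rows in np.split(w, 4, axis=0)]
print(per_pe, "-> the slowest PE bounds the latency:", max(per_pe))
```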

DeepCAM: A Fully CAM-based Inference Accelerator with Variable Hash Lengths for Energy-efficient Deep Neural Networks

no code implementations • 9 Feb 2023 • Duy-Thanh Nguyen, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

The first innovation is an approximate dot product, built on computations in Euclidean space, that replaces addition and multiplication with simple bit-wise operations.
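
As a generic illustration of trading arithmetic for bit-wise operations (a well-known sign-bit scheme, not DeepCAM's actual CAM/hash-based method), the sketch below approximates a dot product with sign bits, XNOR, and popcount.

```python
import numpy as np

def binary_dot(a: np.ndarray, b: np.ndarray) -> int:
    """Approximate a dot product using only sign bits: XNOR + popcount."""
    matches = np.count_nonzero((a >= 0) == (b >= 0))   # XNOR agreement count
    return 2 * matches - a.size                        # rescale to [-n, n]

a, b = np.random.randn(256), np.random.randn(256)
print(binary_dot(a, b), float(a @ b))   # sign-level estimate vs. exact value
```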

SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks

2 code implementations • 24 Oct 2022 • Abhishek Moitra, Abhiroop Bhattacharjee, Runcong Kuang, Gokul Krishnan, Yu Cao, Priyadarshini Panda

To this end, we propose SpikeSim, a tool that can perform realistic performance, energy, latency and area evaluation of IMC-mapped SNNs.

Benchmarking

Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars

no code implementations • 20 Jun 2022 • Abhiroop Bhattacharjee, Youngeun Kim, Abhishek Moitra, Priyadarshini Panda

Spiking Neural Networks (SNNs) have recently emerged as the low-power alternative to Artificial Neural Networks (ANNs) owing to their asynchronous, sparse, and binary information processing.

SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks

1 code implementation • 11 Apr 2022 • Ruokai Yin, Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

Based on SATA, we show quantitative analyses of the energy efficiency of SNN training and compare the training cost of SNNs and ANNs.

Total Energy

Adversarial Detection without Model Information

1 code implementation • 9 Feb 2022 • Abhishek Moitra, Youngeun Kim, Priyadarshini Panda

We train a standalone detector independent of the classifier model, with a layer-wise energy separation (LES) training to increase the separation between natural and adversarial energies.
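
A minimal sketch of the separation idea: a hinge loss that pushes the energy scores of natural and adversarial inputs apart by a margin. The score values and margin are illustrative, not the paper's exact layer-wise LES objective.

```python
import torch

def energy_separation_loss(e_nat, e_adv, margin=1.0):
    """Hinge loss pushing natural energies below adversarial ones by a margin."""
    return torch.relu(margin + e_nat - e_adv).mean()

e_nat = torch.randn(32)          # detector energy scores on natural inputs
e_adv = torch.randn(32) + 0.5    # detector energy scores on adversarial inputs
print(energy_separation_loss(e_nat, e_adv))
```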

Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks

no code implementations • 9 May 2021 • Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

In this paper, we show how the bit-errors in the 6T cells of hybrid 6T-8T memories minimize the adversarial perturbations in a DNN.

Adversarial Robustness

Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks

no code implementations • 12 Jan 2021 • Karina Vasquez, Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

Additionally, we find that integrating AD-based quantization with AD-based pruning (both conducted during training) yields up to ~198x and ~44x energy reductions for the VGG19 and ResNet18 architectures, respectively, on a PIM platform compared to baseline 16-bit precision, unpruned models.

Model Compression • Quantization

Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks

no code implementations • 5 Jan 2021 • Rachel Sterneck, Abhishek Moitra, Priyadarshini Panda

Based on prior works on detecting adversaries, we propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork.

Quantization

Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks

no code implementations • 26 Nov 2020 • Abhishek Moitra, Priyadarshini Panda

In this work, we examine both the advantages and the vulnerabilities of hybrid 6T-8T memories, showing how they can improve adversarial robustness as well as enable adversarial attacks on DNNs.

Adversarial Robustness
