Search Results for author: Isha Garg

Found 15 papers, 5 papers with code

Pruning for Improved ADC Efficiency in Crossbar-based Analog In-memory Accelerators

no code implementations19 Mar 2024 Timur Ibrayev, Isha Garg, Indranil Chakraborty, Kaushik Roy

Sparsity is then achieved by regularizing the variance of the $L_{0}$ norms of neighboring columns within the same crossbar.
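As an illustration of that regularizer, here is a minimal PyTorch sketch (not the authors' code). The hard $L_{0}$ norm is non-differentiable, so a sigmoid relaxation with a hypothetical threshold tau stands in for it, and xbar_size is an assumed mapping of weight columns onto crossbars:

    import torch

    def crossbar_variance_penalty(weight, xbar_size=64, tau=1e-3):
        # weight: (out_features, in_features); columns map onto crossbar columns.
        # Soft L0 per column: a sigmoid relaxation of "count of non-zero entries",
        # so the penalty stays differentiable (tau and the relaxation are assumptions).
        soft_l0 = torch.sigmoid((weight.abs() - tau) / tau).sum(dim=0)
        # Group neighboring columns into crossbars of xbar_size columns each.
        n_groups = soft_l0.numel() // xbar_size
        groups = soft_l0[: n_groups * xbar_size].view(n_groups, xbar_size)
        # Penalize the variance of the per-column L0 surrogates within each crossbar.
        return groups.var(dim=1).mean()

Added to the task loss with a weighting coefficient, this pushes columns within a crossbar toward similar occupancy, which is the structure the abstract's regularizer targets.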

Memorization Through the Lens of Curvature of Loss Function Around Samples

no code implementations11 Jul 2023 Isha Garg, Deepak Ravikumar, Kaushik Roy

Second, we inject corrupted samples which are memorized by the network, and show that these are learned with high curvature.

Memorization
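A minimal sketch of how per-sample loss curvature can be probed, assuming curvature is taken with respect to the input and estimated Hutchinson-style with random Rademacher probes and finite differences (the probe count and step size are illustrative, not the paper's settings):

    import torch

    def input_curvature(model, x, y, loss_fn, n_probes=4, eps=1e-3):
        # Estimate the curvature of the loss around a sample as an average of
        # v^T H v over random +/-1 directions v, with the Hessian-vector product
        # approximated by finite differences of input gradients.
        def input_grad(inp):
            inp = inp.detach().requires_grad_(True)
            return torch.autograd.grad(loss_fn(model(inp), y), inp)[0]

        score = 0.0
        for _ in range(n_probes):
            v = torch.randint_like(x, 0, 2) * 2.0 - 1.0  # Rademacher probe
            hv = (input_grad(x + eps * v) - input_grad(x - eps * v)) / (2 * eps)
            score += (hv * v).sum().item()
        return score / n_probes

Samples that score high under such an estimator are the candidates the abstract identifies as memorized.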

Samples With Low Loss Curvature Improve Data Efficiency

no code implementations CVPR 2023 Isha Garg, Kaushik Roy

SLo-curves identifies samples with low curvature as more data-efficient and trains on them with an additional regularizer that penalizes high curvature of the loss surface in their vicinity.
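One way to realize such a regularizer is double backpropagation: penalize how much the input gradient changes under a small perturbation, which is large exactly when the local loss surface is highly curved. The sketch below is an illustration under that assumption (lam, eps, and the single random perturbation direction are illustrative choices), not the paper's implementation:

    import torch

    def curvature_regularized_step(model, x, y, loss_fn, optimizer, lam=0.1, eps=1e-2):
        # One training step: task loss plus a penalty on the change of the input
        # gradient under a small random perturbation, a proxy for loss-surface
        # curvature in the vicinity of the samples.
        optimizer.zero_grad()
        x1 = x.clone().requires_grad_(True)
        task_loss = loss_fn(model(x1), y)
        g1 = torch.autograd.grad(task_loss, x1, create_graph=True)[0]
        v = torch.randn_like(x)
        x2 = (x + eps * v / v.norm()).detach().requires_grad_(True)
        g2 = torch.autograd.grad(loss_fn(model(x2), y), x2, create_graph=True)[0]
        (task_loss + lam * (g2 - g1).pow(2).mean()).backward()
        optimizer.step()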

Encoding Hierarchical Information in Neural Networks helps in Subpopulation Shift

no code implementations20 Dec 2021 Amitangshu Mukherjee, Isha Garg, Kaushik Roy

We show that learning in this structured hierarchical manner results in networks that are more robust against subpopulation shifts, with an improvement of up to 3% in accuracy and up to 11% in graphical distance over standard models on subpopulation shift benchmarks.

Image Classification

Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks

no code implementations26 Apr 2021 Sayeed Shafayet Chowdhury, Isha Garg, Kaushik Roy

Moreover, they require 8-14X less compute energy than their unpruned standard deep learning counterparts.

Model Compression · Quantization

Gradient Projection Memory for Continual Learning

1 code implementation ICLR 2021 Gobinda Saha, Isha Garg, Kaushik Roy

The ability to learn continually without forgetting past tasks is a desired attribute for artificial learning systems.

Attribute · Continual Learning · +1
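The mechanism the title names can be sketched as follows, assuming an orthonormal basis of important directions for past tasks has already been stored per layer (the paper builds these bases from layer representations; the shapes and the memory dict here are illustrative):

    import torch

    def project_out_memory(model, memory):
        # Call after loss.backward(): remove from each layer's gradient the
        # component lying in the stored subspace of past tasks, so updates for
        # the new task do not interfere with directions important to old ones.
        # memory maps a parameter name to an orthonormal basis M of shape (dim, k).
        for name, p in model.named_parameters():
            if p.grad is None or name not in memory:
                continue
            M = memory[name]
            g = p.grad.view(-1, M.shape[0])  # rows live in the dim-sized space
            p.grad = (g - (g @ M) @ M.t()).view_as(p.grad)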

DCT-SNN: Using DCT to Distribute Spatial Information over Time for Learning Low-Latency Spiking Neural Networks

1 code implementation ICCV 2021 (preprint 5 Oct 2020) Isha Garg, Sayeed Shafayet Chowdhury, Kaushik Roy

Notably, DCT-SNN performs inference with 2-14X reduced latency compared to other state-of-the-art SNNs, while achieving comparable accuracy to their standard deep learning counterparts.

Computational Efficiency
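A rough illustration of the idea in the title: compute the 2D DCT of an input image once and release a different band of frequency components at each timestep, low frequencies first, so spatial information arrives distributed over time. The band grouping below is an assumption, not the paper's exact encoding:

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_frames(img, timesteps):
        # img: 2D array. Split DCT coefficients into `timesteps` frequency bands
        # and reconstruct one input frame per band, lowest frequencies first.
        coeffs = dctn(img, norm="ortho")
        h, w = img.shape
        order = np.add.outer(np.arange(h), np.arange(w))  # crude frequency rank i+j
        bands = np.array_split(np.argsort(order, axis=None), timesteps)
        frames = []
        for idx in bands:
            mask = np.zeros(h * w, dtype=bool)
            mask[idx] = True
            frames.append(idctn(coeffs * mask.reshape(h, w), norm="ortho"))
        return frames  # one input frame per SNN timestep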

TREND: Transferability based Robust ENsemble Design

1 code implementation4 Aug 2020 Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy

In this work, we study the effect of network architecture, initialization, optimizer, input, weight and activation quantization on transferability of adversarial samples.

Adversarial Robustness · Quantization
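Transferability studies of this kind reduce to a simple measurement loop. The sketch below uses single-step FGSM purely for illustration; the attack, budget, and metric details in the paper may differ:

    import torch

    def transfer_rate(source, target, x, y, loss_fn, eps=8 / 255):
        # Craft adversarial examples on the source model with FGSM, then check
        # how often they also fool an independently trained target model.
        x_adv = x.clone().requires_grad_(True)
        loss_fn(source(x_adv), y).backward()
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
        with torch.no_grad():
            fooled_src = source(x_adv).argmax(1) != y
            fooled_tgt = target(x_adv).argmax(1) != y
        # Fraction of source-fooling samples that also fool the target.
        return (fooled_src & fooled_tgt).float().sum() / fooled_src.float().sum().clamp(min=1)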

SPACE: Structured Compression and Sharing of Representational Space for Continual Learning

1 code implementation23 Jan 2020 Gobinda Saha, Isha Garg, Aayush Ankit, Kaushik Roy

The minimal number of extra dimensions required to explain the current task is added to the Core space, and the remaining Residual is freed up for learning the next task.

Continual Learning
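The Core/Residual split described above can be sketched with a PCA step on layer activations; the variance threshold and shapes are illustrative assumptions, not the paper's settings:

    import numpy as np

    def grow_core_space(acts, core_basis, var_thresh=0.99):
        # acts: (n_samples, dim) activations for the current task; core_basis:
        # (dim, k) orthonormal basis frozen for past tasks. Add the fewest new
        # principal directions of the residual needed to reach var_thresh
        # explained variance; everything else stays free for future tasks.
        residual = acts - (acts @ core_basis) @ core_basis.T
        _, S, Vt = np.linalg.svd(residual, full_matrices=False)
        energy = np.cumsum(S**2) / np.sum(S**2)
        k_new = int(np.searchsorted(energy, var_thresh)) + 1
        return np.hstack([core_basis, Vt[:k_new].T])  # grown Core space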

Constructing Energy-efficient Mixed-precision Neural Networks through Principal Component Analysis for Edge Intelligence

1 code implementation4 Jun 2019 Indranil Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit, Kaushik Roy

The 'Internet of Things' has brought increased demand for AI-based edge computing in applications ranging from healthcare monitoring systems to autonomous vehicles.

Autonomous Vehicles · Dimensionality Reduction · +4

A Low Effort Approach to Structured CNN Design Using PCA

no code implementations15 Dec 2018 Isha Garg, Priyadarshini Panda, Kaushik Roy

We demonstrate the proposed methodology on AlexNet and VGG style networks on the CIFAR-10, CIFAR-100 and ImageNet datasets, and successfully achieve optimized architectures with reductions of up to 3.8X in the number of operations and 9X in the number of parameters, while trading off less than 1% accuracy.

Dimensionality Reduction · Model Compression
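The PCA-based sizing heuristic can be illustrated in a few lines: the number of principal components needed to explain most of a layer's activation variance suggests how many filters that layer actually needs. The 99% threshold below is an assumed choice:

    import numpy as np

    def significant_filters(fmaps, var_thresh=0.99):
        # fmaps: (n_samples, n_filters) flattened activations of one conv layer.
        # Run PCA on the filter activations; the component count reaching
        # var_thresh explained variance is a proposed width for the layer.
        centered = fmaps - fmaps.mean(axis=0)
        _, S, _ = np.linalg.svd(centered, full_matrices=False)
        energy = np.cumsum(S**2) / np.sum(S**2)
        return int(np.searchsorted(energy, var_thresh)) + 1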
