no code implementations • 5 Jul 2024 • Isha Garg, Christian Koguchi, Eshan Verma, Daniel Ulbricht
We find that most models utilize only a fraction of the available space.
no code implementations • 19 Mar 2024 • Timur Ibrayev, Isha Garg, Indranil Chakraborty, Kaushik Roy
Sparsity is then achieved by regularizing the variance of the $L_{0}$ norms of neighboring columns within the same crossbar.
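A minimal sketch of what such a regularizer could look like, assuming a PyTorch weight matrix whose columns map onto crossbar columns; the crossbar width, the soft relaxation of the non-differentiable $L_{0}$ norm, and the loss coefficient are all illustrative assumptions, not the paper's exact formulation:

```python
import torch

def crossbar_variance_penalty(weight: torch.Tensor, crossbar_cols: int = 64,
                              eps: float = 1e-3) -> torch.Tensor:
    """Penalize the variance of (relaxed) L0 norms of columns within each crossbar.

    weight: (rows, cols) layer weight whose columns are grouped into
    crossbars of `crossbar_cols` columns each. The hard L0 norm is
    non-differentiable, so a sigmoid gate on |w| > eps stands in for it.
    """
    rows, cols = weight.shape
    cols = cols - cols % crossbar_cols              # drop a ragged tail for simplicity
    w = weight[:, :cols]
    soft_l0 = torch.sigmoid((w.abs() - eps) / eps).sum(dim=0)   # per-column soft count
    groups = soft_l0.view(-1, crossbar_cols)        # (num_crossbars, crossbar_cols)
    return groups.var(dim=1).mean()                 # variance within each crossbar

# usage (coefficient is illustrative):
# loss = task_loss + 1e-4 * crossbar_variance_penalty(layer.weight)
```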
no code implementations • 11 Jul 2023 • Isha Garg, Deepak Ravikumar, Kaushik Roy
Second, we inject corrupted samples which are memorized by the network, and show that these are learned with high curvature.
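For context, a hedged sketch of one standard way to estimate the per-sample curvature of the loss with respect to the input — a Hutchinson-style finite-difference proxy with Rademacher probes; the perturbation scale and probe count are illustrative, not the paper's exact settings:

```python
import torch
import torch.nn.functional as F

def input_curvature(model, x, y, num_probes: int = 4, h: float = 1e-3):
    """Hutchinson-style proxy for the trace of the input Hessian, per sample.

    Uses finite differences of input gradients along Rademacher directions:
    v^T H v ~= v^T (grad(x + h*v) - grad(x)) / h. High values flag samples
    (e.g. injected corrupted ones) that the network learns with high curvature.
    """
    def input_grad(inp):
        inp = inp.detach().requires_grad_(True)
        loss = F.cross_entropy(model(inp), y, reduction="sum")  # per-sample grads
        return torch.autograd.grad(loss, inp)[0]

    g0 = input_grad(x)
    est = torch.zeros(x.size(0), device=x.device)
    for _ in range(num_probes):
        v = torch.randint(0, 2, x.shape, device=x.device).float() * 2 - 1
        gv = input_grad(x + h * v)
        est += ((gv - g0) * v).flatten(1).sum(dim=1) / h
    return est / num_probes
```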
no code implementations • CVPR 2023 • Isha Garg, Kaushik Roy
SLo-Curves identifies samples with low curvature as more data-efficient and trains on them with an additional regularizer that penalizes high curvature of the loss surface in their vicinity.
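A minimal sketch of what such a curvature-penalized objective could look like, assuming the batch is already the selected low-curvature coreset; the finite-difference proxy, step size h, and coefficient lam are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def slo_curves_loss(model, x, y, h: float = 1e-3, lam: float = 0.1):
    """Task loss plus a differentiable penalty on loss-surface curvature
    around the training samples (assumed to be a low-curvature coreset)."""
    task_loss = F.cross_entropy(model(x), y)

    # Curvature proxy: change of the input gradient along a random unit
    # direction, built with create_graph=True so it can be backpropagated.
    v = torch.randn_like(x)
    v = v / v.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
    x0 = x.detach().requires_grad_(True)
    g0 = torch.autograd.grad(F.cross_entropy(model(x0), y), x0, create_graph=True)[0]
    x1 = (x + h * v).detach().requires_grad_(True)
    g1 = torch.autograd.grad(F.cross_entropy(model(x1), y), x1, create_graph=True)[0]
    penalty = ((g1 - g0) / h).flatten(1).norm(dim=1).mean()

    return task_loss + lam * penalty
```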
no code implementations • 21 Jan 2022 • Isha Garg, Manish Nagaraj, Kaushik Roy
This is done via a central server that aggregates learning in the form of weight updates.
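For reference, this kind of server-side aggregation is typically FedAvg-style; a minimal sketch, assuming clients return full state_dicts and are weighted by local dataset size (the weighting is the standard choice, not necessarily the paper's):

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    """Average client state_dicts, weighted by local dataset size."""
    total = float(sum(client_sizes))
    merged = {}
    for key in client_states[0]:
        merged[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return merged

# One round: the server broadcasts `merged`, each client loads it, trains
# locally, and sends back its updated state_dict and dataset size.
```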
no code implementations • 20 Dec 2021 • Amitangshu Mukherjee, Isha Garg, Kaushik Roy
We show that learning in this structured hierarchical manner results in networks that are more robust to subpopulation shifts, with improvements of up to 3% in accuracy and up to 11% in graphical distance over standard models on subpopulation shift benchmarks.
no code implementations • 29 Sep 2021 • Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
This reduces the separability of in-distribution data from OoD data.
Out-of-Distribution Detection
no code implementations • 26 Apr 2021 • Sayeed Shafayet Chowdhury, Isha Garg, Kaushik Roy
Moreover, they require 8-14X less compute energy than their unpruned standard deep learning counterparts.
1 code implementation • ICLR 2021 • Gobinda Saha, Isha Garg, Kaushik Roy
The ability to learn continually without forgetting past tasks is a desired attribute of artificial learning systems.
no code implementations • ICCV 2021 • Isha Garg, Sayeed Shafayet Chowdhury, Kaushik Roy
Notably, DCT-SNN performs inference with 2-14X lower latency than other state-of-the-art SNNs, while achieving accuracy comparable to their standard deep learning counterparts.
no code implementations • 15 Dec 2020 • Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
We utilize mixup in two ways to implement Vicinal Risk Minimization.
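For context, mixup is the standard way Vicinal Risk Minimization is instantiated for images; a minimal sketch of the vanilla form (the abstract does not spell out the two specific uses, so only the basic construction is shown, with the usual Beta parameter alpha as an assumption):

```python
import numpy as np
import torch

def mixup_batch(x, y, alpha: float = 0.2):
    """Standard mixup (Zhang et al., 2018): convex combinations of pairs of
    inputs and labels -- the usual instantiation of VRM for images."""
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

# x_mix, y_a, y_b, lam = mixup_batch(x, y)
# loss = lam * F.cross_entropy(model(x_mix), y_a) \
#        + (1 - lam) * F.cross_entropy(model(x_mix), y_b)
```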
Out-of-Distribution Detection
1 code implementation • 5 Oct 2020 • Isha Garg, Sayeed Shafayet Chowdhury, Kaushik Roy
Notably, DCT-SNN performs inference with 2-14X lower latency than other state-of-the-art SNNs, while achieving accuracy comparable to their standard deep learning counterparts.
1 code implementation • 4 Aug 2020 • Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
In this work, we study the effect of network architecture, initialization, optimizer, and input, weight, and activation quantization on the transferability of adversarial samples.
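A minimal sketch of how such a transferability measurement is typically set up; FGSM is used here purely for brevity, and the epsilon budget and [0, 1] input range are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps: float = 8 / 255):
    """Craft adversarial examples on the source model (inputs in [0, 1])."""
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def transfer_rate(source, target, x, y, eps: float = 8 / 255):
    """Fraction of source-crafted adversarial samples that also fool the target."""
    x_adv = fgsm(source, x, y, eps)
    with torch.no_grad():
        fooled = target(x_adv).argmax(dim=1) != y
    return fooled.float().mean().item()
```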
1 code implementation • 23 Jan 2020 • Gobinda Saha, Isha Garg, Aayush Ankit, Kaushik Roy
A minimal number of extra dimensions required to explain the current task are added to the Core space and the remaining Residual is freed up for learning the next task.
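A minimal sketch of the Core/Residual split this describes, via SVD on a matrix of layer activations; the variance threshold is an illustrative assumption:

```python
import torch

def split_core_residual(acts: torch.Tensor, threshold: float = 0.99):
    """Split a layer's representational space into Core and Residual parts.

    acts: (num_samples, feature_dim) activations from the current task.
    Keeps the minimal number of principal directions whose cumulative
    explained variance reaches `threshold`; the remaining directions form
    the Residual, freed up for future tasks.
    """
    U, S, Vh = torch.linalg.svd(acts - acts.mean(dim=0), full_matrices=False)
    var = S.pow(2)
    cum = torch.cumsum(var, dim=0) / var.sum()
    k = int((cum < threshold).sum().item()) + 1    # minimal k reaching threshold
    return Vh[:k], Vh[k:]                          # Core basis, Residual basis
```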
1 code implementation • 4 Jun 2019 • Indranil Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit, Kaushik Roy
The 'Internet of Things' has brought increased demand for AI-based edge computing in applications ranging from healthcare monitoring systems to autonomous vehicles.
no code implementations • 15 Dec 2018 • Isha Garg, Priyadarshini Panda, Kaushik Roy
We demonstrate the proposed methodology on AlexNet- and VGG-style networks on the CIFAR-10, CIFAR-100, and ImageNet datasets, and achieve optimized architectures with reductions of up to 3.8X in the number of operations and 9X in the number of parameters, while trading off less than 1% accuracy.
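A minimal sketch of the kind of PCA-based width selection such an analysis implies; the 99.9% energy threshold is an illustrative assumption, not necessarily the paper's setting:

```python
import torch

def significant_filters(feature_maps: torch.Tensor, energy: float = 0.999) -> int:
    """Estimate how many filters a conv layer actually needs.

    feature_maps: (N, C, H, W) activations of the layer on a held-out set.
    Each spatial location is treated as a C-dimensional sample; PCA then
    gives the number of components retaining `energy` of the variance,
    i.e. a suggested width for the optimized layer.
    """
    x = feature_maps.permute(0, 2, 3, 1).reshape(-1, feature_maps.size(1))
    x = x - x.mean(dim=0)
    eigvals = torch.linalg.eigvalsh(x.T @ x / (x.size(0) - 1)).flip(0)  # descending
    cum = torch.cumsum(eigvals, dim=0) / eigvals.sum()
    return int((cum < energy).sum().item()) + 1
```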