no code implementations • 19 Mar 2024 • Timur Ibrayev, Isha Garg, Indranil Chakraborty, Kaushik Roy
Sparsity is then achieved by regularizing the variance of the $L_{0}$ norms of neighboring columns within the same crossbar.
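The regularizer described above can be sketched as follows. This is a minimal illustration, assuming a simple tiling of a 2-D weight matrix into crossbars of fixed column width (the paper's exact crossbar mapping, and any differentiable relaxation of the $L_0$ norm used for training, may differ):

```python
import numpy as np

def crossbar_l0_variance(weights, xbar_cols=4):
    """Sum over crossbar tiles of the variance of per-column L0 norms.

    `weights`: 2-D array whose columns are grouped into crossbars of
    `xbar_cols` columns each (a hypothetical tiling for illustration).
    """
    n_rows, n_cols = weights.shape
    assert n_cols % xbar_cols == 0, "columns must tile evenly into crossbars"
    penalty = 0.0
    for start in range(0, n_cols, xbar_cols):
        tile = weights[:, start:start + xbar_cols]
        l0 = np.count_nonzero(tile, axis=0).astype(float)  # L0 norm per column
        penalty += l0.var()  # variance across neighboring columns
    return penalty

W = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0, 0.0],
              [4.0, 0.0, 0.0, 0.0]])
# column L0 norms: [2, 0, 2, 0] -> variance = 1.0
print(crossbar_l0_variance(W, xbar_cols=4))  # -> 1.0
```

Driving this variance toward zero encourages columns within a crossbar to carry similar numbers of nonzero weights, which maps naturally onto structured sparsity at the crossbar level.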
no code implementations • 26 Dec 2023 • Tanvi Sharma, Mustafa Ali, Indranil Chakraborty, Kaushik Roy
The proposed work provides insights into what type of CiM to use, and when and where to optimally integrate it in the cache hierarchy for GEMM acceleration.
1 code implementation • 19 Sep 2021 • Chun Tao, Deboleena Roy, Indranil Chakraborty, Kaushik Roy
First, we study the noise stability of such networks on unperturbed inputs and observe that the internal activations of adversarially trained networks have a lower Signal-to-Noise Ratio (SNR) and are more sensitive to noise than those of vanilla networks.
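A minimal way to estimate such an activation SNR is sketched below. This is a toy proxy under assumed Gaussian input noise (the paper's precise SNR definition and measurement protocol may differ):

```python
import numpy as np

def activation_snr(act_fn, x, noise_std=0.1, n_trials=100, seed=0):
    """Estimate SNR (in dB) of a layer's activations under input noise.

    Signal power: energy of the clean activation. Noise power: mean
    energy of the deviation caused by Gaussian input perturbations.
    """
    rng = np.random.default_rng(seed)
    clean = act_fn(x)
    signal_power = np.mean(clean ** 2)
    noise_power = 0.0
    for _ in range(n_trials):
        noisy = act_fn(x + rng.normal(0.0, noise_std, size=x.shape))
        noise_power += np.mean((noisy - clean) ** 2)
    noise_power /= n_trials
    return 10.0 * np.log10(signal_power / noise_power)

relu = lambda z: np.maximum(z, 0.0)
x = np.linspace(-1.0, 1.0, 101)
snr_db = activation_snr(relu, x)
print(snr_db)  # positive dB for small noise_std
```

In this framing, a network whose layers report lower SNR amplifies input perturbations more, consistent with the sensitivity observation above.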
no code implementations • 14 Sep 2021 • Yinghan Long, Indranil Chakraborty, Gopalakrishnan Srinivasan, Kaushik Roy
Only data with a high probability of belonging to hard classes are sent to the extension block for prediction.
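The routing rule above can be sketched as a confidence check on the base block's softmax output. The threshold value and the exact routing policy here are illustrative assumptions, not the paper's stated mechanism:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def route_to_extension(base_logits, hard_classes, threshold=0.5):
    """Return indices of samples whose probability mass on the 'hard'
    classes exceeds `threshold`; only those are forwarded to the
    extension block, while the rest exit early at the base block."""
    probs = softmax(base_logits)
    hard_mass = probs[:, hard_classes].sum(axis=1)
    return np.where(hard_mass > threshold)[0]

logits = np.array([[4.0, 0.1, 0.1],   # confident on easy class 0
                   [0.1, 2.0, 1.9],   # mass concentrated on hard classes 1, 2
                   [3.0, 0.2, 0.1]])  # confident on easy class 0
sent = route_to_extension(logits, hard_classes=[1, 2])
print(sent)  # -> [1]
```

Because most samples exit at the base block, the (more expensive) extension block runs only on the fraction of inputs it is actually needed for.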
no code implementations • 23 Jun 2021 • Shubham Negi, Indranil Chakraborty, Aayush Ankit, Kaushik Roy
The hardware efficiency (energy, latency, and area) as well as the application accuracy (considering device and circuit non-idealities) of DNNs mapped to such hardware are co-dependent on network parameters such as kernel size, depth, etc.
no code implementations • 24 Nov 2020 • Siddhant Siddhant, Indranil Chakraborty, Sayan Kar
For other $\omega$ (with or without $J$), numerically obtained geodesics lead to results on displacement memory that appear to match qualitatively with those found from a deviation analysis.
General Relativity and Quantum Cosmology • High Energy Physics - Theory
no code implementations • 27 Aug 2020 • Deboleena Roy, Indranil Chakraborty, Timur Ibrayev, Kaushik Roy
The increasing computational demand of Deep Learning has propelled research in special-purpose inference accelerators based on emerging non-volatile memory (NVM) technologies.
no code implementations • 21 May 2020 • Yinghan Long, Indranil Chakraborty, Kaushik Roy
The proposed network can be deployed in a distributed manner, consisting of quantized layers and early exits at the edge and full-precision layers on the cloud.
no code implementations • 1 May 2020 • Indranil Chakraborty, Sayan Kar
For constant negative curvature, we find a permanent change in the separation of geodesics after the pulse has departed.
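The permanent separation change described above is the hallmark of displacement memory, and such analyses typically start from the standard geodesic deviation equation for the separation vector $\xi^\mu$ between neighboring geodesics with tangent $u^\nu$ (the general textbook form; the paper's specific metric and pulse profile determine the curvature term):

```latex
\frac{D^{2}\xi^{\mu}}{d\tau^{2}} = -R^{\mu}{}_{\nu\alpha\beta}\, u^{\nu}\,\xi^{\alpha}\, u^{\beta}
```

Memory corresponds to $\xi^\mu$ settling, after the pulse, to a constant offset (or relative velocity) different from its value before the pulse.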
General Relativity and Quantum Cosmology
no code implementations • 27 Mar 2020 • Mustafa Ali, Akhilesh Jaiswal, Sangamesh Kodge, Amogh Agrawal, Indranil Chakraborty, Kaushik Roy
'In-memory computing' is being widely explored as a novel computing paradigm to mitigate the well-known memory bottleneck.
no code implementations • 15 Mar 2020 • Indranil Chakraborty, Mustafa Fayez Ali, Dong Eun Kim, Aayush Ankit, Kaushik Roy
Further, using the functional simulator and GENIEx, we demonstrate that an analytical model can overestimate the degradation in classification accuracy by $\ge 10\%$ on CIFAR-100 and $3.7\%$ on ImageNet datasets compared to GENIEx.
Emerging Technologies
1 code implementation • 4 Jun 2019 • Indranil Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit, Kaushik Roy
The 'Internet of Things' has brought increased demand for AI-based edge computing in applications ranging from healthcare monitoring systems to autonomous vehicles.
no code implementations • 8 Feb 2019 • Priyadarshini Panda, Indranil Chakraborty, Kaushik Roy
Specifically, discretizing the input space (reducing the allowed pixel levels from 256 values, or 8-bit, to 4 values, or 2-bit) substantially improves the adversarial robustness of DLNs across a wide range of perturbations, with minimal loss in test accuracy.
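The discretization above amounts to a simple uniform quantization of pixel values. A minimal sketch, assuming integer pixels in [0, 255] and bin-floor quantization (the paper's exact binning scheme may differ):

```python
import numpy as np

def discretize(img, bits=2):
    """Quantize pixel values (assumed integers in [0, 255]) to 2**bits levels.

    With bits=2 the 256 input values collapse to 4 levels, the setting
    reported above as improving adversarial robustness.
    """
    levels = 2 ** bits
    step = 256 // levels                      # width of each quantization bin
    return np.clip(img // step, 0, levels - 1) * step

img = np.array([0, 63, 64, 128, 255], dtype=np.int64)
print(discretize(img, bits=2))  # -> [  0   0  64 128 192]
```

Intuitively, small adversarial perturbations that stay within a quantization bin are erased entirely, which is why coarse discretization can blunt a substantial range of attacks.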
no code implementations • 1 Feb 2019 • Indranil Chakraborty, Deboleena Roy, Aayush Ankit, Kaushik Roy
In this work, we propose extremely quantized hybrid network architectures with both binary and full-precision sections to emulate the classification performance of full-precision networks while ensuring significant energy efficiency and memory compression.
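A hybrid network of this kind can be sketched as a binarized section feeding a full-precision section. The binarization below uses an XNOR-Net-style sign-and-scale scheme as an illustrative assumption; the paper's exact binarization and layer placement may differ:

```python
import numpy as np

def binarize(w):
    """Binarize a weight tensor: sign(w) scaled by its mean magnitude
    (XNOR-Net-style; each weight becomes one of two values)."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def hybrid_forward(x, w_binary, w_full):
    """Toy hybrid: a binarized linear layer (cheap, 1-bit weights)
    followed by a full-precision linear layer, with ReLU in between."""
    h = np.maximum(x @ binarize(w_binary), 0.0)
    return h @ w_full

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4))
w1 = rng.normal(size=(4, 8))   # stored binarized: 1 bit + one scale
w2 = rng.normal(size=(8, 3))   # kept at full precision
out = hybrid_forward(x, w1, w2)
print(out.shape)  # -> (2, 3)
```

The memory saving comes from the binary section: each binarized weight needs 1 bit plus a shared scale, versus 32 bits at full precision, while the full-precision section preserves accuracy-critical capacity.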