Search Results for author: Indranil Chakraborty

Found 14 papers, 2 papers with code

Pruning for Improved ADC Efficiency in Crossbar-based Analog In-memory Accelerators

no code implementations19 Mar 2024 Timur Ibrayev, Isha Garg, Indranil Chakraborty, Kaushik Roy

Sparsity is then achieved by regularizing the variance of the $L_{0}$ norms of neighboring columns within the same crossbar.
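The regularizer described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the crossbar width (`xbar_cols`) is an assumed parameter, and a hard $L_0$ count is used even though training would need a differentiable surrogate.

```python
import numpy as np

def adc_variance_regularizer(weights, xbar_cols=4):
    # Per-column L0 norm = number of nonzero weights feeding that ADC column.
    # Penalizing the variance of these norms within each crossbar-width group
    # pushes neighboring columns toward similarly structured sparsity.
    reg = 0.0
    for start in range(0, weights.shape[1], xbar_cols):
        tile = weights[:, start:start + xbar_cols]
        l0 = np.count_nonzero(tile, axis=0).astype(float)
        reg += l0.var()
    return reg
```

Columns with equal numbers of nonzeros incur zero penalty; uneven tiles are penalized.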

WWW: What, When, Where to Compute-in-Memory

no code implementations26 Dec 2023 Tanvi Sharma, Mustafa Ali, Indranil Chakraborty, Kaushik Roy

The proposed work provides insights into what type of CiM to use, and when and where to optimally integrate it in the cache hierarchy for GEMM acceleration.

On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars

1 code implementation19 Sep 2021 Chun Tao, Deboleena Roy, Indranil Chakraborty, Kaushik Roy

First, we study the noise stability of such networks on unperturbed inputs and observe that internal activations of adversarially trained networks have a lower Signal-to-Noise Ratio (SNR) and are more sensitive to noise than those of vanilla networks.
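The SNR measurement referenced above can be sketched like this. The additive-Gaussian noise model and the value of `noise_sigma` are illustrative assumptions standing in for crossbar non-idealities, not the paper's exact noise model.

```python
import numpy as np

def activation_snr_db(activations, noise_sigma=0.1, seed=0):
    # SNR (in dB) of an activation tensor under additive Gaussian noise,
    # a simple stand-in for analog crossbar non-idealities.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_sigma, size=activations.shape)
    signal_power = float(np.mean(activations ** 2))
    noise_power = float(np.mean(noise ** 2))
    return 10.0 * np.log10(signal_power / noise_power)
```

Lower-magnitude internal activations yield a lower SNR for the same noise level, which is the effect the paper observes in adversarially trained networks.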

Complexity-aware Adaptive Training and Inference for Edge-Cloud Distributed AI Systems

no code implementations14 Sep 2021 Yinghan Long, Indranil Chakraborty, Gopalakrishnan Srinivasan, Kaushik Roy

Only data with high probabilities of belonging to hard classes would be sent to the extension block for prediction.
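The routing rule described above can be sketched as a threshold on the probability mass assigned to the hard classes. The threshold value and the hard/easy class split are assumptions for illustration.

```python
import numpy as np

def send_to_extension(probs, hard_classes, threshold=0.5):
    # Forward a sample to the extension block only when its probability
    # mass on the "hard" classes exceeds the (assumed) threshold.
    return float(np.sum(probs[hard_classes])) > threshold
```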

NAX: Co-Designing Neural Network and Hardware Architecture for Memristive Xbar based Computing Systems

no code implementations23 Jun 2021 Shubham Negi, Indranil Chakraborty, Aayush Ankit, Kaushik Roy

The hardware efficiency (energy, latency, and area) as well as the application accuracy (considering device and circuit non-idealities) of DNNs mapped to such hardware are co-dependent on network parameters such as kernel size and depth.

Tasks: Neural Architecture Search

Kundt geometries and memory effects in the Brans-Dicke theory of gravity

no code implementations24 Nov 2020 Siddhant Siddhant, Indranil Chakraborty, Sayan Kar

For other $\omega$ (in the presence of $J$ or without), numerically obtained geodesics lead to results on displacement memory which appear to match qualitatively with those found from a deviation analysis.
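The deviation analysis referenced above is the standard geodesic deviation (Jacobi) equation,

$$\frac{D^{2}\xi^{\mu}}{d\tau^{2}} = -R^{\mu}{}_{\nu\rho\sigma}\, u^{\nu}\, \xi^{\rho}\, u^{\sigma},$$

where $\xi^{\mu}$ is the separation vector between neighboring geodesics, $u^{\mu}$ their tangent, and $R^{\mu}{}_{\nu\rho\sigma}$ the Riemann tensor; memory shows up as a permanent change in $\xi^{\mu}$ after the wave pulse passes.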

Categories: General Relativity and Quantum Cosmology; High Energy Physics - Theory

On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks

no code implementations27 Aug 2020 Deboleena Roy, Indranil Chakraborty, Timur Ibrayev, Kaushik Roy

The increasing computational demand of Deep Learning has propelled research in special-purpose inference accelerators based on emerging non-volatile memory (NVM) technologies.

Tasks: Image Generation

Conditionally Deep Hybrid Neural Networks Across Edge and Cloud

no code implementations21 May 2020 Yinghan Long, Indranil Chakraborty, Kaushik Roy

The proposed network can be deployed in a distributed manner, consisting of quantized layers and early exits at the edge and full-precision layers on the cloud.
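The edge-cloud split described above can be sketched as an early-exit dispatch. The model interfaces, confidence criterion, and threshold are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def edge_cloud_infer(x, edge_model, cloud_model, conf_thresh=0.9):
    # Run the quantized edge model first; if its early-exit confidence is
    # high enough, answer locally, otherwise forward the intermediate
    # features to the full-precision cloud layers.
    feats, probs = edge_model(x)
    if float(np.max(probs)) >= conf_thresh:
        return int(np.argmax(probs)), "edge"
    cloud_probs = cloud_model(feats)
    return int(np.argmax(cloud_probs)), "cloud"
```

Easy inputs never leave the device, which is where the energy and latency savings come from.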

Tasks: Classification, Cloud Computing, +4 more

Memory effects in Kundt wave spacetimes

no code implementations1 May 2020 Indranil Chakraborty, Sayan Kar

For constant negative curvature, we find there is permanent change in the separation of geodesics after the pulse has departed.

Categories: General Relativity and Quantum Cosmology

IMAC: In-memory Multi-bit Multiplication and ACcumulation in 6T SRAM Array

no code implementations27 Mar 2020 Mustafa Ali, Akhilesh Jaiswal, Sangamesh Kodge, Amogh Agrawal, Indranil Chakraborty, Kaushik Roy

'In-memory computing' is being widely explored as a novel computing paradigm to mitigate the well-known memory bottleneck.

GENIEx: A Generalized Approach to Emulating Non-Ideality in Memristive Xbars using Neural Networks

no code implementations15 Mar 2020 Indranil Chakraborty, Mustafa Fayez Ali, Dong Eun Kim, Aayush Ankit, Kaushik Roy

Further, using the functional simulator and GENIEx, we demonstrate that an analytical model can overestimate the degradation in classification accuracy by $\ge 10\%$ on CIFAR-100 and $3.7\%$ on ImageNet datasets compared to GENIEx.

Categories: Emerging Technologies

Constructing Energy-efficient Mixed-precision Neural Networks through Principal Component Analysis for Edge Intelligence

1 code implementation4 Jun 2019 Indranil Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit, Kaushik Roy

The 'Internet of Things' has brought increased demand for AI-based edge computing in applications ranging from healthcare monitoring systems to autonomous vehicles.

Tasks: Autonomous Vehicles, Dimensionality Reduction, +4 more

Discretization based Solutions for Secure Machine Learning against Adversarial Attacks

no code implementations8 Feb 2019 Priyadarshini Panda, Indranil Chakraborty, Kaushik Roy

Specifically, discretizing the input space (reducing the allowed pixel levels from 256 values, or 8-bit, to 4 values, or 2-bit) substantially improves the adversarial robustness of DLNs over a wide range of perturbations, with minimal loss in test accuracy.
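The 2-bit input discretization described above can be sketched as follows. The exact level mapping used in the paper is not given here; this mid-rise mapping back to the 8-bit range is an assumption.

```python
import numpy as np

def discretize_pixels(img, bits=2):
    # Map 8-bit pixel values onto 2**bits evenly spaced levels, then
    # represent each bin by its midpoint in the original 0-255 range.
    levels = 2 ** bits
    step = 256 // levels                              # 64 for 2-bit
    idx = np.minimum(img.astype(np.int64) // step, levels - 1)
    return (idx * step + step // 2).astype(np.uint8)
```

Small adversarial perturbations that stay within a bin are erased by the quantization, which is the intuition behind the defense.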

Tasks: Adversarial Robustness, BIG-bench Machine Learning

Efficient Hybrid Network Architectures for Extremely Quantized Neural Networks Enabling Intelligence at the Edge

no code implementations1 Feb 2019 Indranil Chakraborty, Deboleena Roy, Aayush Ankit, Kaushik Roy

In this work, we propose extremely quantized hybrid network architectures with both binary and full-precision sections to emulate the classification performance of full-precision networks while ensuring significant energy efficiency and memory compression.
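The binary sections of such hybrid networks replace full-precision weights with sign values plus a scale. The XNOR-Net-style per-tensor scaling below is a common choice used for illustration; it is not necessarily the paper's exact binarization scheme.

```python
import numpy as np

def binarize_weights(w):
    # Sign binarization with a per-tensor scaling factor: each weight is
    # replaced by alpha * sign(w), where alpha preserves the mean magnitude.
    alpha = float(np.mean(np.abs(w)))
    return alpha * np.sign(w)
```

A hybrid network would apply this to selected layers while leaving accuracy-critical sections in full precision.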

Tasks: Edge-computing, Quantization
