Search Results for author: Nandan Kumar Jha

Found 10 papers, 2 papers with code

Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning

2 code implementations 26 Jul 2021 Karthik Garimella, Nandan Kumar Jha, Brandon Reagen

In this work, we ask: Is it feasible to substitute all ReLUs with low-degree polynomial activation functions for building deep, privacy-friendly neural networks?

Privacy Preserving, Privacy Preserving Deep Learning
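As a rough illustration of the substitution this paper studies, here is a minimal PyTorch sketch that swaps every ReLU in a model for a degree-2 polynomial activation; the module name and coefficients are arbitrary placeholders, not the paper's choices.

```python
import torch.nn as nn

class QuadAct(nn.Module):
    """Degree-2 polynomial activation a*x^2 + b*x (illustrative coefficients)."""
    def __init__(self, a=0.25, b=0.5):
        super().__init__()
        self.a, self.b = a, b

    def forward(self, x):
        return self.a * x * x + self.b * x

def replace_relus(model: nn.Module) -> nn.Module:
    """Recursively substitute every nn.ReLU with the polynomial activation."""
    for name, child in list(model.named_children()):
        if isinstance(child, nn.ReLU):
            setattr(model, name, QuadAct())
        else:
            replace_relus(child)
    return model

# Example: a tiny MLP whose ReLUs are swapped out.
net = replace_relus(nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)))
print(net)
```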

Circa: Stochastic ReLUs for Private Deep Learning

no code implementations NeurIPS 2021 Zahra Ghodsi, Nandan Kumar Jha, Brandon Reagen, Siddharth Garg

In this paper, we rethink the ReLU computation and propose optimizations for private inference (PI) tailored to properties of neural networks.
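For context, ReLU can be viewed as a sign test followed by a select step, and in hybrid private-inference protocols the comparison is typically the expensive cryptographic part. Below is a plain (non-cryptographic) NumPy sketch of that decomposition, purely illustrative and not Circa's protocol.

```python
import numpy as np

def relu_via_sign(x: np.ndarray) -> np.ndarray:
    """ReLU expressed as a sign test followed by a select (multiplex) step.
    Shown in the clear here for illustration only; in private inference the
    sign test is the costly step that protocols try to optimize."""
    positive = (np.sign(x) > 0).astype(x.dtype)  # 1 where x > 0, else 0
    return positive * x

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu_via_sign(x))  # [0.  0.  0.  0.5 2. ]
```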

DeepReDuce: ReLU Reduction for Fast Private Inference

no code implementations 2 Mar 2021 Nandan Kumar Jha, Zahra Ghodsi, Siddharth Garg, Brandon Reagen

This paper proposes DeepReDuce: a set of optimizations for the judicious removal of ReLUs to reduce private inference latency.
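A minimal PyTorch sketch of the underlying idea, removing ReLUs from some positions while keeping others; which ReLUs to keep is the crux of the method, and the selection below is a placeholder, not DeepReDuce's actual criterion.

```python
import torch.nn as nn

def drop_relus(seq: nn.Sequential, keep: set) -> nn.Sequential:
    """Replace every ReLU whose (0-based) ReLU index is not in `keep` with
    identity, mimicking the idea of pruning ReLUs to cut private-inference cost."""
    relu_idx = 0
    for i, layer in enumerate(seq):
        if isinstance(layer, nn.ReLU):
            if relu_idx not in keep:
                seq[i] = nn.Identity()
            relu_idx += 1
    return seq

net = nn.Sequential(nn.Linear(8, 8), nn.ReLU(),
                    nn.Linear(8, 8), nn.ReLU(),
                    nn.Linear(8, 4), nn.ReLU())
net = drop_relus(net, keep={0})                      # keep only the first ReLU
print(sum(isinstance(m, nn.ReLU) for m in net))      # 1
```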

Modeling Data Reuse in Deep Neural Networks by Taking Data-Types into Cognizance

no code implementations 6 Aug 2020 Nandan Kumar Jha, Sparsh Mittal

The prevalent data-reuse metric, arithmetic intensity, does not always correctly estimate the degree of data reuse in DNNs, since it gives equal importance to all data types.
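For reference, arithmetic intensity is FLOPs per byte of data moved. A back-of-the-envelope sketch for a convolution layer is shown below; note that it lumps weight, input, and output traffic together with equal weight, which is exactly the simplification the paper questions. The layer sizes are arbitrary examples.

```python
def conv_arithmetic_intensity(c_in, c_out, k, h, w, bytes_per_elem=4):
    """Classic arithmetic intensity (FLOPs per byte) for a KxK convolution
    producing an HxW feature map, ignoring stride/padding effects."""
    flops   = 2 * c_in * c_out * k * k * h * w           # multiply-accumulates
    weights = c_in * c_out * k * k * bytes_per_elem       # weight traffic
    inputs  = c_in * h * w * bytes_per_elem               # input-activation traffic
    outputs = c_out * h * w * bytes_per_elem              # output-activation traffic
    return flops / (weights + inputs + outputs)

print(conv_arithmetic_intensity(c_in=64, c_out=64, k=3, h=56, w=56))
```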

DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs

no code implementations 30 Jul 2020 Nandan Kumar Jha, Sparsh Mittal, Binod Kumar, Govardhan Mattela

The remarkable predictive performance of deep neural networks (DNNs) has led to their adoption in service domains of unprecedented scale and scope.

Adversarial Attack

E2GC: Energy-efficient Group Convolution in Deep Neural Networks

no code implementations 26 Jun 2020 Nandan Kumar Jha, Rajat Saini, Subhrajit Nag, Sparsh Mittal

We show that, at comparable computational complexity, DNNs with constant group size (E2GC) are more energy-efficient than DNNs with a fixed number of groups (FgGC).

Image Classification
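A small PyTorch sketch of the two configurations being compared: keeping the number of groups fixed (so group size grows with layer width) versus keeping the group size constant (so the number of groups grows). The layer widths and group size below are arbitrary examples, not the paper's models.

```python
import torch.nn as nn

channels = [32, 64, 128, 256]   # hypothetical per-layer widths

# Fixed number of groups: group *size* grows with layer width.
fixed_groups = [nn.Conv2d(c, c, 3, padding=1, groups=8) for c in channels]

# Constant group size: the *number* of groups grows with layer width.
group_size = 8
const_size = [nn.Conv2d(c, c, 3, padding=1, groups=c // group_size) for c in channels]

for conv_f, conv_c in zip(fixed_groups, const_size):
    print(conv_f.in_channels, "channels |",
          "fixed-groups size:", conv_f.in_channels // conv_f.groups, "|",
          "constant-size groups:", conv_c.groups)
```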

ULSAM: Ultra-Lightweight Subspace Attention Module for Compact Convolutional Neural Networks

1 code implementation 26 Jun 2020 Rajat Saini, Nandan Kumar Jha, Bedanta Das, Sparsh Mittal, C. Krishna Mohan

Our method of subspace attention is orthogonal and complementary to the existing state-of-the-art attention mechanisms used in vision models.

Fine-Grained Image Classification, General Classification
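A rough PyTorch sketch of the general idea of subspace attention, splitting channels into groups and learning one attention map per group; this only illustrates the concept, and the actual ULSAM block differs in its exact operations.

```python
import torch
import torch.nn as nn

class SubspaceAttention(nn.Module):
    """Illustrative subspace attention: split channels into g groups and
    re-weight each group with its own learned spatial attention map."""
    def __init__(self, channels: int, groups: int):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        sub = channels // groups
        # One attention map per subspace, derived from that subspace's features.
        self.attn = nn.ModuleList(
            nn.Sequential(nn.Conv2d(sub, 1, kernel_size=1), nn.Sigmoid())
            for _ in range(groups)
        )

    def forward(self, x):
        chunks = torch.chunk(x, self.groups, dim=1)
        out = [c * a(c) for c, a in zip(chunks, self.attn)]
        return torch.cat(out, dim=1)

m = SubspaceAttention(channels=64, groups=4)
print(m(torch.randn(1, 64, 8, 8)).shape)   # torch.Size([1, 64, 8, 8])
```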

DRACO: Co-Optimizing Hardware Utilization, and Performance of DNNs on Systolic Accelerator

no code implementations 26 Jun 2020 Nandan Kumar Jha, Shreyas Ravishankar, Sparsh Mittal, Arvind Kaushik, Dipan Mandal, Mahesh Chandra

The number of processing elements (PEs) in a fixed-sized systolic accelerator is well matched for large and compute-bound DNNs; whereas, memory-bound DNNs suffer from PE underutilization and fail to achieve peak performance and energy efficiency.

Computational Efficiency
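A roofline-style sketch of the compute-bound versus memory-bound distinction drawn here: a layer is memory-bound when its FLOPs-per-byte falls below the accelerator's machine balance, leaving processing elements idle while waiting on memory. The accelerator numbers below are hypothetical, not DRACO's target platform.

```python
def bound_type(flops, bytes_moved, peak_flops_per_s, bandwidth_bytes_per_s):
    """Classify a layer as compute- or memory-bound by comparing its
    arithmetic intensity to the accelerator's machine balance."""
    intensity = flops / bytes_moved
    balance = peak_flops_per_s / bandwidth_bytes_per_s
    return "compute-bound" if intensity >= balance else "memory-bound"

# Hypothetical accelerator: 32x32 PEs at 1 GHz (2 FLOPs/cycle/PE), 25 GB/s DRAM.
peak = 32 * 32 * 2 * 1e9
bw = 25e9
print(bound_type(flops=2e9, bytes_moved=4e8,
                 peak_flops_per_s=peak, bandwidth_bytes_per_s=bw))  # memory-bound
```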

The Ramifications of Making Deep Neural Networks Compact

no code implementations 26 Jun 2020 Nandan Kumar Jha, Sparsh Mittal, Govardhan Mattela

Reducing the number of parameters in DNNs increases the number of activations, which, in turn, increases the memory footprint.
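A back-of-the-envelope illustration of this trade-off: replacing a standard convolution with a depthwise-separable one cuts parameters sharply but writes an extra intermediate feature map, so activation memory grows. The layer dimensions are made up for illustration.

```python
def conv_params(c_in, c_out, k):            # standard KxK convolution
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):       # depthwise KxK + pointwise 1x1
    return c_in * k * k + c_in * c_out

c, k, h, w = 256, 3, 56, 56
print("parameters: ", conv_params(c, c, k), "vs", separable_params(c, c, k))
# Activation elements written by each block (ignoring the shared input map):
print("activations:", c * h * w, "vs", 2 * c * h * w)  # separable adds an intermediate map
```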
