Search Results for author: Anshul Nasery

Found 9 papers, 4 papers with code

Label Differential Privacy via Aggregation

no code implementations · 16 Oct 2023 · Anand Brahmbhatt, Rishi Saket, Shreyas Havaldar, Anshul Nasery, Aravindan Raghuveer

Further, the $\ell_2^2$-regressor which minimizes the loss on the aggregated dataset has a loss within a $\left(1 + o(1)\right)$-factor of the optimum on the original dataset w.p.
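As an illustrative aside, not the paper's exact mechanism — the bag size, the random-partition aggregation scheme, and the toy data are all assumptions here, and the paper's privacy accounting is omitted — fitting an $\ell_2^2$ regressor on bag-averaged labels can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear ground truth with additive noise (illustrative only).
n, d, bag_size = 1200, 5, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Aggregation step: randomly partition examples into bags and release
# only bag-averaged features and labels (individual labels stay hidden).
bags = rng.permutation(n).reshape(-1, bag_size)
X_agg = X[bags].mean(axis=1)
y_agg = y[bags].mean(axis=1)

# l2^2 regressor fit on the aggregated dataset via least squares.
w_hat, *_ = np.linalg.lstsq(X_agg, y_agg, rcond=None)

# Compare squared loss on the ORIGINAL dataset against the optimum.
w_opt, *_ = np.linalg.lstsq(X, y, rcond=None)
loss = lambda w: np.mean((X @ w - y) ** 2)
print(f"aggregated fit: {loss(w_hat):.4f}  optimum: {loss(w_opt):.4f}")
```

On this toy instance the regressor trained only on aggregates lands close to the optimum on the original data, in the spirit of the quoted $\left(1 + o(1)\right)$-factor guarantee.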


End-to-End Neural Network Compression via $\frac{\ell_1}{\ell_2}$ Regularized Latency Surrogates

no code implementations · 9 Jun 2023 · Anshul Nasery, Hardik Shah, Arun Sai Suggala, Prateek Jain

Our algorithm is versatile and can be used with many popular compression methods including pruning, low-rank factorization, and quantization.

Tasks: Neural Architecture Search · Neural Network Compression · +2
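As a hedged aside on why the $\frac{\ell_1}{\ell_2}$ ratio is a useful surrogate: gradient descent on the ratio drives most entries of a vector to zero, so applied to per-channel gates it approximates the count of active channels (and hence FLOPs/latency) differentiably. The gate vector, step size, and nonnegativity projection below are assumptions; the paper additionally couples such a regularizer with a task loss and a latency model.

```python
import numpy as np

# l1/l2 ratio: ranges from 1 (one nonzero entry) to sqrt(n) (all equal),
# so minimizing it promotes sparsity while being scale-invariant.
def l1_over_l2(g, eps=1e-12):
    return np.sum(np.abs(g)) / (np.linalg.norm(g) + eps)

def grad_l1_over_l2(g, eps=1e-12):
    l2 = np.linalg.norm(g) + eps
    return np.sign(g) / l2 - np.sum(np.abs(g)) * g / l2**3

rng = np.random.default_rng(0)
g = rng.uniform(0.5, 1.0, size=16)   # dense gates for 16 channels
start = l1_over_l2(g)

for _ in range(2000):
    # projected gradient step; gates are kept nonnegative
    g = np.maximum(g - 0.05 * grad_l1_over_l2(g), 0.0)

print(f"ratio: {start:.2f} -> {l1_over_l2(g):.2f}")
print("active channels:", int(np.sum(g > 1e-3)))
```

Most gates collapse to exactly zero under the projection, illustrating the sparsifying pressure the regularizer exerts.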

Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks

no code implementations · 4 Oct 2022 · Sravanti Addepalli, Anshul Nasery, R. Venkatesh Babu, Praneeth Netrapalli, Prateek Jain

To bridge the gap between these two lines of work, we first hypothesize and verify that while SB may not altogether preclude learning complex features, it amplifies simpler features over complex ones.

DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization

no code implementations · 19 Aug 2022 · Anshul Nasery, Sravanti Addepalli, Praneeth Netrapalli, Prateek Jain

We consider the problem of OOD generalization, where the goal is to train a model that performs well on test distributions that are different from the training distribution.


Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time

1 code implementation · NeurIPS 2021 · Anshul Nasery, Soumyadeep Thakur, Vihari Piratla, Abir De, Sunita Sarawagi

In several real world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually along time, leading to a drift between the train and test distributions.
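One plausible reading of the gradient-interpolation idea, sketched under loudly stated assumptions — the toy time-conditioned model, the central finite-difference time gradient, and the equal loss weighting are all illustrative choices, not the paper's exact formulation (which uses a neural predictor and autograd):

```python
import numpy as np

def model(w, x, t):
    # toy predictor whose weights depend linearly on the time input t
    w0, w1 = w
    return (w0 + w1 * t) * x

def gi_loss(w, x, t, y, delta, h=1e-4):
    f_t = model(w, x, t)
    # time derivative of the prediction, via central finite differences
    df_dt = (model(w, x, t + h) - model(w, x, t - h)) / (2 * h)
    # first-order Taylor step of the prediction to a nearby time t+delta
    f_interp = f_t + delta * df_dt
    # supervise both the current-time and the extrapolated prediction
    return np.mean((f_t - y) ** 2) + np.mean((f_interp - y) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=32)
t = rng.uniform(0.0, 1.0, size=32)
y = (1.0 + 0.5 * t) * x                  # ground truth drifts with time
print(gi_loss((1.0, 0.5), x, t, y, delta=0.1))
```

The extra Taylor-expansion term penalizes predictions that do not extrapolate smoothly along the time axis, which is the intuition behind training models to generalize to future (drifted) test distributions.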

Teaching CNNs to mimic Human Visual Cognitive Process & regularise Texture-Shape bias

1 code implementation · 25 Jun 2020 · Satyam Mohla, Anshul Nasery, Biplab Banerjee

Recent experiments in computer vision point to texture bias as the primary driver of the strong performance of models employing Convolutional Neural Networks (CNNs), conflicting with earlier work claiming that these networks identify objects by shape.

Tasks: Object Recognition
