Search Results for author: Gaurav Kumar Nayak

Found 16 papers, 4 papers with code

DAD++: Improved Data-free Test Time Adversarial Defense

2 code implementations • 10 Sep 2023 • Gaurav Kumar Nayak, Inder Khatri, Shubham Randive, Ruchit Rawal, Anirban Chakraborty

With the increasing deployment of deep neural networks in safety-critical applications such as self-driving cars, medical imaging, and anomaly detection, adversarial robustness has become a crucial concern for the reliability of these networks in real-world scenarios.

Adversarial Defense · Adversarial Robustness +4

DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks

no code implementations • 14 Aug 2023 • Indu Joshi, Priyank Upadhya, Gaurav Kumar Nayak, Peter Schüffler, Nassir Navab

Leveraging this, we introduce DISBELIEVE, a local model poisoning attack that creates malicious parameters or gradients whose distance to the benign clients' parameters or gradients is low, while their adverse effect on the global model's performance is high.
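As a rough illustration of this distance-constrained poisoning idea (not the paper's actual algorithm), a malicious client could pick a harmful update direction and then place its update within the spread of the benign updates, so that distance-based robust aggregation does not flag it. The PyTorch sketch below assumes access to flattened benign client updates and a precomputed harmful direction; all names are illustrative.

```python
import torch

def craft_poisoned_update(benign_updates, harmful_direction):
    # benign_updates: list of flattened (1-D) benign client updates
    # harmful_direction: 1-D tensor pointing toward degraded global performance
    stacked = torch.stack(benign_updates)            # (num_clients, dim)
    centroid = stacked.mean(dim=0)                   # centre of the benign updates
    # Stay within the benign spread: use the farthest benign client's distance
    # from the centroid as the allowed radius, so distance-based checks see the
    # malicious update as "just another client".
    radius = (stacked - centroid).norm(dim=1).max()
    direction = harmful_direction / harmful_direction.norm()
    # Illustrative malicious update: benign-looking location, harmful direction.
    return centroid + radius * direction
```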

Federated Learning · Model Poisoning +1

Adaptive Self-Distillation for Minimizing Client Drift in Heterogeneous Federated Learning

no code implementations • 31 May 2023 • M. Yashwanth, Gaurav Kumar Nayak, Arya Singh, Yogesh Simmhan, Anirban Chakraborty

In practice, there can often be substantial heterogeneity (e.g., class imbalance) across the local data distributions observed by each of these clients.

Federated Learning

Query Efficient Cross-Dataset Transferable Black-Box Attack on Action Recognition

no code implementations • 23 Nov 2022 • Rohit Gupta, Naveed Akhtar, Gaurav Kumar Nayak, Ajmal Mian, Mubarak Shah

By using a nearly disjoint dataset to train the substitute model, our method removes the requirement that the substitute model be trained using the same dataset as the target model, and leverages queries to the target model to retain the fooling rate benefits provided by query-based methods.

Action Recognition

Robust Few-shot Learning Without Using any Adversarial Samples

1 code implementation • 3 Nov 2022 • Gaurav Kumar Nayak, Ruchit Rawal, Inder Khatri, Anirban Chakraborty

These methods rely on the generation of adversarial samples in every episode of training, which further adds a computational burden.

Decision Making · Few-Shot Learning

Data-free Defense of Black Box Models Against Adversarial Attacks

1 code implementation • 3 Nov 2022 • Gaurav Kumar Nayak, Inder Khatri, Ruchit Rawal, Anirban Chakraborty

At test time, WNR combined with the trained regenerator network is prepended to the black-box network, resulting in a high boost in adversarial accuracy.
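Conceptually, this is an input-purification module bolted onto a frozen classifier. A minimal PyTorch sketch of that wiring is below; the `purifier` module stands in for the WNR + regenerator stage and is only an assumed placeholder, not the paper's architecture.

```python
import torch.nn as nn

class DefendedBlackBox(nn.Module):
    """Frozen black-box classifier with an input-purification front end.
    `purifier` is a placeholder for the WNR + regenerator stage."""
    def __init__(self, purifier: nn.Module, black_box: nn.Module):
        super().__init__()
        self.purifier = purifier
        self.black_box = black_box.eval()        # black-box weights untouched
        for p in self.black_box.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        # Purify the (possibly adversarial) input first, then query the
        # unchanged black-box model on the reconstructed image.
        return self.black_box(self.purifier(x))
```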

Adversarial Robustness

DE-CROP: Data-efficient Certified Robustness for Pretrained Classifiers

no code implementations • 17 Oct 2022 • Gaurav Kumar Nayak, Ruchit Rawal, Anirban Chakraborty

Existing works use this technique to provably secure a pretrained non-robust model by training a custom denoiser network on the entire training data.
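For context, the underlying technique is denoised randomized smoothing: a denoiser is prepended to the frozen pretrained classifier, and predictions are taken by majority vote over Gaussian-noised copies of the input. The sketch below shows only the prediction step, with assumed `denoiser` and `classifier` modules and illustrative noise level and sample count; the certification step is omitted.

```python
import torch

@torch.no_grad()
def smoothed_predict(denoiser, classifier, x, sigma=0.25, num_samples=100):
    # x: a single image batch of shape (1, C, H, W); sigma and num_samples
    # are illustrative values, not the paper's settings.
    num_classes = classifier(denoiser(x)).shape[1]
    votes = torch.zeros(num_classes)
    for _ in range(num_samples):
        noisy = x + sigma * torch.randn_like(x)        # Gaussian-perturbed copy
        pred = classifier(denoiser(noisy)).argmax(dim=1)
        votes[pred.item()] += 1                        # majority vote over copies
    return int(votes.argmax())                         # smoothed prediction
```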

Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems

no code implementations • 5 May 2022 • Gaurav Kumar Nayak, Ruchit Rawal, Rohit Lal, Himanshu Patil, Anirban Chakraborty

We, therefore, propose a holistic approach for quantifying the adversarial vulnerability of a sample by combining these different perspectives, i.e., the degree of the model's reliance on high-frequency features and the (conventional) sample distance to the decision boundary.
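A toy way to combine the two views into a single per-sample score, assuming a differentiable model and some `low_pass` filter callable (both placeholders), is sketched below; the exact measures and weighting used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def vulnerability_score(model, x, y, low_pass, alpha=0.5):
    # (i) high-frequency reliance: confidence lost when high frequencies
    #     are removed by the assumed `low_pass` callable;
    # (ii) boundary-proximity proxy: input-gradient norm of the loss
    #     (a larger gradient loosely indicates a closer decision boundary).
    x = x.clone().requires_grad_(True)
    logits = model(x)
    prob_full = F.softmax(logits, dim=1).gather(1, y.view(-1, 1)).squeeze(1).detach()

    loss = F.cross_entropy(logits, y)
    (grad,) = torch.autograd.grad(loss, x)
    boundary_proxy = grad.flatten(1).norm(dim=1)

    with torch.no_grad():
        prob_lf = F.softmax(model(low_pass(x)), dim=1).gather(1, y.view(-1, 1)).squeeze(1)
    hf_reliance = (prob_full - prob_lf).clamp(min=0)   # confidence lost without HF

    # Weighted combination of the two views (weight is illustrative).
    return alpha * hf_reliance + (1 - alpha) * boundary_proxy
```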

Adversarial Attack · Knowledge Distillation

Beyond Classification: Knowledge Distillation using Multi-Object Impressions

no code implementations • 27 Oct 2021 • Gaurav Kumar Nayak, Monish Keswani, Sharan Seshadri, Anirban Chakraborty

Knowledge Distillation (KD) utilizes training data as a transfer set to transfer knowledge from a complex network (Teacher) to a smaller network (Student).
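For reference, the standard distillation objective evaluated on such a transfer set is a temperature-softened KL term between Teacher and Student outputs plus the usual cross-entropy on labels (Hinton et al.). The sketch below uses common default hyperparameters, not values specific to this paper.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # KL divergence between temperature-softened teacher and student
    # distributions (scaled by T^2), mixed with cross-entropy on labels.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    distill = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * ce
```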

Classification · Knowledge Distillation +3

Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation

no code implementations • 18 Nov 2020 • Gaurav Kumar Nayak, Konda Reddy Mopuri, Anirban Chakraborty

In such scenarios, existing approaches either iteratively compose a synthetic set representative of the original training dataset, one sample at a time, or learn a generative model to compose such a transfer set.

Data-free Knowledge Distillation · Transfer Learning

DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier

no code implementations • 27 Dec 2019 • Sravanti Addepalli, Gaurav Kumar Nayak, Anirban Chakraborty, R. Venkatesh Babu

We use the available data, which may be an imbalanced subset of the original training dataset or a related-domain dataset, to retrieve representative samples from a trained classifier, using a novel Data-enriching GAN (DeGAN) framework.

Data-free Knowledge Distillation · Incremental Learning +1

Zero-Shot Knowledge Distillation in Deep Networks

1 code implementation • 20 May 2019 • Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty

Without even using any meta-data, we synthesize the Data Impressions from the complex Teacher model and utilize these as surrogates for the original training data samples to transfer its learning to the Student via knowledge distillation.
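A rough sketch of how a single Data Impression could be synthesized: start from noise and optimize the input so that the Teacher's temperature-softened softmax matches a sampled target class distribution. Here the target distribution is simply assumed to be given (in the paper it is sampled from Dirichlet distributions modelled on the Teacher's classifier weights), and the step count, learning rate, and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def synthesize_data_impression(teacher, target_softmax, input_shape,
                               steps=500, lr=0.01, T=20.0):
    # target_softmax: (1, num_classes) probability vector sampled for a chosen class.
    teacher.eval()
    x = torch.randn(1, *input_shape, requires_grad=True)   # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Match the Teacher's softened output to the sampled target distribution.
        log_out = F.log_softmax(teacher(x) / T, dim=1)
        loss = F.kl_div(log_out, target_softmax, reduction="batchmean")
        loss.backward()
        opt.step()
    return x.detach()        # one synthetic surrogate sample ("Data Impression")
```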

Knowledge Distillation
