2 code implementations • 10 Sep 2023 • Gaurav Kumar Nayak, Inder Khatri, Shubham Randive, Ruchit Rawal, Anirban Chakraborty
With the increasing deployment of deep neural networks in safety-critical applications such as self-driving cars, medical imaging, and anomaly detection, adversarial robustness has become a crucial concern for the reliability of these networks in real-world scenarios.
no code implementations • 14 Aug 2023 • Indu Joshi, Priyank Upadhya, Gaurav Kumar Nayak, Peter Schüffler, Nassir Navab
Leveraging this, we introduce DISBELIEVE, a local model poisoning attack that creates malicious parameters or gradients whose distance to the benign clients' parameters or gradients is low, but whose adverse effect on the global model's performance is high.
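The abstract only sketches the constraint (stay close to benign updates, yet harm the global model); a minimal numpy sketch of that idea, assuming a simple distance-based detector, might look as follows. The function name, the step-search loop, and the use of the maximum pairwise benign distance as the "trust radius" are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def poisoned_update(benign_updates, step=0.05, n_iter=100):
    """Sketch of a local model poisoning attack in the spirit of DISBELIEVE:
    push a malicious update in the harmful (negated-mean) direction while
    keeping its distance to every benign update within the largest pairwise
    distance among benign updates, so a distance-based anomaly detector
    does not flag it. Illustrative only, not the paper's exact method."""
    benign = np.stack(benign_updates)          # shape: (n_clients, dim)
    mean = benign.mean(axis=0)
    # Largest pairwise distance among benign updates = assumed "trust radius".
    dists = np.linalg.norm(benign[:, None, :] - benign[None, :, :], axis=-1)
    radius = dists.max()
    # Move from the benign mean opposite to the mean update direction
    # until any benign client would lie farther away than the radius.
    direction = -mean / (np.linalg.norm(mean) + 1e-12)
    malicious = mean.copy()
    for _ in range(n_iter):
        candidate = malicious + step * direction
        if np.linalg.norm(candidate - benign, axis=1).max() > radius:
            break
        malicious = candidate
    return malicious
```

The returned update opposes the aggregate learning signal while remaining inside the spread of honest clients, which is the tension the attack exploits.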
no code implementations • 31 May 2023 • M. Yashwanth, Gaurav Kumar Nayak, Arya Singh, Yogesh Simmhan, Anirban Chakraborty
In practice, there can often be substantial heterogeneity (e.g., class imbalance) across the local data distributions observed by each of these clients.
no code implementations • 23 Nov 2022 • Rohit Gupta, Naveed Akhtar, Gaurav Kumar Nayak, Ajmal Mian, Mubarak Shah
By training the substitute model on a nearly disjoint dataset, our method removes the requirement that the substitute model be trained on the same dataset as the target model, while still leveraging queries to the target model to retain the fooling-rate benefits of query-based methods.
1 code implementation • 3 Nov 2022 • Gaurav Kumar Nayak, Ruchit Rawal, Inder Khatri, Anirban Chakraborty
These methods rely on the generation of adversarial samples in every episode of training, which further adds a computational burden.
1 code implementation • 3 Nov 2022 • Gaurav Kumar Nayak, Inder Khatri, Ruchit Rawal, Anirban Chakraborty
At test time, the WNR combined with the trained regenerator network is prepended to the black-box network, resulting in a significant boost in adversarial accuracy.
no code implementations • 17 Oct 2022 • Gaurav Kumar Nayak, Ruchit Rawal, Anirban Chakraborty
Existing works use this technique to provably secure a pretrained non-robust model by training a custom denoiser network on the entire training data.
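The certification technique referred to here is randomized smoothing; a minimal sketch of its prediction step is below. The paper's denoiser (which would be applied to each noisy copy before the base classifier) and the certified-radius computation are omitted, and the function name is an assumption for illustration.

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n=1000, seed=0):
    """Randomized-smoothing prediction: classify many Gaussian-noised
    copies of x and return the majority class. In denoiser-based variants,
    a denoiser network would be applied to each noisy copy before the
    (non-robust) base classifier. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        noisy = x + sigma * rng.normal(size=x.shape)
        label = classifier(noisy)
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)
```

The majority vote over noisy copies is what makes the smoothed classifier's prediction provably stable under small input perturbations.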
no code implementations • 5 May 2022 • Gaurav Kumar Nayak, Ruchit Rawal, Rohit Lal, Himanshu Patil, Anirban Chakraborty
We therefore propose a holistic approach for quantifying the adversarial vulnerability of a sample by combining these different perspectives, i.e., the degree of the model's reliance on high-frequency features and the (conventional) distance of the sample to the decision boundary.
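One way to make the combination concrete is sketched below: high-frequency reliance is proxied by the confidence drop after low-pass filtering the input, and closeness to the boundary by an inverse-margin term. The weighting `alpha`, the Fourier low-pass proxy, and the combination formula are illustrative assumptions, not the paper's exact measure.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def low_pass(x, cutoff=0.25):
    """Remove high-frequency content of a 2-D image with a circular
    low-pass mask in (shifted) Fourier space."""
    f = np.fft.fftshift(np.fft.fft2(x))
    h, w = x.shape
    yy, xx = np.ogrid[:h, :w]
    mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= (cutoff * min(h, w)) ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def vulnerability_score(logits_fn, x, margin, alpha=0.5):
    """Combine (a) the confidence drop when high frequencies are removed
    and (b) a conventional distance-to-boundary margin into one score;
    higher = more vulnerable. The combination is an illustrative choice."""
    p_full = softmax(logits_fn(x))
    p_low = softmax(logits_fn(low_pass(x)))
    k = int(p_full.argmax())
    hf_reliance = max(0.0, float(p_full[k] - p_low[k]))  # in [0, 1]
    closeness = 1.0 / (1.0 + margin)                     # small margin -> near boundary
    return alpha * hf_reliance + (1 - alpha) * closeness
```

A sample scores high either because the model leans on fragile high-frequency cues or because it sits close to the decision boundary, capturing both perspectives in one number.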
no code implementations • 4 Apr 2022 • Gaurav Kumar Nayak, Ruchit Rawal, Anirban Chakraborty
Deep models are highly susceptible to adversarial attacks.
no code implementations • 27 Oct 2021 • Gaurav Kumar Nayak, Monish Keswani, Sharan Seshadri, Anirban Chakraborty
Knowledge Distillation (KD) utilizes training data as a transfer set to transfer knowledge from a complex network (Teacher) to a smaller network (Student).
no code implementations • 26 Oct 2021 • Gaurav Kumar Nayak, Het Shah, Anirban Chakraborty
Thus, in this work, we propose a novel problem of "Incremental Learning for Animal Pose Estimation".
no code implementations • 15 Jan 2021 • Gaurav Kumar Nayak, Konda Reddy Mopuri, Saksham Jain, Anirban Chakraborty
We dub them "Data Impressions", which act as proxies to the training data and can be used to realize a variety of tasks.
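The core mechanism behind Data Impressions is optimizing inputs so that the trained model's outputs match chosen soft labels. A toy sketch with a linear-softmax "teacher" (so the gradient is available in closed form) is shown below; in the actual work the teacher is a deep network and the target soft labels are sampled from a Dirichlet reflecting the teacher's class similarities, whereas here the target is simply passed in. All names are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def synthesize_data_impression(W, b, target_soft_label, dim,
                               lr=0.1, steps=5000):
    """Sketch: optimise a random input x so that a toy linear-softmax
    teacher (weights W, bias b) produces the given target soft label.
    For this linear teacher the cross-entropy gradient w.r.t. x is
    W.T @ (p - target), used directly below. Illustrative only."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=dim)
    for _ in range(steps):
        p = softmax(W @ x + b)
        x -= lr * W.T @ (p - target_soft_label)
    return x
```

The resulting x elicits the desired output distribution from the teacher without any access to real training samples, which is exactly the role a Data Impression plays as a transfer-set proxy.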
no code implementations • 18 Nov 2020 • Gaurav Kumar Nayak, Konda Reddy Mopuri, Anirban Chakraborty
In such scenarios, existing approaches either iteratively compose a synthetic set representative of the original training dataset, one sample at a time, or learn a generative model to compose such a transfer set.
no code implementations • 3 Aug 2020 • Gaurav Kumar Nayak, Saksham Jain, R. Venkatesh Babu, Anirban Chakraborty
In the emerging commercial space industry, there is a drastic increase in access to low-cost satellite imagery.
no code implementations • 27 Dec 2019 • Sravanti Addepalli, Gaurav Kumar Nayak, Anirban Chakraborty, R. Venkatesh Babu
We use the available data, that may be an imbalanced subset of the original training dataset, or a related domain dataset, to retrieve representative samples from a trained classifier, using a novel Data-enriching GAN (DeGAN) framework.
1 code implementation • 20 May 2019 • Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty
Without even using any meta-data, we synthesize the Data Impressions from the complex Teacher model and utilize these as surrogates for the original training data samples to transfer its learning to the Student via knowledge distillation.