no code implementations • 23 Feb 2024 • Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan, Priyatham Kattakinda, Atoosa Chegini, Soheil Feizi
Through human evaluations, we find that our untargeted attack causes Vicuna-7B-v1.5 to produce ~15% more incorrect outputs when compared to LM outputs in the absence of our attack.
1 code implementation • 10 Jun 2023 • Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu
Advances in adversarial defenses have led to a significant improvement in the robustness of Deep Neural Networks.
1 code implementation • 18 Oct 2022 • Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu
The presence of images that flip Oracle predictions and those that do not makes this a challenging setting for adversarial robustness.
1 code implementation • NeurIPS 2021 • Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, Venkatesh Babu R
The vulnerability of Deep Neural Networks to adversarial attacks has spurred immense interest towards improving their robustness.
no code implementations • ICML Workshop AML 2021 • Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, Venkatesh Babu Radhakrishnan
The presence of images that flip Oracle predictions and those that do not makes this a challenging setting for adversarial robustness.
1 code implementation • NeurIPS 2020 • Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, R. Venkatesh Babu
Further, we propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses by utilizing the proposed relaxation term for both attack generation and training.
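The idea of using a relaxation term to guide single-step attack generation can be sketched on a toy model. The example below is a minimal illustration, not the paper's exact formulation: it uses a hypothetical logistic-regression "network", a random start inside the epsilon-ball (as in other single-step attacks), and an assumed l2 relaxation term `lam * (p - p_clean)**2` that penalizes deviation of the perturbed prediction from the clean one while the cross-entropy loss is maximized in one signed-gradient step.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def guided_single_step_attack(x, y, w, b, eps, lam, rng):
    """One signed-gradient step guided by a relaxation term (sketch).

    Combined objective (maximized w.r.t. the input):
        BCE(f(x'), y) + lam * (f(x') - f(x))**2
    where f is a toy logistic-regression model. `lam` and the random
    start are illustrative assumptions, not the paper's exact recipe.
    """
    p_clean = sigmoid(x @ w + b)
    # Random start inside the eps-ball, so the relaxation term has a
    # nonzero gradient (it vanishes exactly at x' = x).
    x0 = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
    p = sigmoid(x0 @ w + b)
    # Gradient of the combined objective w.r.t. the logit:
    # BCE term gives (p - y); relaxation term gives its chain-rule factor.
    d_logit = (p - y) + 2.0 * lam * (p - p_clean) * p * (1.0 - p)
    x_adv = x0 + eps * np.sign(d_logit * w)
    # Project back into the eps-ball around x and the valid pixel range.
    x_adv = np.clip(x_adv, x - eps, x + eps)
    return np.clip(x_adv, 0.0, 1.0)
```

In a training loop, the same relaxation term would also be added to the defense loss, which is the "both attack generation and training" aspect the snippet above refers to.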
1 code implementation • CVPR 2020 • Sravanti Addepalli, Vivek B. S., Arya Baburaj, Gaurang Sriramanan, R. Venkatesh Babu
In this work, we attempt to address this problem by training networks to form coarse impressions based on the information in the higher bit planes, and to use the lower bit planes only to refine their predictions.
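The coarse/fine split described above rests on standard bit-plane decomposition of 8-bit images. The helper below is a minimal sketch of that decomposition (the function name and the choice of `k` high planes are illustrative, not from the paper): it masks out the top-k bit planes as the "coarse" image and the remaining low planes as the "fine" residual.

```python
import numpy as np

def split_bit_planes(img, k):
    """Split an 8-bit image into its top-k (coarse) and bottom-(8-k)
    (fine) bit planes. `img` is a uint8 array; returns (coarse, fine)
    with coarse + fine == img elementwise.
    """
    assert 0 < k < 8
    high_mask = (0xFF << (8 - k)) & 0xFF   # e.g. k=4 -> 0b11110000
    low_mask = 0xFF ^ high_mask            # e.g. k=4 -> 0b00001111
    coarse = img & np.uint8(high_mask)     # keeps the significant planes
    fine = img & np.uint8(low_mask)        # keeps the refinement planes
    return coarse, fine
```

For example, with `k=4` a pixel value of 200 (0b11001000) splits into a coarse value of 192 and a fine value of 8.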