Search Results for author: B. S. Vivek

Found 2 papers, 2 papers with code

Regularizers for Single-step Adversarial Training

1 code implementation • 3 Feb 2020 • B. S. Vivek, R. Venkatesh Babu

The proposed regularizers mitigate the effect of gradient masking by harnessing properties that differentiate a robust model from a pseudo-robust model.
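
The paper concerns single-step (FGSM-based) adversarial training. Below is a minimal sketch of such a training step with a placeholder regularization term; the `illustrative_regularizer` here is a hypothetical stand-in (a clean-vs-adversarial logit-matching penalty), not the regularizers actually proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Craft a single-step (FGSM) adversarial example."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + epsilon * grad.sign()).clamp(0, 1).detach()

def illustrative_regularizer(model, x, x_adv):
    # Hypothetical stand-in: penalize the gap between clean and adversarial
    # logits. The actual regularizers proposed in the paper are different.
    return F.mse_loss(model(x_adv), model(x))

def train_step(model, optimizer, x, y, reg_weight=1.0):
    # Single-step adversarial training: train on FGSM examples plus a
    # regularization term intended to discourage gradient masking.
    x_adv = fgsm_perturb(model, x, y)
    loss = F.cross_entropy(model(x_adv), y) \
        + reg_weight * illustrative_regularizer(model, x, x_adv)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Minimal usage on random data, just to show the shape of one training step.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(train_step(model, optimizer, x, y))
```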

FDA: Feature Disruptive Attack

1 code implementation • ICCV 2019 • Aditya Ganeshan, B. S. Vivek, R. Venkatesh Babu

Though Deep Neural Networks (DNNs) show excellent performance across various computer vision tasks, several works have shown their vulnerability to adversarial samples, i.e., image samples with imperceptible noise engineered to manipulate the network's prediction.

Adversarial Attack • Image Classification
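
FDA attacks intermediate feature representations rather than the output label. The sketch below is a simplified, hedged illustration of a feature-space attack in that spirit: it iteratively perturbs the input so that activations at a chosen layer drift away from those of the clean image. It is not the exact FDA objective from the paper; `feature_space_attack` and its parameters are illustrative.

```python
import torch
import torch.nn.functional as F

def feature_space_attack(model, feature_layer, x,
                         epsilon=8 / 255, steps=10, step_size=2 / 255):
    """Iteratively disrupt activations captured from `feature_layer`."""
    feats = {}
    hook = feature_layer.register_forward_hook(
        lambda module, inp, out: feats.__setitem__("out", out))

    # Record the clean image's activations at the chosen layer.
    with torch.no_grad():
        model(x)
        clean_feats = feats["out"].detach()

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)
        # Ascend on the distance between adversarial and clean features.
        loss = F.mse_loss(feats["out"], clean_feats)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + step_size * grad.sign()).detach()
        # Project back into the epsilon-ball and the valid image range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)

    hook.remove()
    return x_adv
```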
