Search Results for author: Nupur Thakur

Found 4 papers, 0 papers with code

PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos

no code implementations13 Sep 2021 Nupur Thakur, Baoxin Li

Extensive research has demonstrated that deep neural networks (DNNs) are prone to adversarial attacks.

Image Classification

Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks

no code implementations20 Jul 2020 Nupur Thakur, Yuzhen Ding, Baoxin Li

Though deep neural networks (DNNs) have shown superiority over other techniques in major fields like computer vision, natural language processing, and robotics, they have recently been proven vulnerable to adversarial attacks.

AdvFoolGen: Creating Persistent Troubles for Deep Classifiers

no code implementations20 Jul 2020 Yuzhen Ding, Nupur Thakur, Baoxin Li

Research has shown that deep neural networks are vulnerable to malicious attacks, where adversarial images are created to trick a network into misclassification even though the images look unchanged to the human eye.
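As a minimal illustration of the kind of attack this abstract describes, the sketch below applies a fast-gradient-sign-style perturbation to a toy logistic-regression "classifier" in NumPy. This is not the AdvFoolGen method from the paper; the weights, input, and epsilon are illustrative values chosen so that a small perturbation flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One fast-gradient-sign step on a logistic-regression classifier.

    Moves x in the direction that increases the loss for the true label y:
    x_adv = x + eps * sign(dL/dx), where L is the logistic loss.
    """
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # gradient of the logistic loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy classifier and a correctly classified input (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                 # logit = 1.5, so class 1

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.9)

print(sigmoid(w @ x + b) > 0.5)          # original prediction: class 1 (True)
print(sigmoid(w @ x_adv + b) > 0.5)      # adversarial prediction flips (False)
```

The same idea scales to deep networks: compute the loss gradient with respect to the input pixels and nudge each pixel by a small, visually imperceptible amount in the sign of that gradient.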
