no code implementations • 27 Oct 2022 • Utkarsh Soni, Nupur Thakur, Sarath Sreedharan, Lin Guan, Mudit Verma, Matthew Marquez, Subbarao Kambhampati
If the relevant concept is not in the shared vocabulary, then it is learned.
no code implementations • 13 Sep 2021 • Nupur Thakur, Baoxin Li
Extensive research has demonstrated that deep neural networks (DNNs) are prone to adversarial attacks.
no code implementations • 20 Jul 2020 • Nupur Thakur, Yuzhen Ding, Baoxin Li
Though deep neural networks (DNNs) have shown superiority over other techniques in major fields such as computer vision, natural language processing, and robotics, they have recently been shown to be vulnerable to adversarial attacks.
no code implementations • 20 Jul 2020 • Yuzhen Ding, Nupur Thakur, Baoxin Li
Research has shown that deep neural networks are vulnerable to malicious attacks, where adversarial images are crafted to trick a network into misclassification even though humans would assign those images entirely different labels.
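The adversarial attacks these abstracts refer to can be illustrated with a minimal sketch. This is not any of the papers' methods; it applies the well-known fast gradient sign method (FGSM) to a toy logistic "network" f(x) = sigmoid(w·x + b), with purely illustrative weights and a deliberately large step size, to show how a small input perturbation aligned with the loss gradient can flip a model's prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Probability that x belongs to class 1 under the toy logistic model
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w, b, x, y, eps):
    """Fast gradient sign method: perturb each input coordinate by eps
    in the direction that increases the cross-entropy loss."""
    p = predict(w, b, x)
    # For logistic regression, d(loss)/d(x_i) = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]

# Illustrative weights and input (assumptions, not learned from data)
w, b = [3.0, -3.0], 0.0
x, y = [0.3, -0.3], 1          # clean input, true label 1

x_adv = fgsm(w, b, x, y, eps=0.4)

print(predict(w, b, x) > 0.5)      # clean input is classified as class 1
print(predict(w, b, x_adv) > 0.5)  # perturbed input is misclassified as class 0
```

Real attacks use much smaller, often imperceptible, perturbations against deep networks; the mechanism, following the input-gradient of the loss, is the same.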