no code implementations • 19 Sep 2019 • Jihyeun Yoon, Kyungyul Kim, Jongseong Jang
Deep Neural Network-based classifiers are known to be vulnerable to input perturbations constructed by an adversarial attack to force misclassification.
Tasks: Adversarial Attack, Explainable Artificial Intelligence (XAI)
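The adversarial perturbations described above can be illustrated with a minimal fast-gradient-sign-style sketch. This is a generic illustration of the attack idea, not the method from the paper: the toy logistic classifier, its weights, and the input below are all invented for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear (logistic) classifier; weights and input are illustrative only.
w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([0.1, 0.0]), 1.0      # x is correctly classified as positive

# Gradient of the logistic loss w.r.t. the input: (sigma(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# Fast-gradient-sign step: nudge the input in the direction that increases the loss.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

logit, logit_adv = w @ x + b, w @ x_adv + b
print(logit, logit_adv)   # the logit's sign flips, so the predicted class changes
```

Even though the perturbation is small (each coordinate moves by at most `eps`), the classifier's decision flips, which is the vulnerability the abstract refers to.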
no code implementations • 30 Aug 2019 • Byeongmoon Ji, Hyemin Jung, Jihyeun Yoon, Kyungyul Kim, Younghak Shin
The prediction reliability of neural networks is important in many applications.