Utilizing a null class to restrict decision spaces and defend against neural network adversarial attacks

24 Feb 2020 · Matthew J. Roos

Despite recent progress, deep neural networks generally continue to be vulnerable to so-called adversarial examples--input images with small perturbations that can result in changes in the output classifications, despite no such change in the semantic meaning to human viewers. This is true even for seemingly simple challenges such as the MNIST digit classification task...
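
As context for the attack model described above, the sketch below shows how one classic perturbation of this kind, the fast gradient sign method (FGSM) of Goodfellow et al., can be constructed in PyTorch. This is a generic illustration of an adversarial example, not the specific attacks or the null-class defense studied in this paper; `model`, `x`, `y`, and `eps` are placeholder names.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.1):
    """Return an FGSM adversarial example for input batch x with labels y.

    The perturbation is a single step of size eps in the direction of the
    sign of the loss gradient with respect to the input pixels.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A small eps (e.g., 0.1 for MNIST pixels in [0, 1]) is often enough to
    # flip the predicted class while leaving the digit legible to a human.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Here `model` can be any differentiable classifier mapping images to logits; the final clamp keeps the perturbed image within the valid pixel range.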
