no code implementations • ICML Workshop AML 2021 • Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon
Adversarial examples for neural networks are known to be transferable: examples optimized to be misclassified by a “source” network are often misclassified by other “destination” networks.
no code implementations • NeurIPS 2021 • Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon
Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with different architectures.
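Transferability can be measured directly: craft examples against the source classifier and check how often a destination classifier mislabels them. The sketch below uses a single-step FGSM attack as a stand-in for whatever optimization the paper actually uses; `source_model`, `dest_model`, and the epsilon budget are illustrative assumptions, and inputs are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(source_model, x, y, eps=0.03):
    # Single gradient-sign step against the source model (FGSM);
    # a stand-in for the stronger iterative attacks typically used.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(source_model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()  # inputs assumed in [0, 1]

def transfer_rate(source_model, dest_model, x, y, eps=0.03):
    # Fraction of source-crafted examples that the destination also mislabels
    # (includes any examples the destination misclassified to begin with).
    x_adv = fgsm(source_model, x, y, eps)
    with torch.no_grad():
        preds = dest_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```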
no code implementations • 9 Feb 2021 • Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon
The results we present in this paper provide new insight into the nature of the non-robust features responsible for adversarial vulnerability of neural network classifiers.
no code implementations • 28 Sep 2020 • Jacob M. Springer, Bryn Marie Reinstadler, Una-May O'Reilly
Neural networks are well-known to be vulnerable to imperceptible perturbations in the input, called adversarial examples, that result in misclassification.
no code implementations • 3 Sep 2020 • Jacob M. Springer, Garrett T. Kenyon
To investigate how weight initializations affect performance, we examine small convolutional networks that are trained to predict n steps of the two-dimensional cellular automaton Conway's Game of Life, the update rules of which can be implemented efficiently in a (2n+1)-layer convolutional network.
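For reference, one step of the Life update rule is easy to express as a single 3x3 convolution that counts live neighbours, followed by the birth/survival thresholds. The sketch below is an illustrative PyTorch version of that rule (the function name and the 0/1 board encoding are assumptions), not the paper's (2n+1)-layer network or its training setup.

```python
import torch
import torch.nn.functional as F

def life_step(board):
    # board: (1, 1, H, W) float tensor of 0s and 1s.
    kernel = torch.ones(1, 1, 3, 3)
    kernel[0, 0, 1, 1] = 0.0  # count the 8 neighbours, excluding the cell itself
    neighbours = F.conv2d(board, kernel, padding=1)
    birth = (board == 0) & (neighbours == 3)
    survive = (board == 1) & ((neighbours == 2) | (neighbours == 3))
    return (birth | survive).float()

# Usage: advance a random 32x32 board by four steps.
board = (torch.rand(1, 1, 32, 32) > 0.5).float()
for _ in range(4):
    board = life_step(board)
```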
no code implementations • 17 Nov 2018 • Jacob M. Springer, Charles S. Strauss, Austin M. Thresher, Edward Kim, Garrett T. Kenyon
Although deep learning has shown great success in recent years, researchers have discovered a critical flaw: small, imperceptible changes to a system's input can drastically change its output classification.