Search Results for author: Jacob M. Springer

Found 6 papers, 0 papers with code

Uncovering Universal Features: How Adversarial Training Improves Adversarial Transferability

no code implementations • ICML Workshop AML 2021 • Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon

Adversarial examples for neural networks are known to be transferable: examples optimized to be misclassified by a “source” network are often misclassified by other “destination” networks.

A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks

no code implementations • NeurIPS 2021 • Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon

Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with different architectures.
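For illustration, here is a minimal sketch of the transfer setup described above, assuming PyTorch. The classifiers `source_model` and `dest_model`, the single targeted FGSM step, and the perturbation budget are hypothetical stand-ins for exposition, not the paper's actual attack.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(source_model, x, target_class, eps=8 / 255):
    """Craft a targeted adversarial example against the source model.

    x: batch of images in [0, 1]; target_class: attacker-chosen labels.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(source_model(x_adv), target_class)
    loss.backward()
    # Step *against* the gradient to push predictions toward the target class.
    x_adv = (x_adv - eps * x_adv.grad.sign()).clamp(0, 1)
    return x_adv.detach()

# Transfer check: does a different "destination" classifier also predict
# the attacker's target class on the same perturbed inputs?
# x_adv = targeted_fgsm(source_model, x, target)
# transfer_rate = (dest_model(x_adv).argmax(dim=1) == target).float().mean()
```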

Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers

no code implementations • 9 Feb 2021 • Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon

The results we present in this paper provide new insight into the nature of the non-robust features responsible for adversarial vulnerability of neural network classifiers.

STRATA: Simple, Gradient-Free Attacks for Models of Code

no code implementations • 28 Sep 2020 • Jacob M. Springer, Bryn Marie Reinstadler, Una-May O'Reilly

Neural networks are well-known to be vulnerable to imperceptible perturbations in the input, called adversarial examples, that result in misclassification.
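As a generic illustration of what "gradient-free" means here, the sketch below runs a random-search attack that uses only forward queries to the model; it assumes PyTorch and a hypothetical classifier `model`, and is not STRATA's actual algorithm.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()  # no gradients anywhere: the attack only queries the model
def random_search_attack(model, x, label, eps=8 / 255, steps=100):
    """Keep the random bounded perturbation that most increases the loss."""
    best, best_loss = x.clone(), F.cross_entropy(model(x), label)
    for _ in range(steps):
        cand = (x + eps * torch.empty_like(x).uniform_(-1, 1)).clamp(0, 1)
        loss = F.cross_entropy(model(cand), label)
        if loss > best_loss:  # accept purely from forward-pass feedback
            best, best_loss = cand, loss
    return best
```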

It's Hard for Neural Networks To Learn the Game of Life

no code implementations • 3 Sep 2020 • Jacob M. Springer, Garrett T. Kenyon

To investigate how weight initializations affect performance, we examine small convolutional networks that are trained to predict n steps of the two-dimensional cellular automaton Conway's Game of Life, the update rules of which can be implemented efficiently in a 2n+1 layer convolutional network.
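To see why the update rule fits naturally into a convolutional network, here is a minimal hand-coded sketch of a single Game of Life step, assuming PyTorch; it implements the rule directly rather than learning it, so it only illustrates the structure behind the 2n+1 layer claim.

```python
import torch
import torch.nn.functional as F

def life_step(board):
    """One Game of Life update. board: (1, 1, H, W) float tensor of 0s/1s."""
    kernel = torch.ones(1, 1, 3, 3)
    kernel[0, 0, 1, 1] = 0.0  # count the 8 neighbors, excluding the cell itself
    neighbors = F.conv2d(board, kernel, padding=1)
    # Standard rules: a live cell survives with 2 or 3 neighbors;
    # a dead cell becomes alive with exactly 3 neighbors.
    alive = (neighbors == 3) | ((board == 1) & (neighbors == 2))
    return alive.float()

# Example: advance a glider by one step.
# glider = torch.zeros(1, 1, 8, 8)
# glider[0, 0, [0, 1, 2, 2, 2], [1, 2, 0, 1, 2]] = 1.0
# next_board = life_step(glider)
```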

Classifiers Based on Deep Sparse Coding Architectures are Robust to Deep Learning Transferable Examples

no code implementations • 17 Nov 2018 • Jacob M. Springer, Charles S. Strauss, Austin M. Thresher, Edward Kim, Garrett T. Kenyon

Although deep learning has shown great success in recent years, researchers have discovered a critical flaw where small, imperceptible changes in the input to the system can drastically change the output classification.

Task: General Classification
