Search Results for author: Eric Wong

Found 13 papers, 11 papers with code

Adversarial Robustness Against the Union of Multiple Threat Models

1 code implementation ICML 2020 Pratyush Maini, Eric Wong, Zico Kolter

Owing to the susceptibility of deep learning systems to adversarial attacks, there has been a great deal of work in developing (both empirically and certifiably) robust classifiers.

Adversarial Robustness

Missingness Bias in Model Debugging

1 code implementation ICLR 2022 Saachi Jain, Hadi Salman, Eric Wong, Pengchuan Zhang, Vibhav Vineet, Sai Vemprala, Aleksander Madry

Missingness, or the absence of features from an input, is a concept fundamental to many model debugging tools.
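
As a rough illustration of the two notions of missingness at play here (an assumed setup, not the authors' code): overwriting a region with a constant keeps the input shape but shows the model an artificial patch it never saw in training, which is where the bias comes from, whereas a patch-based model can simply drop the corresponding token. The 16x16 patch size and toy image below are illustrative.

```python
# Sketch (assumed setup, not the paper's code): two ways to "remove" a 16x16
# region from an image. Overwriting pixels preserves the tensor shape but
# inserts a constant block the model never saw in training; dropping the
# corresponding token removes the region from a patch-based model entirely.
import torch

image = torch.rand(3, 224, 224)            # toy image in [0, 1]
patch = 16                                 # assumed ViT-style patch size
row, col = 5, 7                            # patch (in patch coordinates) to ablate

# Option 1: pixel overwrite -- the "missing" region is replaced by black,
# which is itself a signal the model can react to (missingness bias).
masked = image.clone()
masked[:, row*patch:(row+1)*patch, col*patch:(col+1)*patch] = 0.0

# Option 2: token dropping -- tokenize into 14x14 = 196 patches and omit the
# ablated token, so nothing stands in for the missing region.
tokens = image.unfold(1, patch, patch).unfold(2, patch, patch)   # (3, 14, 14, 16, 16)
tokens = tokens.permute(1, 2, 0, 3, 4).reshape(14 * 14, -1)      # (196, 768)
keep = torch.ones(14 * 14, dtype=torch.bool)
keep[row * 14 + col] = False
kept_tokens = tokens[keep]                                       # (195, 768)

print(masked.shape, kept_tokens.shape)
```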

Certified Patch Robustness via Smoothed Vision Transformers

1 code implementation 11 Oct 2021 Hadi Salman, Saachi Jain, Eric Wong, Aleksander Mądry

Certified patch defenses can guarantee robustness of an image classifier to arbitrary changes within a bounded contiguous region.
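
A generic sketch of how column-ablation certificates of this kind work (derandomized smoothing in general, not the paper's smoothed-ViT implementation): classify every column ablation of the image, take a majority vote, and certify when the vote margin exceeds twice the number of ablations any patch of the given size could intersect. The band width, patch size, and `classify` stub below are placeholder assumptions.

```python
# Generic derandomized-smoothing sketch for patch certificates (illustrative
# band width, patch size, and classifier stub; not the paper's smoothed ViT).
import numpy as np

def classify(ablated_image):
    """Stand-in base classifier: returns a class id for one column ablation."""
    return int(ablated_image.sum() * 100) % 10            # placeholder logic

def certify_column_ablation(image, band=19, patch=32, num_classes=10):
    w = image.shape[1]
    votes = np.zeros(num_classes, dtype=int)
    for start in range(w):                                 # one ablation per column
        ablated = np.zeros_like(image)
        cols = [(start + i) % w for i in range(band)]
        ablated[:, cols] = image[:, cols]                  # keep only a narrow band
        votes[classify(ablated)] += 1

    order = np.argsort(votes)
    top, runner_up = order[-1], order[-2]
    # An adversarial patch of width `patch` intersects at most patch + band - 1
    # ablations, so it can move at most that many votes away from `top`.
    max_affected = patch + band - 1
    certified = votes[top] - votes[runner_up] > 2 * max_affected
    return int(top), bool(certified)

image = np.random.rand(224, 224)
print(certify_column_ablation(image))
```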

DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting

no code implementations 16 Jun 2021 Shaoru Chen, Eric Wong, J. Zico Kolter, Mahyar Fazlyab

Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative.

Image Classification • Reinforcement Learning
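
For context on what such a convex relaxation looks like in its simplest form, the sketch below runs interval bound propagation through a tiny ReLU network to bound its outputs under an ℓ∞ input perturbation. This is a deliberately crude relaxation for illustration, not the operator-splitting method the paper proposes; the weights and ε are made up.

```python
# Interval bound propagation (IBP) sketch: propagate elementwise lower/upper
# bounds through affine + ReLU layers to bound worst-case outputs. A simple
# convex relaxation for illustration, not DeepSplit's operator splitting.
import numpy as np

def affine_bounds(lo, hi, W, b):
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)    # toy 4-8-3 network
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

x, eps = rng.normal(size=4), 0.1                        # input and l_inf radius
lo, hi = x - eps, x + eps
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)
print("output lower bounds:", lo)
print("output upper bounds:", hi)
```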

Leveraging Sparse Linear Layers for Debuggable Deep Networks

2 code implementations 11 May 2021 Eric Wong, Shibani Santurkar, Aleksander Mądry

We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks.
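
A minimal sketch of the general recipe described above: freeze a deep network, treat its penultimate activations as features, and fit a heavily ℓ1-regularized linear classifier on top so each class depends on only a handful of features. The ResNet-18 backbone, scikit-learn solver, regularization strength, and toy data are stand-ins, not the authors' elastic-net pipeline.

```python
# Sketch (assumed setup, not the authors' pipeline): fit a sparse linear head
# on frozen deep features so each class depends on only a few features.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
backbone = models.resnet18(weights=None)    # in practice, a pretrained/robust model
backbone.fc = torch.nn.Identity()           # expose the 512-d penultimate features
backbone.eval()

# Toy stand-in data: a few random "images" with binary labels.
images = torch.rand(16, 3, 64, 64)
labels = torch.randint(0, 2, (16,)).numpy()

with torch.no_grad():
    feats = backbone(images).numpy()        # (16, 512) frozen deep features

# A strong l1 penalty drives most of the 512 weights to exactly zero, so each
# decision can be read off from a short list of surviving features.
sparse_head = LogisticRegression(penalty="l1", solver="saga", C=0.05, max_iter=2000)
sparse_head.fit(feats, labels)
print("nonzero weights:", int((sparse_head.coef_ != 0).sum()), "of", sparse_head.coef_.size)
```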

Learning perturbation sets for robust machine learning

1 code implementation ICLR 2021 Eric Wong, J. Zico Kolter

In this paper, we aim to bridge this gap by learning perturbation sets from data, in order to characterize real-world effects for robust training and evaluation.
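
A toy sketch of the idea: replace a hand-coded ℓp ball with a learned generator that maps a small latent ball around each input to plausible perturbations, then run a projected-gradient search over that latent ball. The untrained linear "generator", latent dimension, and step sizes below are placeholders for the paper's trained conditional-VAE perturbation sets.

```python
# Sketch: adversarial search over a *learned* perturbation set. The generator
# here is an untrained toy module standing in for a trained conditional VAE.
import torch

class ToyPerturbationGenerator(torch.nn.Module):
    """Maps (input x, latent z) to a perturbed input; a trained generator would
    produce realistic perturbations (lighting changes, corruptions, etc.)."""
    def __init__(self, dim=10, latent=3):
        super().__init__()
        self.map = torch.nn.Linear(latent, dim)

    def forward(self, x, z):
        return x + 0.1 * self.map(z)

def latent_pgd(model, gen, x, y, eps=1.0, steps=10, lr=0.3):
    """Maximize the loss over z in an l2 ball of radius eps (the learned set)."""
    z = torch.zeros(x.shape[0], 3, requires_grad=True)    # latent dim matches gen
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(gen(x, z)), y)
        grad, = torch.autograd.grad(loss, z)
        with torch.no_grad():
            z += lr * grad / (grad.norm(dim=1, keepdim=True) + 1e-12)
            norms = z.norm(dim=1, keepdim=True).clamp(min=eps)
            z *= eps / norms                               # project onto the l2 ball
    return gen(x, z.detach())

model = torch.nn.Linear(10, 2)                             # toy classifier
gen = ToyPerturbationGenerator()
x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
x_adv = latent_pgd(model, gen, x, y)
print((x_adv - x).norm(dim=1))
```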

Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications

no code implementations 30 Jun 2020 Eric Wong, Tim Schneider, Joerg Schmitt, Frank R. Schmidt, J. Zico Kolter

Additionally, we show how specific intervals of fuel injection quantities can be targeted to maximize robustness for certain ranges, allowing us to train a virtual sensor for fuel injection which is provably guaranteed to have at most 10.69% relative error under noise while maintaining 3% relative error on non-adversarial data within normalized fuel injection ranges of 0.6 to 1.0.

Overfitting in adversarially robust deep learning

2 code implementations ICML 2020 Leslie Rice, Eric Wong, J. Zico Kolter

Based upon this observed effect, we show that the performance gains of virtually all recent algorithmic improvements upon adversarial training can be matched by simply using early stopping.

Data Augmentation
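
The observation above amounts to a simple selection rule: during adversarial training, track robust validation accuracy and keep the best checkpoint rather than the last. A schematic sketch with toy data, a small PGD attack, and made-up budgets (not the paper's experimental setup):

```python
# Sketch of robust early stopping: train adversarially, but select the
# checkpoint with the best *robust* validation accuracy instead of the final one.
import copy
import torch
from torch.utils.data import DataLoader, TensorDataset

def pgd(model, x, y, eps=0.1, alpha=0.02, steps=7):
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()

def robust_accuracy(model, loader):
    correct = total = 0
    for x, y in loader:
        pred = model(pgd(model, x, y)).argmax(1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

torch.manual_seed(0)
x, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
train = DataLoader(TensorDataset(x[:400], y[:400]), batch_size=64, shuffle=True)
val = DataLoader(TensorDataset(x[400:], y[400:]), batch_size=64)

model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
best_acc, best_state = -1.0, None
for epoch in range(10):
    for xb, yb in train:                                   # adversarial training
        loss = torch.nn.functional.cross_entropy(model(pgd(model, xb, yb)), yb)
        opt.zero_grad(); loss.backward(); opt.step()
    acc = robust_accuracy(model, val)                      # robust *validation* accuracy
    if acc > best_acc:                                     # early-stopping rule
        best_acc, best_state = acc, copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)
print("best robust validation accuracy:", best_acc)
```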

Fast is better than free: Revisiting adversarial training

10 code implementations ICLR 2020 Eric Wong, Leslie Rice, J. Zico Kolter

Furthermore we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy to PGD attacks with $\epsilon=8/255$ in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at $\epsilon=2/255$ in 12 hours, in comparison to past work based on "free" adversarial training which took 10 and 50 hours to reach the same respective thresholds.
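
The core step behind this speedup is a single FGSM attack started from a random point inside the ℓ∞ ball, instead of multi-step PGD. A stripped-down sketch of one such training step (toy model and batch; the cyclic learning rate and mixed-precision tricks mentioned in the paper are omitted):

```python
# Sketch of FGSM adversarial training with random initialization ("fast"
# adversarial training); toy model and hyperparameters for illustration only.
import torch
import torch.nn.functional as F

def fgsm_rs_step(model, opt, x, y, eps=8/255, alpha=10/255):
    # 1) random start inside the l_inf ball
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    # 2) single FGSM step from that start (image-range clamping omitted for brevity)
    loss = F.cross_entropy(model(x + delta), y)
    grad, = torch.autograd.grad(loss, delta)
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    # 3) ordinary training step on the perturbed batch
    opt.zero_grad()
    F.cross_entropy(model(x + delta), y).backward()
    opt.step()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x = torch.rand(16, 3, 32, 32)              # stand-in CIFAR-sized batch
y = torch.randint(0, 10, (16,))
fgsm_rs_step(model, opt, x, y)
```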

Adversarial Robustness Against the Union of Multiple Perturbation Models

1 code implementation 9 Sep 2019 Pratyush Maini, Eric Wong, J. Zico Kolter

Owing to the susceptibility of deep learning systems to adversarial attacks, there has been a great deal of work in developing (both empirically and certifiably) robust classifiers.

Adversarial Robustness

Wasserstein Adversarial Examples via Projected Sinkhorn Iterations

2 code implementations 21 Feb 2019 Eric Wong, Frank R. Schmidt, J. Zico Kolter

In this paper, we propose a new threat model for adversarial attacks based on the Wasserstein distance.

Adversarial Attack • Adversarial Defense +4
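
To make the threat model concrete: the Wasserstein distance measures how much pixel mass must move, and how far, to turn one image into another, so spatially small rearrangements are cheap even when their ℓp distance is large. A 1-D toy comparison using SciPy (a conceptual illustration only, not the paper's projected Sinkhorn attack):

```python
# Toy illustration of why a Wasserstein threat model differs from l_p balls:
# the transport cost grows with how far mass moves, while l_inf does not.
import numpy as np
from scipy.stats import wasserstein_distance

positions = np.arange(10)
clean = np.zeros(10); clean[4] = 1.0        # all "mass" at pixel 4
near = np.zeros(10);  near[5] = 1.0         # spike moved one pixel
far = np.zeros(10);   far[9] = 1.0          # spike moved five pixels

for name, pert in [("shift by 1", near), ("shift by 5", far)]:
    w1 = wasserstein_distance(positions, positions, u_weights=clean, v_weights=pert)
    linf = np.abs(clean - pert).max()
    print(f"{name}: W1 = {w1:.1f}, l_inf = {linf:.1f}")
# -> W1 grows with how far the mass moved (1.0 vs 5.0), while l_inf is 1.0 in
#    both cases; the Wasserstein ball encodes spatial structure that l_p
#    balls ignore.
```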

Scaling provable adversarial defenses

4 code implementations NeurIPS 2018 Eric Wong, Frank R. Schmidt, Jan Hendrik Metzen, J. Zico Kolter

Recent work has developed methods for learning deep network classifiers that are provably robust to norm-bounded adversarial perturbation; however, these methods are currently only possible for relatively small feedforward networks.

Provable defenses against adversarial examples via the convex outer adversarial polytope

8 code implementations ICML 2018 Eric Wong, J. Zico Kolter

We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data.

Adversarial Attack
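
A simplest-case sketch of training against a provable bound: for a linear classifier under an ℓ∞ perturbation of radius ε, the worst-case logits have a closed form, and minimizing the loss on those worst-case logits yields points that are certifiably robust. The paper's convex outer adversarial polytope extends this kind of bound to deep ReLU networks via a dual network; the toy data and ε below are made up.

```python
# Certified training sketch in the linear case: pad every non-true logit by
# eps * ||w_j - w_y||_1 (the most an l_inf adversary can add to that margin)
# and train on the resulting worst-case cross-entropy.
import torch
import torch.nn.functional as F

def worst_case_logits(linear, x, y, eps):
    W, b = linear.weight, linear.bias                     # (C, d), (C,)
    logits = x @ W.t() + b                                # (N, C)
    diff_norms = (W.unsqueeze(0) - W[y].unsqueeze(1)).abs().sum(-1)   # (N, C)
    true_class = F.one_hot(y, num_classes=W.shape[0]).bool()
    pad = (eps * diff_norms).masked_fill(true_class, 0.0)  # no padding for class y
    return logits + pad

torch.manual_seed(0)
x, y = torch.randn(128, 20), torch.randint(0, 3, (128,))
model = torch.nn.Linear(20, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(100):
    loss = F.cross_entropy(worst_case_logits(model, x, y, eps=0.1), y)
    opt.zero_grad(); loss.backward(); opt.step()

# A point is certified when even the worst-case logits still predict y.
certified = worst_case_logits(model, x, y, eps=0.1).argmax(1) == y
print("certified accuracy on training data:", certified.float().mean().item())
```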
