Search Results for author: Leslie Rice

Found 7 papers, 3 papers with code

Fast is better than free: Revisiting adversarial training

10 code implementations • ICLR 2020 • Eric Wong, Leslie Rice, J. Zico Kolter

Furthermore, we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy against PGD attacks with $\epsilon=8/255$ in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at $\epsilon=2/255$ in 12 hours, compared to past work based on "free" adversarial training, which took 10 and 50 hours to reach the same respective thresholds.
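
The paper's central recipe is single-step FGSM adversarial training with a randomly initialized perturbation, in place of multi-step PGD. A minimal PyTorch sketch of one such training epoch follows; the model, loader, optimizer, and the step size alpha are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def fgsm_train_epoch(model, loader, opt, epsilon=8 / 255, alpha=10 / 255):
    # Sketch of fast FGSM adversarial training: one gradient step on a
    # randomly initialized perturbation, instead of multi-step PGD.
    model.train()
    for x, y in loader:  # images assumed scaled to [0, 1]
        delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
        delta.requires_grad_(True)
        F.cross_entropy(model(x + delta), y).backward()
        # Single FGSM step, then project back into the epsilon-ball.
        delta = (delta + alpha * delta.grad.sign()).clamp(-epsilon, epsilon)
        delta = ((x + delta).clamp(0, 1) - x).detach()
        # Standard training update on the perturbed batch.
        opt.zero_grad()
        F.cross_entropy(model(x + delta), y).backward()
        opt.step()
```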

Overfitting in adversarially robust deep learning

4 code implementations • ICML 2020 • Leslie Rice, Eric Wong, J. Zico Kolter

Based upon this observed effect, we show that the performance gains of virtually all recent algorithmic improvements upon adversarial training can be matched by simply using early stopping.

Tasks: Data Augmentation
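
A sketch of that early-stopping recipe: checkpoint on robust accuracy measured on held-out data under a fixed attack (e.g. PGD), and roll back to the best checkpoint. The two callables are hypothetical placeholders, not an API from the paper's code.

```python
import copy

def train_with_early_stopping(model, train_one_epoch, robust_val_accuracy, num_epochs):
    # train_one_epoch(model): one epoch of adversarial training (assumed).
    # robust_val_accuracy(model): robust accuracy on a held-out set (assumed).
    best_acc, best_state = 0.0, None
    for _ in range(num_epochs):
        train_one_epoch(model)
        acc = robust_val_accuracy(model)
        if acc > best_acc:  # select on robust validation accuracy, not train loss
            best_acc = acc
            best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return best_acc
```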

(Certified!!) Adversarial Robustness for Free!

1 code implementation • 21 Jun 2022 • Nicholas Carlini, Florian Tramer, Krishnamurthy Dj Dvijotham, Leslie Rice, MingJie Sun, J. Zico Kolter

In this paper we show how to achieve state-of-the-art certified adversarial robustness to 2-norm bounded perturbations by relying exclusively on off-the-shelf pretrained models.

Tasks: Adversarial Robustness, Denoising
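
The pipeline is denoised smoothing built entirely from pretrained parts: add Gaussian noise to the input, remove it with an off-the-shelf denoiser (a pretrained diffusion model in the paper), classify the result, and take a majority vote over noise draws. The sketch below covers prediction only; the randomized-smoothing certification step is omitted, and sigma and the sample count are illustrative.

```python
import torch

@torch.no_grad()
def smoothed_predict(denoiser, classifier, x, num_classes, sigma=0.25, n=100):
    # x: one image tensor of shape (C, H, W); denoiser and classifier are
    # off-the-shelf pretrained models (assumed callables).
    votes = torch.zeros(num_classes)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)   # randomized-smoothing noise
        denoised = denoiser(noisy.unsqueeze(0))   # strip the noise back off
        votes[classifier(denoised).argmax(dim=1)] += 1
    return votes.argmax().item()                  # majority-vote prediction
```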

Certified robustness against physically-realizable patch attack via randomized cropping

no code implementations • 1 Jan 2021 • Wan-Yi Lin, Fatemeh Sheikholeslami, Jinghao Shi, Leslie Rice, J. Zico Kolter

Our method improves upon the current state of the art in defending against patch attacks on CIFAR10 and ImageNet, both in terms of certified accuracy and inference time.

Tasks: Classification, Crop Classification, +2
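
The underlying intuition: a random crop that misses the adversarial patch is unaffected by it, so aggregating predictions over many crops limits how much any single patch can sway the output. A rough prediction-only sketch follows; the crop size, sample count, and plain majority vote are assumptions, not the paper's certified procedure.

```python
import torch

@torch.no_grad()
def cropped_vote_predict(classifier, x, num_classes, crop=24, n=50):
    # x: one image tensor of shape (C, H, W); classifier is assumed to
    # accept crops of this size (e.g. via internal resizing).
    _, h, w = x.shape
    votes = torch.zeros(num_classes)
    for _ in range(n):
        top = torch.randint(0, h - crop + 1, (1,)).item()
        left = torch.randint(0, w - crop + 1, (1,)).item()
        patch = x[:, top:top + crop, left:left + crop]
        votes[classifier(patch.unsqueeze(0)).argmax(dim=1)] += 1
    return votes.argmax().item()
```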

Robustness between the worst and average case

no code implementations • NeurIPS 2021 • Leslie Rice, Anna Bair, Huan Zhang, J. Zico Kolter

Several recent works in machine learning have focused on evaluating the test-time robustness of a classifier: how well the classifier performs not just on the target domain it was trained on, but also on perturbed examples.

Tasks: Adversarial Robustness
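
One natural way to span the spectrum the title names is a power mean of the loss over sampled perturbations: q = 1 recovers the average case, and q → ∞ approaches the worst case over the samples. The functional, Gaussian perturbation model, and parameters below are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch

def q_robust_loss(model, loss_fn, x, y, sigma=0.1, n=64, q=4.0):
    # Power-mean ("between worst and average case") loss over n random
    # perturbations of x; sigma, n, and q are illustrative choices.
    losses = torch.stack([
        loss_fn(model(x + sigma * torch.randn_like(x)), y) for _ in range(n)
    ])
    return losses.pow(q).mean().pow(1.0 / q)
```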

Empirical robustification of pre-trained classifiers

no code implementations • ICML Workshop AML 2021 • Mohammad Sadegh Norouzzadeh, Wan-Yi Lin, Leonid Boytsov, Leslie Rice, Huan Zhang, Filipe Condessa, J. Zico Kolter

Most pre-trained classifiers, though they may work extremely well on the domain they were trained on, are not trained robustly and are therefore sensitive to adversarial attacks.

Tasks: Denoising, Image Reconstruction, +1
