10 code implementations • ICLR 2020 • Eric Wong, Leslie Rice, J. Zico Kolter
Furthermore, we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy against PGD attacks at $\epsilon=8/255$ in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at $\epsilon=2/255$ in 12 hours, compared to past work based on "free" adversarial training, which took 10 and 50 hours to reach the same respective thresholds.
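The speedup comes from replacing multi-step PGD with a single-step FGSM perturbation during training. A minimal NumPy sketch of the idea on a toy logistic-regression model (illustrative only, not the paper's code; function names are hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, y, epsilon):
    """Single-step FGSM: move x by epsilon in the sign of the input gradient.

    Logistic loss log(1 + exp(-y * w.x)) with labels y in {-1, +1}.
    """
    g = -y * sigmoid(-y * np.dot(w, x)) * w  # dloss/dx at the clean input
    return x + epsilon * np.sign(g)

def adv_train_step(w, x, y, epsilon=8 / 255, lr=0.5):
    """One adversarial training step: fit the FGSM-perturbed input, not the clean one."""
    x_adv = fgsm(x, w, y, epsilon)
    g_w = -y * sigmoid(-y * np.dot(w, x_adv)) * x_adv  # dloss/dw at x_adv
    return w - lr * g_w
```

The key point of the paper is that this single gradient step, combined with standard fast-training tricks (cyclic learning rates, mixed precision), can match far more expensive multi-step training.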
4 code implementations • ICML 2020 • Leslie Rice, Eric Wong, J. Zico Kolter
Based upon this observed effect, we show that the performance gains of virtually all recent algorithmic improvements upon adversarial training can be matched by simply using early stopping.
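Operationally, the early-stopping recipe amounts to checkpointing on held-out robust accuracy and keeping the best checkpoint rather than the final one. A hedged sketch (function and variable names are hypothetical, not the paper's code):

```python
def train_with_early_stopping(train_step, eval_robust_acc, epochs, state):
    """Track validation robust accuracy and keep the best checkpoint.

    train_step(state, epoch) -> new training state (e.g. model weights)
    eval_robust_acc(state)   -> robust accuracy on a held-out set
    """
    best_acc, best_state = -1.0, None
    for epoch in range(epochs):
        state = train_step(state, epoch)
        acc = eval_robust_acc(state)
        if acc > best_acc:  # robust accuracy peaks, then degrades (robust overfitting)
            best_acc, best_state = acc, state
    return best_state, best_acc
```

Because robust test accuracy rises and then degrades during adversarial training (the "robust overfitting" effect the paper documents), returning the best intermediate checkpoint rather than the final one recovers most of the reported gains.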
1 code implementation • 21 Jun 2022 • Nicholas Carlini, Florian Tramer, Krishnamurthy Dj Dvijotham, Leslie Rice, MingJie Sun, J. Zico Kolter
In this paper we show how to achieve state-of-the-art certified adversarial robustness to 2-norm bounded perturbations by relying exclusively on off-the-shelf pretrained models.
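The certificates in this line of work build on a randomized-smoothing construction: classify many Gaussian-noised copies of the input with a (pretrained) base classifier and take a majority vote. A toy sketch of the smoothed prediction only, with no certification radius (simplified; names are hypothetical):

```python
import numpy as np

def smoothed_predict(base_classify, x, sigma=0.5, n=1000, seed=0):
    """Majority vote of base_classify over Gaussian perturbations of x.

    base_classify maps an input array to an integer class label.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n):
        label = base_classify(x + sigma * rng.standard_normal(x.shape))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)  # most-voted class
```

In the paper's setting, the base classifier is assembled entirely from off-the-shelf pretrained models (a denoiser composed with a standard classifier), and the 2-norm certificate follows from the smoothing analysis rather than from this prediction step alone.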
no code implementations • 1 Jan 2021 • Wan-Yi Lin, Fatemeh Sheikholeslami, Jinghao Shi, Leslie Rice, J. Zico Kolter
Our method improves upon the current state of the art in defending against patch attacks on CIFAR10 and ImageNet, both in terms of certified accuracy and inference time.
no code implementations • NeurIPS 2021 • Leslie Rice, Anna Bair, Huan Zhang, J. Zico Kolter
Several recent works in machine learning have focused on evaluating the test-time robustness of a classifier: how well the classifier performs not just on the domain it was trained on, but also on perturbed examples.
no code implementations • ICML Workshop AML 2021 • Mohammad Sadegh Norouzzadeh, Wan-Yi Lin, Leonid Boytsov, Leslie Rice, Huan Zhang, Filipe Condessa, J. Zico Kolter
Most pre-trained classifiers, though they may work extremely well on the domain they were trained upon, are not trained in a robust fashion, and therefore are sensitive to adversarial attacks.
no code implementations • ICML Workshop AML 2021 • Wan-Yi Lin, Fatemeh Sheikholeslami, Jinghao Shi, Leslie Rice, J. Zico Kolter
This paper proposes a certifiable defense against adversarial patch attacks on image classification.