1 code implementation • 19 Oct 2023 • Chong Xiang, Tong Wu, Sihui Dai, Jonathan Petit, Suman Jana, Prateek Mittal
State-of-the-art defenses against adversarial patch attacks can now achieve strong certifiable robustness with a marginal drop in model utility.
no code implementations • 21 Feb 2023 • Sihui Dai, Saeed Mahloujifar, Chong Xiang, Vikash Sehwag, Pin-Yu Chen, Prateek Mittal
Using our framework, we present MultiRobustBench, the first leaderboard for benchmarking multi-attack evaluation, which captures performance across attack types and attack strengths.
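As a rough illustration of how such a leaderboard can aggregate scores, here is a minimal sketch in Python; the numbers, attack names, and the average/worst-case scoring rules below are hypothetical stand-ins, not MultiRobustBench's actual scoring code.

```python
# Hypothetical robust accuracies of one model under several attack types,
# each evaluated at several strengths.
robust_acc = {
    "Linf": {4 / 255: 0.52, 8 / 255: 0.41},
    "L2": {0.5: 0.58, 1.0: 0.44},
    "patch": {0.02: 0.49},  # patch covering 2% of pixels
}

# Per-attack-type score: average over the evaluated strengths.
per_attack = {atk: sum(accs.values()) / len(accs) for atk, accs in robust_acc.items()}

# Two common aggregate views: average-case and worst-case across attack types.
average_case = sum(per_attack.values()) / len(per_attack)
worst_case = min(per_attack.values())

print(per_attack, average_case, worst_case)
```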
1 code implementation • 3 Feb 2022 • Chong Xiang, Alexander Valtchanov, Saeed Mahloujifar, Prateek Mittal
An attacker can use a single physically realizable adversarial patch to make an object detector miss the detection of victim objects, undermining the functionality of object detection applications.
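To make the threat model concrete, here is a minimal sketch of how a localized patch is applied to a detector input; `apply_patch` and the random patch contents are illustrative assumptions, since a real attack optimizes the patch to suppress detections.

```python
import torch

def apply_patch(image: torch.Tensor, patch: torch.Tensor, top: int, left: int) -> torch.Tensor:
    """Paste a (C, h, w) adversarial patch into a (C, H, W) image at (top, left).

    A physically realizable patch is constrained to a contiguous region like
    this, rather than perturbing every pixel in the image.
    """
    attacked = image.clone()
    c, h, w = patch.shape
    attacked[:, top:top + h, left:left + w] = patch
    return attacked.clamp(0.0, 1.0)  # keep pixels in a valid range

# Example: a random 50x50 patch on a 3x416x416 detector input.
img = torch.rand(3, 416, 416)
patch = torch.rand(3, 50, 50)  # in a real attack, optimized to hide objects
adv = apply_patch(img, patch, top=100, left=150)
```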
1 code implementation • 20 Aug 2021 • Chong Xiang, Saeed Mahloujifar, Prateek Mittal
Remarkably, PatchCleanser achieves 83.9% top-1 clean accuracy and 62.1% top-1 certified robust accuracy against a 2%-pixel square patch anywhere on the image for the 1000-class ImageNet dataset.
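These numbers come from PatchCleanser's double-masking inference. Below is a simplified sketch of that idea, assuming a mask set in which at least one mask fully covers any possible patch location; the toy model and two-mask set are illustrative, and the paper's mask-generation and certification procedures are omitted.

```python
import torch

def double_masking(model, x, masks):
    """Simplified sketch of PatchCleanser-style double-masking inference."""
    # Round 1: classify every one-masked image.
    preds = [model(x * m).argmax(dim=-1).item() for m in masks]
    majority = max(set(preds), key=preds.count)
    if all(p == majority for p in preds):
        return majority  # unanimous agreement across all one-mask predictions

    # Round 2: re-check each disagreeing mask with a second mask on top.
    for m1, p1 in zip(masks, preds):
        if p1 == majority:
            continue
        second = [model(x * m1 * m2).argmax(dim=-1).item() for m2 in masks]
        if all(p == p1 for p in second):
            return p1  # the disagreer survives double-masking: trust it
    return majority

# Toy usage: a stand-in classifier and a two-mask set (real mask sets are
# computed from the image size and the assumed patch size).
toy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
m1 = torch.ones(1, 3, 32, 32); m1[..., :, :16] = 0  # mask left half
m2 = torch.ones(1, 3, 32, 32); m2[..., :, 16:] = 0  # mask right half
print(double_masking(toy, x, [m1, m2]))
```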
1 code implementation • 26 Apr 2021 • Chong Xiang, Prateek Mittal
Recent provably robust defenses generally follow the PatchGuard framework by using CNNs with small receptive fields and secure feature aggregation for robust model predictions.
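As a rough sketch of what secure feature aggregation can look like, the snippet below clips per-location class evidence before summing it, in the spirit of clipping-based aggregation over small-receptive-field features; PatchGuard's actual robust-masking aggregation is more involved, so treat this as illustrative only.

```python
import torch

def clipped_aggregation(local_logits: torch.Tensor, clip_value: float = 1.0) -> torch.Tensor:
    """Clip and sum per-location class evidence.

    `local_logits` has shape (num_locations, num_classes): class evidence from
    a small-receptive-field CNN (e.g., a BagNet), so a patch can only corrupt
    the few locations whose receptive fields overlap it. Clipping bounds how
    much any single corrupted location can contribute to the final prediction.
    """
    clipped = local_logits.clamp(max=clip_value)  # bound per-location evidence
    return clipped.sum(dim=0)  # aggregate into global class scores

local = torch.randn(196, 10)  # e.g., 14x14 feature locations, 10 classes
print(clipped_aggregation(local).argmax().item())
```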
2 code implementations • ICLR 2022 • Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, Mung Chiang, Prateek Mittal
We circumvent this challenge by using additional data from proxy distributions learned by advanced generative models.
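A minimal sketch of the training setup this implies: mix real data with samples drawn from a proxy distribution before (adversarial) training proceeds as usual. The random tensors below stand in for generator output.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Real data plus synthetic samples; in the paper the extra data comes from an
# advanced generative model trained on the real distribution.
real = TensorDataset(torch.rand(1000, 3, 32, 32), torch.randint(0, 10, (1000,)))
synthetic = TensorDataset(torch.rand(5000, 3, 32, 32), torch.randint(0, 10, (5000,)))

# Train over the combined loader.
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=128, shuffle=True)
```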
1 code implementation • 5 Feb 2021 • Chong Xiang, Prateek Mittal
In this paper, we propose DetectorGuard as the first general framework for building provably robust object detectors against localized patch hiding attacks.
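At a high level, DetectorGuard pairs a base detector with a robustly predicted objectness map and raises an alert when some objectness is left unexplained by the detector's boxes. The sketch below illustrates only that matching logic, with hypothetical helper names and toy boxes; the objectness predictor and certification analysis are omitted.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1)

def unexplained_objectness(objectness_peaks: List[Box], detections: List[Box]) -> List[Box]:
    """Return robustly predicted object regions not covered by any detection."""
    def overlaps(a: Box, b: Box) -> bool:
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]
    return [p for p in objectness_peaks if not any(overlaps(p, d) for d in detections)]

peaks = [(10, 10, 60, 60), (200, 80, 260, 150)]
boxes = [(12, 8, 58, 62)]  # base detector missed the second object
if unexplained_objectness(peaks, boxes):
    print("ALERT: possible patch hiding attack")
```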
2 code implementations • 17 May 2020 • Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal
In this paper, we propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches.
no code implementations • 6 Dec 2018 • Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaafar, Haojin Zhu
We conjecture that the key to defending against model inversion and GAN-based attacks is not differential privacy itself but the perturbation of the training data.
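A minimal sketch of that training-data-perturbation idea, assuming simple additive Gaussian noise; the paper's setting is differentially private generative models, so this snippet is only illustrative.

```python
import torch

def perturb_training_data(x: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Add random noise to training examples before fitting the model, so that
    memorized details exploited by model inversion / GAN-based attacks are
    blurred. Illustrative only."""
    noisy = x + sigma * torch.randn_like(x)
    return noisy.clamp(0.0, 1.0)

batch = torch.rand(64, 3, 32, 32)
noisy_batch = perturb_training_data(batch)
```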
2 code implementations • CVPR 2019 • Chong Xiang, Charles R. Qi, Bo Li
Deep neural networks are known to be vulnerable to adversarial examples, which are carefully crafted inputs designed to cause models to make incorrect predictions.
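For readers new to the concept, here is a minimal FGSM-style sketch of crafting an adversarial example by perturbing the input along the gradient sign of the loss; it is a generic illustration with a stand-in toy model (this particular paper crafts adversarial point clouds rather than images).

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: move the input in the direction that increases the loss."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

toy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = fgsm(toy, x, y)
```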