Search Results for author: Chong Xiang

Found 10 papers, 8 with code

PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses

1 code implementation • 19 Oct 2023 • Chong Xiang, Tong Wu, Sihui Dai, Jonathan Petit, Suman Jana, Prateek Mittal

State-of-the-art defenses against adversarial patch attacks can now achieve strong certifiable robustness with a marginal drop in model utility.

MultiRobustBench: Benchmarking Robustness Against Multiple Attacks

no code implementations • 21 Feb 2023 • Sihui Dai, Saeed Mahloujifar, Chong Xiang, Vikash Sehwag, Pin-Yu Chen, Prateek Mittal

Using our framework, we present the first leaderboard, MultiRobustBench, for benchmarking multi-attack evaluation, which captures performance across attack types and attack strengths.

Tasks: Benchmarking

ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking

1 code implementation • 3 Feb 2022 • Chong Xiang, Alexander Valtchanov, Saeed Mahloujifar, Prateek Mittal

An attacker can use a single physically realizable adversarial patch to make an object detector fail to detect victim objects, undermining the functionality of object-detection applications.

Tasks: Autonomous Vehicles, Object (+2 more)

PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier

1 code implementation • 20 Aug 2021 • Chong Xiang, Saeed Mahloujifar, Prateek Mittal

Remarkably, PatchCleanser achieves 83.9% top-1 clean accuracy and 62.1% top-1 certified robust accuracy against a 2%-pixel square patch anywhere on the image for the 1000-class ImageNet dataset.

Tasks: Image Classification
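For a sense of scale, here is a back-of-the-envelope calculation of what a "2%-pixel square patch" means on a 224x224 ImageNet input (the resolution is an assumption; the snippet above does not state it):

```python
import math

# Assumed input resolution for ImageNet classifiers (not stated above).
height, width = 224, 224
total_pixels = height * width                # 50,176 pixels

patch_pixels = 0.02 * total_pixels           # 2% of all pixels -> ~1,003.5
patch_side = round(math.sqrt(patch_pixels))  # side length of the square patch

print(f"~{patch_pixels:.0f} pixels, i.e. roughly a {patch_side}x{patch_side} patch")
```

So the certified guarantee covers a sticker of roughly 32x32 pixels placed anywhere on the image.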

PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches

1 code implementation • 26 Apr 2021 • Chong Xiang, Prateek Mittal

Recent provably robust defenses generally follow the PatchGuard framework by using CNNs with small receptive fields and secure feature aggregation for robust model predictions.
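To make the secure-aggregation idea concrete, here is a minimal sketch, not the authors' exact algorithm: the array shapes, the clipping bound, and the toy attack below are illustrative assumptions. Clipping each spatial location's class evidence before summing bounds how much a patch confined to a few small receptive fields can shift the final scores.

```python
import numpy as np

def clipped_aggregation(local_logits, clip=1.0):
    """Sum per-location class evidence after clipping each value to [0, clip].

    local_logits: array of shape (H, W, num_classes), one class-evidence
    vector per spatial location of a small-receptive-field CNN. Clipping
    bounds how much any single (possibly patch-corrupted) location can
    contribute to the aggregated scores.
    """
    return np.clip(local_logits, 0.0, clip).sum(axis=(0, 1))

# Toy demonstration: a patch covering a 2x2 window of locations tries to
# inject arbitrarily large evidence for class 3.
rng = np.random.default_rng(0)
features = rng.uniform(0.0, 0.5, size=(6, 6, 10))
features[:2, :2, 3] = 1e6                  # unbounded attack on class 3

print(features.sum(axis=(0, 1)).argmax())  # naive sum: the attack always wins
print(clipped_aggregation(features))       # attack contribution capped at 4 * clip
```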

DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks

1 code implementation • 5 Feb 2021 • Chong Xiang, Prateek Mittal

In this paper, we propose DetectorGuard as the first general framework for building provably robust object detectors against localized patch hiding attacks.

Tasks: Image Classification, Object (+2 more)

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking

2 code implementations • 17 May 2020 • Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal

In this paper, we propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches.

Differentially Private Data Generative Models

no code implementations • 6 Dec 2018 • Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaarfar, Haojin Zhu

We conjecture that the key to defending against model inversion and GAN-based attacks lies not in differential privacy itself but in the perturbation of the training data.

Tasks: BIG-bench Machine Learning, Federated Learning (+2 more)
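As a minimal sketch of the "perturbation of training data" idea that this conjecture points to, one could add calibrated Gaussian noise to training inputs before fitting a generative model; the noise scale and data shapes below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def perturb_training_data(x, sigma=0.1, seed=0):
    """Add i.i.d. Gaussian noise to training inputs (illustrative only)."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=x.shape)

# Usage: perturb a toy training set before fitting a generative model.
x_train = np.random.rand(128, 32 * 32)  # 128 flattened toy "images"
x_noisy = perturb_training_data(x_train, sigma=0.1)
```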

Generating 3D Adversarial Point Clouds

2 code implementations • CVPR 2019 • Chong Xiang, Charles R. Qi, Bo Li

Deep neural networks are known to be vulnerable to adversarial examples, carefully crafted instances designed to cause models to make wrong predictions.

Tasks: 3D Shape Classification, Autonomous Driving
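As an illustration of how such point-cloud adversarial examples can be crafted, here is a PGD-style coordinate-perturbation sketch in PyTorch; the model interface, step size, and perturbation budget are assumptions, and the paper also proposes point-generation attacks that are not shown here.

```python
import torch
import torch.nn.functional as F

def perturb_point_cloud(model, points, label, eps=0.05, steps=10, lr=0.01):
    """Gradient-based perturbation of point coordinates (illustrative).

    points: (N, 3) tensor of xyz coordinates; model maps (1, N, 3) -> logits.
    Repeatedly nudges the coordinates to increase the loss on the true label,
    keeping the perturbation inside an L-infinity ball of radius eps.
    """
    delta = torch.zeros_like(points, requires_grad=True)
    for _ in range(steps):
        logits = model((points + delta).unsqueeze(0))
        loss = F.cross_entropy(logits, label.unsqueeze(0))
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()  # ascend the classification loss
            delta.clamp_(-eps, eps)          # keep the shape change subtle
            delta.grad.zero_()
    return (points + delta).detach()
```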
