no code implementations • 18 Apr 2024 • Shouwei Ruan, Yinpeng Dong, Hanqing Liu, Yao Huang, Hang Su, Xingxing Wei
Vision-Language Pre-training (VLP) models like CLIP have achieved remarkable success in computer vision and, in particular, have demonstrated strong robustness to distribution shifts of 2D images.
no code implementations • 15 Dec 2023 • Yao Huang, Yinpeng Dong, Shouwei Ruan, Xiao Yang, Hang Su, Xingxing Wei
However, the field of transferable targeted 3D adversarial attacks remains unexplored.
1 code implementation • 21 Jul 2023 • Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei
Experimental results show that VIAT significantly improves the viewpoint robustness of various image classifiers by leveraging the diverse adversarial viewpoints generated by GMVFool.
1 code implementation • ICCV 2023 • Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei
Visual recognition models are not invariant to viewpoint changes in the 3D world, as different viewing directions can dramatically affect the predictions given the same object.
1 code implementation • 28 Jun 2023 • Xingxing Wei, Shouwei Ruan, Yinpeng Dong, Hang Su
In this paper, we propose the Distribution-Optimized Adversarial Patch (DOPatch), a novel method that optimizes a multimodal distribution of adversarial locations instead of individual ones.
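The core idea here, optimizing a distribution over patch locations rather than a single location, can be illustrated with a toy sketch. Everything below (the loss surface, the two-component Gaussian mixture, the score-function update) is an illustrative assumption, not the paper's actual DOPatch procedure:

```python
import math
import random

# Toy "vulnerability" landscape: the loss a classifier suffers when an
# adversarial patch is placed at (x, y). Two peaks stand in for two
# distinct vulnerable regions, which a single-location search could miss.
def patch_loss(x, y):
    return (math.exp(-((x - 2) ** 2 + (y - 2) ** 2) / 4)
            + 0.8 * math.exp(-((x + 2) ** 2 + (y + 2) ** 2) / 4))

# A crude multimodal location distribution: two Gaussian components with
# learnable means and a fixed standard deviation (updated independently).
means = [[0.5, 0.5], [-0.5, -0.5]]
sigma = 1.0
random.seed(0)

for step in range(300):
    for m in means:
        # Score-function (REINFORCE-style) estimate of the gradient of
        # the expected patch loss w.r.t. this component's mean.
        gx = gy = 0.0
        n = 32
        for _ in range(n):
            sx = random.gauss(m[0], sigma)
            sy = random.gauss(m[1], sigma)
            w = patch_loss(sx, sy)
            gx += w * (sx - m[0]) / sigma ** 2
            gy += w * (sy - m[1]) / sigma ** 2
        m[0] += 0.05 * gx / n  # gradient ascent on expected loss
        m[1] += 0.05 * gy / n
```

After a few hundred steps each component's mean drifts toward a high-loss region, so sampling from the mixture yields a diverse set of effective patch locations instead of one.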
no code implementations • 15 Jun 2023 • Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su, Xingxing Wei
In this paper, we propose DIFFender, a novel defense method that leverages a text-guided diffusion model to defend against adversarial patches.
1 code implementation • 8 Oct 2022 • Yinpeng Dong, Shouwei Ruan, Hang Su, Caixin Kang, Xingxing Wei, Jun Zhu
Recent studies have demonstrated that visual recognition models lack robustness to distribution shifts.