1 code implementation • 19 Mar 2025 • Yuchen Ren, Zhengyu Zhao, Chenhao Lin, Bo Yang, Lu Zhou, Zhe Liu, Chao Shen
However, existing work on ViTs has restricted its surrogate refinement to backward propagation.
1 code implementation • 15 Mar 2025 • Chenhao Lin, Chenyang Zhao, Shiwei Wang, Longtian Wang, Chao Shen, Zhengyu Zhao
Backdoor attacks typically place a specific trigger on certain training data, such that the model makes prediction errors on inputs with that trigger during inference.
1 code implementation • 5 Mar 2025 • Songlong Xing, Zhengyu Zhao, Nicu Sebe
Our paradigm is simple and training-free, providing the first method to defend CLIP against adversarial attacks at test time; it is orthogonal to existing methods that aim to boost CLIP's zero-shot adversarial robustness.
1 code implementation • 25 Dec 2024 • Yuchen Ren, Zhengyu Zhao, Chenhao Lin, Bo Yang, Lu Zhou, Zhe Liu, Chao Shen
We propose the Multiple Monotonic Diversified Integrated Gradients (MuMoDIG) attack, which can generate highly transferable adversarial examples on different CNN and ViT models and defenses.
1 code implementation • 18 Dec 2024 • Le Yang, Ziwei Zheng, Boxu Chen, Zhengyu Zhao, Chenhao Lin, Chao Shen
By orthogonalizing the model weights, input features are projected into the null space of the HalluSpace to reduce OH, which is why we name our method Nullu.
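The weight-editing step can be illustrated with a minimal sketch (not the authors' released code), assuming an orthonormal basis of the estimated HalluSpace is already available; all names, shapes, and the toy data below are illustrative:

import torch

def null_space_project(weight: torch.Tensor, hallu_basis: torch.Tensor) -> torch.Tensor:
    """Edit a linear layer's weight so that input components lying in the span
    of `hallu_basis` (the estimated 'HalluSpace') are removed.

    weight:      (out_dim, in_dim) matrix of an nn.Linear layer (W @ x convention).
    hallu_basis: (in_dim, k) matrix whose orthonormal columns span the HalluSpace.
    """
    in_dim = weight.shape[1]
    eye = torch.eye(in_dim, device=weight.device, dtype=weight.dtype)
    # Projector onto the orthogonal complement (null space) of the HalluSpace.
    null_proj = eye - hallu_basis @ hallu_basis.T
    return weight @ null_proj

# Toy usage: estimate a basis from "hallucinated minus truthful" feature differences,
# then edit the weight. Shapes and data are purely illustrative.
torch.manual_seed(0)
feat_diff = torch.randn(64, 256)                 # 64 difference vectors, 256-dim features
U, S, Vh = torch.linalg.svd(feat_diff, full_matrices=False)
basis = Vh[:8].T                                 # top-8 right singular vectors, (256, 8)
W = torch.randn(1024, 256)                       # weight of a 256 -> 1024 linear layer
W_edited = null_space_project(W, basis)
# Any input component inside the HalluSpace is now mapped to (numerically) zero.
x_in_space = basis @ torch.randn(8)
print(torch.norm(W_edited @ x_in_space).item())  # ~0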
1 code implementation • 5 Dec 2024 • Zhizhen Chen, Subrat Kishore Dutta, Zhengyu Zhao, Chenhao Lin, Chao Shen, Xiao Zhang
In a common clean-label setting, they are achieved by slightly perturbing a subset of training samples given access to those specific targets.
1 code implementation • 27 Aug 2024 • Hamid Bostani, Zhengyu Zhao, Veelasha Moonsamy
In particular, our defense can improve adversarial robustness by up to 55% against realistic evasion attacks compared to Sec-SVM.
1 code implementation • 21 Aug 2024 • Weipeng Jiang, Zhenting Wang, Juan Zhai, Shiqing Ma, Zhengyu Zhao, Chao Shen
Moreover, ECLIPSE is on par with template-based methods in ASR while offering superior attack efficiency, reducing the average attack overhead by 83%.
no code implementations • 15 Jul 2024 • Jingyi Deng, Chenhao Lin, Zhengyu Zhao, Shuai Liu, Qian Wang, Chao Shen
Deep generative models have demonstrated impressive performance in various computer vision applications, including image synthesis, video generation, and medical analysis.
no code implementations • 9 Jun 2024 • Chen Ma, Ningfei Wang, Zhengyu Zhao, Qian Wang, Qi Alfred Chen, Chao Shen
Extensive evaluations demonstrate the superior performance of ControlLoc, achieving an impressive average attack success rate of around 98.1% across various AD visual perceptions and datasets, which is four times more effective than the existing hijacking attack.
no code implementations • 9 Jun 2024 • Chen Ma, Ningfei Wang, Zhengyu Zhao, Qi Alfred Chen, Chao Shen
Additionally, we conduct AD system-level impact assessments, such as vehicle collisions, on industry-grade AD systems with production-grade AD simulators, achieving a 97% average rate.
1 code implementation • CVPR 2024 • Junhao Zheng, Chenhao Lin, Jiahao Sun, Zhengyu Zhao, Qian Li, Chao Shen
Deep learning-based monocular depth estimation (MDE), extensively applied in autonomous driving, is known to be vulnerable to adversarial attacks.
no code implementations • 27 Feb 2024 • Bo Yang, Hengwei Zhang, Jindong Wang, Yulong Yang, Chenhao Lin, Chao Shen, Zhengyu Zhao
Transferable adversarial examples pose practical security risks since they can mislead a target model without any knowledge of its internals.
1 code implementation • 12 Dec 2023 • Qiwei Tian, Chenhao Lin, Zhengyu Zhao, Qian Li, Chao Shen
Furthermore, CA prevents the consequent model collapse by means of a novel metric, collapseness, which is incorporated into the perturbation optimization.
1 code implementation • 18 Oct 2023 • Zhengyu Zhao, Hanwei Zhang, Renjue Li, Ronan Sicre, Laurent Amsaleg, Michael Backes, Qi Li, Chao Shen
Transferable adversarial examples raise critical security concerns in real-world, black-box attack scenarios.
no code implementations • 11 Oct 2023 • Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang
Specifically, the VPPTaaS provider optimizes a visual prompt given downstream data, and downstream users can use this prompt together with the large pre-trained model for prediction.
1 code implementation • 11 Oct 2023 • Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang
Such a Composite Backdoor Attack (CBA) is shown to be stealthier than implanting the same multiple trigger keys in only a single component.
no code implementations • 3 Sep 2023 • Weijie Wang, Zhengyu Zhao, Nicu Sebe, Bruno Lepri
Although effective deepfake detectors have been proposed, they are substantially vulnerable to adversarial attacks.
no code implementations • 13 Jun 2023 • Yihan Ma, Zhengyu Zhao, Xinlei He, Zheng Li, Michael Backes, Yang Zhang
In particular, to help the watermark survive the subject-driven synthesis, we incorporate the synthesis process in learning GenWatermark by fine-tuning the detector with synthesized images for a specific subject.
1 code implementation • 31 Jan 2023 • Zhuoran Liu, Zhengyu Zhao, Martha Larson
Perturbative availability poisons (PAPs) add small changes to images to prevent their use for model training.
1 code implementation • 17 Nov 2022 • Zhengyu Zhao, Hanwei Zhang, Renjue Li, Ronan Sicre, Laurent Amsaleg, Michael Backes
In this work, we design good practices to address these limitations, and we present the first comprehensive evaluation of transfer attacks, covering 23 representative attacks against 9 defenses on ImageNet.
1 code implementation • 2 Nov 2022 • Dirren van Vlijmen, Alex Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson
We introduce ShortcutGen, a new data poisoning attack that generates sample-dependent, error-minimizing perturbations by learning a generator.
1 code implementation • 31 Aug 2022 • Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang
Machine learning models are vulnerable to membership inference attacks in which an adversary aims to predict whether or not a particular sample was contained in the target model's training dataset.
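For orientation, the following is a minimal loss-thresholding baseline for membership inference, not the specific attack proposed in this paper; the model and threshold are placeholders that would normally be calibrated with shadow models or held-out data:

import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mia(model, samples, labels, threshold):
    """Generic membership-inference baseline: predict 'member' when the target
    model's loss on a sample is below a threshold, since models tend to fit
    their training data more tightly than unseen data.
    """
    model.eval()
    logits = model(samples)
    losses = F.cross_entropy(logits, labels, reduction="none")
    return losses < threshold          # True = predicted member

# Toy usage with a hypothetical model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.randn(16, 3, 32, 32)
y = torch.randint(0, 10, (16,))
print(loss_threshold_mia(model, x, y, threshold=1.0))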
1 code implementation • 3 Jun 2022 • Zhengyu Zhao, Nga Dang, Martha Larson
In this paper, we propose that adversarial images should be evaluated based on semantic mismatch, rather than label mismatch, as used in current work.
1 code implementation • 30 May 2022 • Hamid Bostani, Zhengyu Zhao, Zhuoran Liu, Veelasha Moonsamy
The primary approach to identifying vulnerable regions involves investigating realizable adversarial examples (AEs), but generating such feasible apps is challenging.
1 code implementation • 25 Nov 2021 • Zhuoran Liu, Zhengyu Zhao, Alex Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson
Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e., images whose content cannot be used to improve a classifier during training.
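For context, here is a minimal sketch of the error-minimizing perturbation idea behind ULEs, drawn from the prior work this paper studies rather than this paper's own contribution; the (partially trained) model and the hyperparameters are assumptions:

import torch
import torch.nn.functional as F

def error_minimizing_delta(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Sketch of error-minimizing perturbations: find a small delta that
    MINIMIZES the training loss, so the perturbed images look 'already learned'
    and contribute little useful signal during training.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta -= alpha * grad.sign()          # descend: make the loss smaller
            delta.clamp_(-eps, eps)               # keep the change imperceptible
    return (x + delta).clamp(0, 1).detach()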
4 code implementations • NeurIPS 2021 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In particular, we, for the first time, identify that a simple logit loss can yield results competitive with the state of the art.
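The logit loss in question simply maximizes the target-class logit instead of a cross-entropy term. A minimal targeted I-FGSM sketch under that loss follows; the hyperparameters (epsilon, step size, iteration count) are illustrative:

import torch

def targeted_logit_attack(model, x, target, eps=16/255, alpha=2/255, steps=300):
    """Targeted iterative FGSM using the simple logit loss: maximize the logit
    of the target class (no softmax, no cross-entropy).
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        loss = logits.gather(1, target.unsqueeze(1)).sum()   # sum of target-class logits
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend on the target logit
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # L_inf projection
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()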
1 code implementation • 12 Nov 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In particular, our color filter space is explicitly specified so that we are able to provide a systematic analysis of model robustness against adversarial color transformations, from both the attack and defense perspectives.
1 code implementation • EMNLP 2020 • Haoyu Song, Yan Wang, Wei-Nan Zhang, Zhengyu Zhao, Ting Liu, Xiaojiang Liu
Maintaining a consistent attribute profile is crucial for dialogue agents to naturally converse with humans.
1 code implementation • 3 Feb 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
We introduce an approach that enhances images using a color filter in order to create adversarial effects, which fool neural networks into misclassification.
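A simplified sketch of this idea is given below, using a per-channel gamma-style curve as a stand-in for the paper's actual filter parameterization; the classifier, step count, and learning rate are placeholders:

import torch
import torch.nn.functional as F

def adversarial_color_filter(model, x, label, steps=50, lr=0.05):
    """Optimize a differentiable per-channel color curve so the filtered image
    fools the classifier. The filter here (a per-channel gamma-like power curve)
    is a simplification for illustration only.
    """
    # One learnable exponent per color channel, initialized to the identity filter.
    log_gamma = torch.zeros(1, x.shape[1], 1, 1, requires_grad=True)
    optimizer = torch.optim.Adam([log_gamma], lr=lr)
    for _ in range(steps):
        filtered = x.clamp(1e-4, 1).pow(log_gamma.exp())      # smooth, content-preserving recoloring
        loss = -F.cross_entropy(model(filtered), label)       # untargeted: push away from true label
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return x.clamp(1e-4, 1).pow(log_gamma.exp().detach())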
2 code implementations • CVPR 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
The success of image perturbations that are designed to fool image classifiers is assessed in terms of both adversarial effect and visual imperceptibility.
1 code implementation • 29 Jan 2019 • Zhuoran Liu, Zhengyu Zhao, Martha Larson
An adversarial query is an image that has been modified to disrupt content-based image retrieval (CBIR) while appearing nearly untouched to the human eye.
1 code implementation • 23 Jul 2018 • Zhengyu Zhao, Martha Larson
As deep learning approaches to scene recognition emerge, they have continued to leverage discriminative regions at multiple scales, building on practices established by conventional image classification research.