no code implementations • 11 Dec 2023 • Shangbo Wu, Yu-an Tan, Yajie Wang, Ruinan Ma, Wencong Ma, Yuanzhang Li
To this end, we propose a transferable adversarial attack with fine-grained perturbation optimization in the frequency domain, creating a centralized perturbation.
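As a rough illustration of the idea, the sketch below concentrates a perturbation onto its dominant DCT coefficients; the function name, keep ratio, and thresholding rule are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: concentrating an adversarial perturbation onto
# dominant frequency components via a 2-D DCT. Names are illustrative.
import numpy as np
from scipy.fft import dctn, idctn

def centralize_perturbation(delta: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Zero all but the largest-magnitude DCT coefficients of a
    single-channel perturbation map, then transform back."""
    coeffs = dctn(delta, norm="ortho")
    k = max(1, int(keep_ratio * coeffs.size))
    # Threshold at the k-th largest absolute coefficient.
    thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]
    mask = np.abs(coeffs) >= thresh
    return idctn(coeffs * mask, norm="ortho")

delta = np.random.uniform(-8 / 255, 8 / 255, size=(224, 224))
delta_centralized = centralize_perturbation(delta)
```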
no code implementations • 14 Oct 2023 • Ruinan Ma, Yu-an Tan, Shangbo Wu, Tian Chen, Yajie Wang, Yuanzhang Li
In the first stage, we use an encoder to invisibly embed the watermark image into the output images of the original AIGC tool, and recover the watermark image through the corresponding decoder.
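A minimal PyTorch sketch of that encode/decode round trip follows; the layer shapes, residual scaling, and module names are placeholder assumptions, not the paper's architecture.

```python
# Illustrative encode/decode round trip: an encoder hides a watermark in
# an image, a decoder recovers it. All layer sizes are placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: cover image (3 ch) concatenated with watermark (1 ch).
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, watermark):
        residual = self.net(torch.cat([image, watermark], dim=1))
        return (image + 0.01 * residual).clamp(0, 1)  # near-invisible change

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, stego):
        return self.net(stego)

image = torch.rand(1, 3, 256, 256)
watermark = torch.rand(1, 1, 256, 256)
stego = Encoder()(image, watermark)
recovered = Decoder()(stego)
```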
no code implementations • 12 Oct 2023 • Ruinan Ma, Canjie Zhu, Mingfeng Lu, Yunjie Li, Yu-an Tan, Ruibin Zhang, Ran Tao
We first propose an attack pipeline for the time-frequency image scenario, together with a highly transferable attack algorithm, DITIMI-FGSM.
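Assuming DITIMI-FGSM abbreviates the standard diverse-input (DI) + translation-invariant (TI) + momentum-iterative (MI) FGSM combination, one attack step might look like the sketch below; all names and hyperparameters are illustrative, and square inputs are assumed.

```python
# Hedged sketch of one DI-TI-MI-FGSM step. `model` returns logits.
import torch
import torch.nn.functional as F

def di_transform(x, low=0.9, prob=0.7):
    """Diverse inputs: random resize then random pad back to original size."""
    if torch.rand(1).item() > prob:
        return x
    h = x.shape[-1]
    new = int(h * (low + (1.0 - low) * torch.rand(1).item()))
    x_small = F.interpolate(x, size=new, mode="nearest")
    pad = h - new
    left = int(torch.randint(0, pad + 1, (1,)).item())
    top = int(torch.randint(0, pad + 1, (1,)).item())
    return F.pad(x_small, (left, pad - left, top, pad - top))

def ditimi_fgsm_step(model, x_adv, y, momentum, ti_kernel,
                     step=2 / 255, mu=1.0):
    x_adv = x_adv.clone().requires_grad_(True)
    loss = F.cross_entropy(model(di_transform(x_adv)), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # TI: convolve the gradient with a smoothing kernel, per channel.
    grad = F.conv2d(grad, ti_kernel, padding=ti_kernel.shape[-1] // 2, groups=3)
    # MI: accumulate the L1-normalized gradient into the momentum buffer.
    momentum = mu * momentum + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
    return (x_adv + step * momentum.sign()).detach(), momentum

# A uniform 7x7 kernel stands in for the Gaussian kernel used by TI-FGSM.
ti_kernel = torch.ones(3, 1, 7, 7) / 49.0
```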
no code implementations • 10 Jun 2022 • Nan Luo, Yuanzhang Li, Yajie Wang, Shangbo Wu, Yu-an Tan, Quanxin Zhang
Clean-label settings make the attack stealthier, since image-label pairs remain correct, but two problems persist: first, traditional methods for poisoning training data are ineffective; second, traditional triggers are not stealthy and remain perceptible.
no code implementations • 13 May 2022 • Shuhao Li, Yajie Wang, Yuanzhang Li, Yu-an Tan
We name our attack l-Leaks.
1 code implementation • 27 Apr 2022 • Huipeng Zhou, Yu-an Tan, Yajie Wang, Haoran Lyu, Shangbo Wu, Yuanzhang Li
We attack the unique self-attention mechanism in ViTs by restructuring the embedded patches of the input.
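One plausible reading of "restructuring the embedded patches" is permuting the patch tokens before self-attention while computing attack gradients; the speculative sketch below does exactly that and is not the paper's exact method.

```python
# Speculative sketch: permuting a ViT's embedded patch tokens so the
# self-attention pattern computed during the attack is disrupted.
import torch

def shuffle_patch_tokens(tokens: torch.Tensor) -> torch.Tensor:
    """tokens: (batch, num_patches, dim) patch embeddings (no [CLS])."""
    perm = torch.randperm(tokens.shape[1], device=tokens.device)
    return tokens[:, perm]

# e.g. on a timm ViT, where patch_embed emits patch tokens before [CLS]
# is prepended; returning a value from the hook replaces the output:
# vit.patch_embed.register_forward_hook(
#     lambda mod, inp, out: shuffle_patch_tokens(out))
```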
no code implementations • 26 Apr 2022 • Haoran Lyu, Yajie Wang, Yu-an Tan, Huipeng Zhou, Yuhang Zhao, Quanxin Zhang
Our method masks part of the input to the Mixer layer, preventing adversarial examples from overfitting the source model and improving cross-architecture transferability.
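As a rough sketch of such masking, the snippet below randomly zeroes a fraction of the token rows entering a Mixer block during gradient computation; the hook placement and drop ratio are assumptions, not the paper's settings.

```python
# Illustrative sketch: randomly zeroing a fraction of the token rows that
# enter a Mixer layer while computing adversarial gradients, so the
# perturbation cannot overfit to any fixed token subset.
import torch

def mask_mixer_input(tokens: torch.Tensor, drop_ratio: float = 0.3) -> torch.Tensor:
    """tokens: (batch, num_tokens, channels). Zeros ~drop_ratio of tokens."""
    keep = torch.rand(tokens.shape[:2], device=tokens.device) > drop_ratio
    return tokens * keep.unsqueeze(-1)

# e.g. hook the first block of a timm MLP-Mixer during the attack;
# returning a tuple from a pre-hook replaces the module's inputs:
# mixer.blocks[0].register_forward_pre_hook(
#     lambda mod, inp: (mask_mixer_input(inp[0]),))
```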
1 code implementation • 7 Dec 2021 • Yucheng Shi, Yahong Han, Yu-an Tan, Xiaohui Kuang
On the other hand, existing decision-based attacks neglect differences in noise sensitivity across image regions, which further compromises the efficiency of noise compression, especially for ViTs.
no code implementations • 3 Jul 2021 • Yajie Wang, Shangbo Wu, Wenyi Jiang, Shengang Hao, Yu-an Tan, Quanxin Zhang
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples.
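The canonical construction is FGSM (Goodfellow et al., 2015), sketched below as general background rather than as this paper's method.

```python
# Classic FGSM: a small, sign-of-gradient perturbation that can flip a
# model's prediction. `model` returns logits; eps is the L_inf budget.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```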