2 code implementations • 12 Apr 2024 • Yang Li, Songlin Yang, Wei Wang, Ziwen He, Bo Peng, Jing Dong
We verify the effectiveness of the proposed explanations from two aspects: (1) Counterfactual Trace Visualization: the enhanced forgery images reveal artifacts when visually contrasted with the original images under two different visualization methods; (2) Transferable Adversarial Attacks: the adversarial forgery images generated by attacking the detection model can mislead other detection models, implying that the removed artifacts are general.
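The transferability check in aspect (2) can be sketched with a toy example: craft a perturbation against one model and test whether it also fools a second, unseen model. Everything below (the linear "detectors", their weights, the step size) is an illustrative assumption, not the paper's method.

```python
import numpy as np

# Toy stand-ins for two forgery detectors: linear scorers with similar
# weights (all numbers here are illustrative, not from the paper).
w_src = np.array([1.0, 1.0, 1.0, 1.0])   # white-box model we attack
w_tgt = np.array([1.1, 0.9, 1.0, 1.2])   # unseen model (transfer target)

def predict(w, x):
    return int(w @ x > 0)   # 1 = "fake", 0 = "real"

x = np.array([0.5, 0.5, 0.5, 0.5])       # scored "fake" by both models

# FGSM-style step against the source model only: move each feature
# opposite the sign of the source weights to suppress the "fake" score.
eps = 0.7
x_adv = x - eps * np.sign(w_src)

# The perturbation crafted on w_src also flips w_tgt: it transfers.
print(predict(w_src, x), predict(w_src, x_adv))  # 1 0
print(predict(w_tgt, x), predict(w_tgt, x_adv))  # 1 0
```

Because the two models' decision boundaries are close, the attack direction found on one carries over to the other, which is the intuition behind the transfer experiment.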
no code implementations • 31 Dec 2023 • Xiaoxuan Han, Songlin Yang, Wei Wang, Ziwen He, Jing Dong
To further investigate natural triggers, we propose a novel analysis-by-synthesis backdoor attack against face forgery detection models, which embeds natural triggers in the latent space.
no code implementations • 26 Jun 2023 • Yueming Lyu, Yue Jiang, Ziwen He, Bo Peng, Yunfan Liu, Jing Dong
The privacy and security of face data on social media face unprecedented challenges, as such data are vulnerable to unauthorized access and identification.
no code implementations • 30 May 2022 • Songlin Yang, Wei Wang, Chenye Xu, Ziwen He, Bo Peng, Jing Dong
These fine-grained adversarial examples can be used for selecting robust backbone networks and auxiliary features.
2 code implementations • CVPR 2022 • Ziwen He, Wei Wang, Jing Dong, Tieniu Tan
Experiments show that our method improves transferability by a large margin under similar sparsity settings compared with state-of-the-art methods.
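The paper's specific sparse-attack construction is not reproduced here; as a generic illustration, one common way to impose a sparsity constraint on a perturbation is an l0-style projection that keeps only the most sensitive pixels. The 12-pixel gradient below is made-up data.

```python
import numpy as np

# Toy gradient of a detector's loss w.r.t. a 12-pixel "image"
# (values are illustrative; a real attack would backpropagate these).
grad = np.array([0.9, -0.1, 0.05, -2.0, 0.3, 1.5,
                 -0.02, 0.7, -0.4, 0.08, 2.5, -0.6])

k = 3
# l0-style projection: perturb only the k pixels with the largest
# gradient magnitude, using a sign step of size eps elsewhere zero.
eps = 0.1
mask = np.zeros_like(grad)
keep = np.argsort(np.abs(grad))[-k:]   # indices of the k most sensitive pixels
mask[keep] = 1.0
sparse_delta = eps * np.sign(grad) * mask
```

Only 3 of the 12 entries of `sparse_delta` are nonzero, i.e. 75% of the pixels are left untouched while the attack budget concentrates where the model is most sensitive.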
no code implementations • 22 Feb 2020 • Ziwen He, Wei Wang, Jing Dong, Tieniu Tan
In this paper, we demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
no code implementations • 19 Dec 2019 • Ziwen He, Wei Wang, Xinsheng Xuan, Jing Dong, Tieniu Tan
Thus, in this paper, we propose a new attack mechanism that falls back to a non-targeted attack when the targeted attack fails.
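Such a fallback mechanism can be sketched as: attempt a targeted step within the perturbation budget, and if the prediction does not reach the target class, switch to a non-targeted attack where any misclassification counts. The toy linear classifier and FGSM-style step below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy 3-class linear classifier (weights and biases are illustrative only).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
b = np.array([0.0, 0.0, -5.0])   # class 2 sits far from the input region

def classify(x):
    return int(np.argmax(W @ x + b))

def fgsm_step(x, src, dst, eps):
    # One FGSM-style step pushing the logits from class `src` toward `dst`.
    return x + eps * np.sign(W[dst] - W[src])

x = np.array([1.0, 0.4])
y = classify(x)                  # predicted class 0
eps = 0.4                        # tight perturbation budget

# 1) Targeted attempt: force the hardest class (2) -- fails under this budget.
x_t = fgsm_step(x, y, 2, eps)
targeted_ok = classify(x_t) == 2

# 2) Fallback: non-targeted attack toward the runner-up class.
nontargeted_ok = False
if not targeted_ok:
    runner_up = int(np.argsort(W @ x + b)[-2])
    x_nt = fgsm_step(x, y, runner_up, eps)
    nontargeted_ok = classify(x_nt) != y   # any class change succeeds
```

Here the targeted attack cannot reach class 2 within the budget, but the fallback flips the prediction to the nearby class 1, so the attack still produces a misclassification.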