Search Results for author: Ziwen He

Found 7 papers, 2 papers with code

Counterfactual Explanations for Face Forgery Detection via Adversarial Removal of Artifacts

2 code implementations • 12 Apr 2024 • Yang Li, Songlin Yang, Wei Wang, Ziwen He, Bo Peng, Jing Dong

We verify the effectiveness of the proposed explanations from two aspects: (1) Counterfactual Trace Visualization: the enhanced forgery images help reveal artifacts when visually contrasted with the original images under two different visualization methods; (2) Transferable Adversarial Attacks: the adversarial forgery images generated by attacking one detection model can mislead other detection models, implying that the removed artifacts are general (a transferability check is sketched below).

Adversarial Attack • Counterfactual
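
To make the transferability claim concrete, here is a minimal sketch, assuming two hypothetical pretrained forgery detectors `source_model` and `target_model` and a one-step FGSM perturbation; the paper's artifact-removal attack itself is more elaborate.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=4/255):
    """One-step FGSM: nudge pixels along the loss gradient to fool the detector."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()

# Hypothetical usage: craft examples on source_model, measure transfer.
# adv = fgsm_attack(source_model, fake_images, fake_labels)
# transfer_rate = (target_model(adv).argmax(1) != fake_labels).float().mean()
```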

Is It Possible to Backdoor Face Forgery Detection with Natural Triggers?

no code implementations • 31 Dec 2023 • Xiaoxuan Han, Songlin Yang, Wei Wang, Ziwen He, Jing Dong

To further investigate natural triggers, we propose a novel analysis-by-synthesis backdoor attack against face forgery detection models, which embeds natural triggers in the latent space (a poisoning sketch follows below).

Backdoor Attack • Backdoor Defense
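
As a rough illustration of embedding a trigger in latent space (not the paper's exact analysis-by-synthesis pipeline), the sketch below assumes a hypothetical pretrained face generator `G`; shifting latents along a fixed direction yields a natural-looking trigger, and poisoned samples are relabeled to the attacker's target class.

```python
import torch

def poison_batch(G, z, trigger_direction, target_label, alpha=0.5):
    """Shift latents along a fixed direction, then relabel to the target class.

    G: hypothetical generator mapping latent codes z to face images.
    trigger_direction: fixed latent-space vector acting as the natural trigger.
    """
    z_trig = z + alpha * trigger_direction   # trigger lives in latent space
    images = G(z_trig)                       # surfaces as a natural attribute
    labels = torch.full((z.size(0),), target_label, dtype=torch.long)
    return images, labels

# Training on a mix of clean and poisoned batches teaches the detector to
# associate the natural trigger with target_label.
```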

3D-Aware Adversarial Makeup Generation for Facial Privacy Protection

no code implementations • 26 Jun 2023 • Yueming Lyu, Yue Jiang, Ziwen He, Bo Peng, Yunfan Liu, Jing Dong

The privacy and security of face data on social media are facing unprecedented challenges, as such data is vulnerable to unauthorized access and identification.

Face Recognition • Face Verification

Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models

no code implementations • 30 May 2022 • Songlin Yang, Wei Wang, Chenye Xu, Ziwen He, Bo Peng, Jing Dong

These fine-grained adversarial examples can be used for selecting robust backbone networks and auxiliary features (a robustness-evaluation sketch follows below).

Adversarial Attack • Adversarial Robustness +1
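
One plausible way to use adversarial examples for backbone selection, sketched under the assumption of a standard untargeted L-inf PGD attack and a hypothetical dict of candidate anti-spoofing backbones; the paper's fine-grained perturbation analysis is not reproduced here.

```python
import torch
import torch.nn.functional as F

def pgd_accuracy(model, x, y, eps=8/255, step=2/255, iters=10):
    """Accuracy that survives an untargeted L-inf PGD attack."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    with torch.no_grad():
        return (model(x_adv).argmax(1) == y).float().mean().item()

# Hypothetical selection loop over candidate backbones:
# for name, net in {"resnet18": m1, "mobilenetv2": m2}.items():
#     print(name, pgd_accuracy(net, images, labels))
```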

Transferable Sparse Adversarial Attack

2 code implementations • CVPR 2022 • Ziwen He, Wei Wang, Jing Dong, Tieniu Tan

Experiments show that our method improves transferability by a large margin under a similar sparsity setting compared with state-of-the-art methods (a sparse-perturbation sketch follows below).

Adversarial Attack • Quantization
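
As an illustration of sparsity in this setting (a simplified stand-in, not the paper's generator-based method), the sketch below enforces an L0 budget by perturbing only the k input coordinates with the largest gradient magnitude.

```python
import torch
import torch.nn.functional as F

def sparse_attack(model, x, y, k=100, eps=1.0):
    """Perturb only the k coordinates per sample with the largest gradients."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    g = x.grad.view(x.size(0), -1)
    # Top-k mask enforces the L0 sparsity constraint.
    idx = g.abs().topk(k, dim=1).indices
    mask = torch.zeros_like(g).scatter_(1, idx, 1.0)
    delta = (eps * g.sign() * mask).view_as(x)
    return (x + delta).clamp(0, 1).detach()
```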

Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition

no code implementations • 22 Feb 2020 • Ziwen He, Wei Wang, Jing Dong, Tieniu Tan

In this paper, we demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks (a frame-sparse perturbation sketch follows below).

Adversarial Attack • Gait Recognition +1
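
To illustrate the temporal-sparsity idea (a simplified stand-in for the paper's attack), this sketch perturbs only the few frames of a silhouette sequence with the largest gradient energy; `gait_model` is a hypothetical sequence classifier over (B, T, H, W) inputs.

```python
import torch
import torch.nn.functional as F

def temporal_sparse_attack(gait_model, seq, label, n_frames=3, eps=0.1):
    """Perturb only n_frames frames of the (B, T, H, W) silhouette sequence."""
    seq = seq.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(gait_model(seq), label)
    loss.backward()
    grad = seq.grad
    # Rank frames by gradient energy; mask out all but the top n_frames.
    frame_energy = grad.pow(2).flatten(2).sum(-1)        # (B, T)
    top = frame_energy.topk(n_frames, dim=1).indices
    mask = torch.zeros_like(frame_energy).scatter_(1, top, 1.0)
    delta = eps * grad.sign() * mask[..., None, None]    # broadcast over H, W
    return (seq + delta).clamp(0, 1).detach()
```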

A New Ensemble Method for Concessively Targeted Multi-model Attack

no code implementations • 19 Dec 2019 • Ziwen He, Wei Wang, Xinsheng Xuan, Jing Dong, Tieniu Tan

Thus, in this paper, we propose a new attack mechanism that falls back to a non-targeted attack when the targeted attack fails (a fallback sketch follows below).

Image Classification
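
A minimal sketch of the concessive fallback as the snippet describes it: run a targeted attack against an ensemble first, and settle for an untargeted attack if the target class is not reached. `targeted_attack` and `untargeted_attack` are hypothetical attack routines; the ensemble here is a simple logit average.

```python
import torch

def ensemble_logits(models, x):
    """Average logits across the victim models."""
    return torch.stack([m(x) for m in models]).mean(0)

def concessive_attack(models, x, y_true, y_target,
                      targeted_attack, untargeted_attack):
    """Try the targeted attack; concede to a non-targeted one if it fails."""
    forward = lambda v: ensemble_logits(models, v)
    x_adv = targeted_attack(forward, x, y_target)
    if forward(x_adv).argmax(1).eq(y_target).all():
        return x_adv                       # targeted success on the ensemble
    # Concession: any misclassification counts.
    return untargeted_attack(forward, x, y_true)
```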
