no code implementations • 14 Apr 2024 • Jiawei Chen, Xiao Yang, Yinpeng Dong, Hang Su, Jianteng Peng, Zhaoxia Yin
Motivated by the rich structural and detailed features of face generative models, we propose FaceCat, which utilizes a face generative model as a pre-trained model to improve the performance of FAS and FAD.
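A minimal sketch of the general pattern this abstract describes: features from a frozen, pre-trained face generative backbone feed lightweight heads for FAS and FAD. The module names and feature dimension below are placeholder assumptions, not FaceCat's actual architecture.

```python
import torch.nn as nn

class GenerativeBackboneDetector(nn.Module):
    """Frozen generative backbone + two small task heads (illustrative only)."""
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone                  # pre-trained face generative model, kept frozen
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.fas_head = nn.Linear(feat_dim, 2)    # live vs. spoof
        self.fad_head = nn.Linear(feat_dim, 2)    # clean vs. adversarial

    def forward(self, x):
        feats = self.backbone(x)                  # assumed to return (B, feat_dim) features
        return self.fas_head(feats), self.fad_head(feats)
```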
no code implementations • 11 Apr 2024 • Zhenzhe Gao, Zhenjun Tang, Zhaoxia Yin, Baoyuan Wu, Yue Lu
Neural networks have increasingly influenced people's lives.
no code implementations • 22 Aug 2023 • Zhenzhe Gao, Zhaoxia Yin, Hongjian Zhan, Heng Yin, Yue Lu
Fragile watermarking is a technique used to identify tampering in AI models.
no code implementations • 4 Aug 2023 • Jiawei Chen, Xiao Yang, Heng Yin, Mingzhi Ma, Bihui Chen, Jianteng Peng, Yandong Guo, Zhaoxia Yin, Hang Su
Ensuring the reliability of face recognition systems against presentation attacks necessitates the deployment of face anti-spoofing techniques.
no code implementations • 13 May 2023 • Zhaoxia Yin, Heng Yin, Hang Su, Xinpeng Zhang, Zhenzhe Gao
Our method has several advantages: (1) samples are iteratively updated in a decision-based black-box manner, relying solely on the predicted probability distribution of the target model, which reduces the risk of exposure to adversarial attacks; (2) the small-amplitude, multi-iteration approach keeps the fragile samples visually close to the originals, with a PSNR of 55 dB on TinyImageNet; (3) the fragile samples can detect changes to the model's parameters as small as 1e-4 in magnitude; and (4) the method is independent of the specific model structure and dataset.
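An illustrative sketch of the idea, not the authors' exact algorithm: a sample is nudged toward the model's decision boundary in small black-box steps, using only the predicted probability distribution, so that even tiny later changes to the model's parameters flip its prediction. The model, input tensor, and step size are placeholder assumptions.

```python
import torch

@torch.no_grad()
def make_fragile_sample(model, x, steps=200, eps=1e-3):
    """Push x toward the boundary between its top-2 classes via random search."""
    model.eval()
    x_frag = x.clone()
    for _ in range(steps):
        probs = torch.softmax(model(x_frag.unsqueeze(0)), dim=1).squeeze(0)
        top2 = torch.topk(probs, k=2)
        margin = (top2.values[0] - top2.values[1]).item()
        if margin < 1e-3:            # already near the boundary -> fragile enough
            break
        # try a small random perturbation; keep it only if it shrinks the margin
        candidate = (x_frag + eps * torch.randn_like(x_frag)).clamp(0, 1)
        c_probs = torch.softmax(model(candidate.unsqueeze(0)), dim=1).squeeze(0)
        c_top2 = torch.topk(c_probs, k=2)
        if (c_top2.values[0] - c_top2.values[1]).item() < margin:
            x_frag = candidate
    return x_frag
```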
no code implementations • 8 May 2023 • Zhaoxia Yin, Shaowei Zhu, Hang Su, Jianteng Peng, Wanli Lyu, Bin Luo
However, numerous studies have shown that previous methods detect or defend against only certain attacks, which renders them ineffective in the face of the latest unknown attack methods.
no code implementations • 16 Dec 2022 • Shaowei Zhu, Wanli Lyu, Bin Li, Zhaoxia Yin, Bin Luo
In addition, the proposed method does not modify the task model and can be used as a preprocessing module, which significantly reduces the deployment cost in practical applications.
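A minimal sketch of this deployment pattern: the defense wraps a frozen, unmodified task model and only transforms inputs before inference. The bit-depth-reduction transform here is a stand-in assumption, not the paper's actual preprocessing step.

```python
import torch
import torch.nn as nn

class PreprocessDefense(nn.Module):
    """Plug-in preprocessing wrapper; the task model itself is left untouched."""
    def __init__(self, task_model, bits=4):
        super().__init__()
        self.task_model = task_model.eval()
        self.levels = 2 ** bits

    def forward(self, x):
        # quantize pixel values to suppress small adversarial perturbations
        x_clean = torch.round(x * (self.levels - 1)) / (self.levels - 1)
        return self.task_model(x_clean)
```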
no code implementations • 16 Aug 2022 • Zhaoxia Yin, Heng Yin, Xinpeng Zhang
In the watermarking process, we train a generative model with a specific loss function and a secret key to generate triggers that are sensitive to fine-tuning of the target classifier.
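A hedged sketch of the trigger-generation idea: a generator, seeded by a secret key, is trained so that its outputs are correctly verified by the frozen target classifier yet sit close to its decision boundary, making them sensitive to fine-tuning. The loss terms and names below are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def trigger_loss(logits, key_labels):
    """Encourage key-consistent predictions with a small top-2 margin."""
    probs = F.softmax(logits, dim=1)
    top2 = torch.topk(probs, k=2, dim=1).values
    margin = (top2[:, 0] - top2[:, 1]).mean()        # small margin -> sensitive to fine-tuning
    fidelity = F.cross_entropy(logits, key_labels)   # still verifiable with the secret-key labels
    return fidelity + margin

# training-loop sketch (generator, classifier, and key-derived latents are placeholders):
# z = torch.randn(batch, latent_dim, generator=torch.Generator().manual_seed(secret_key))
# triggers = generator(z); loss = trigger_loss(classifier(triggers), key_labels); loss.backward()
```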
no code implementations • 22 Feb 2022 • Qingyu Wang, Guorui Feng, Zhaoxia Yin, Bin Luo
First, the former is used to generate the UAP, as it can better learn the distribution of perturbations; the latter is then used to locate the sensitive regions attended to by the RSI classification model.
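An illustrative sketch of how a universal adversarial perturbation (UAP) can be combined with such a sensitivity map: the perturbation is applied only inside the regions the classifier attends to. Both `uap` and `attention_map` are placeholders for whatever the two components in the paper actually produce.

```python
import torch

def apply_regional_uap(images, uap, attention_map, threshold=0.5):
    """images: (B,C,H,W); uap: (C,H,W); attention_map: (B,1,H,W) in [0,1]."""
    mask = (attention_map > threshold).float()        # keep only the sensitive regions
    return (images + mask * uap.unsqueeze(0)).clamp(0, 1)
```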
no code implementations • 6 Oct 2021 • Li Chen, Shaowei Zhu, Zhaoxia Yin
Adding perturbations to images can mislead classification models to produce incorrect results.
no code implementations • 19 Jan 2021 • Jie Wang, Zhaoxia Yin, Jin Tang, Jing Jiang, Bin Luo
The studies on black-box adversarial attacks have become increasingly prevalent due to the intractable acquisition of the structural knowledge of deep neural networks (DNNs).
no code implementations • ICML Workshop AML 2021 • Jie Wang, Zhaoxia Yin, Jing Jiang, Yang Du
In this paper, we propose an attention-guided black-box adversarial attack based on large-scale multiobjective evolutionary optimization, termed LMOA.
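A much simplified sketch of the population-based, black-box search idea behind such an attack: candidate perturbations restricted to attention regions are evolved under two objectives, a low true-class probability and a small perturbation norm. The scalarized selection below is a simplification of genuine multiobjective machinery, and all names are assumptions.

```python
import torch

@torch.no_grad()
def evolve_attack(model, x, label, mask, pop=20, gens=50, sigma=0.05):
    """Black-box evolutionary search for a masked adversarial perturbation."""
    model.eval()
    population = [sigma * torch.randn_like(x) * mask for _ in range(pop)]
    scored = []
    for _ in range(gens):
        scored = []
        for delta in population:
            probs = torch.softmax(model((x + delta).clamp(0, 1).unsqueeze(0)), dim=1)[0]
            # objective 1: true-class probability; objective 2: perturbation size
            scored.append((probs[label].item() + 0.1 * delta.norm().item(), delta))
        scored.sort(key=lambda t: t[0])
        parents = [d for _, d in scored[: pop // 2]]
        children = [p + sigma * torch.randn_like(p) * mask for p in parents]
        population = parents + children
    return (x + scored[0][1]).clamp(0, 1)
```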
no code implementations • 6 Nov 2019 • Zhaoxia Yin, Hua Wang, Li Chen, Jie Wang, Weiming Zhang
To prevent illegal or unauthorized access to image data such as human faces, and to ensure that legitimate users can use authorization-protected data, the reversible adversarial attack technique has emerged.
no code implementations • 15 May 2019 • Hua Wang, Jie Wang, Zhaoxia Yin
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are generated by imposing subtle perturbations on inputs that lead a model to predict incorrect outputs.
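A minimal illustration of this vulnerability using the well-known FGSM attack (Goodfellow et al.), shown only as background rather than as this paper's method: a perturbation bounded by eps in the direction of the loss gradient is often enough to change a model's prediction.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """x: (C,H,W) image in [0,1]; label: scalar LongTensor of the true class."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    # step in the sign of the gradient to maximally increase the loss
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```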