Search Results for author: Zhaoxia Yin

Found 14 papers, 0 papers with code

FaceCat: Enhancing Face Recognition Security with a Unified Generative Model Framework

no code implementations • 14 Apr 2024 • Jiawei Chen, Xiao Yang, Yinpeng Dong, Hang Su, Jianteng Peng, Zhaoxia Yin

Motivated by the rich structural and detailed features of face generative models, we propose FaceCat, which utilizes the face generative model as a pre-trained model to improve the performance of FAS and FAD.

Face Anti-Spoofing • Face Recognition +1

AdvFAS: A robust face anti-spoofing framework against adversarial examples

no code implementations • 4 Aug 2023 • Jiawei Chen, Xiao Yang, Heng Yin, Mingzhi Ma, Bihui Chen, Jianteng Peng, Yandong Guo, Zhaoxia Yin, Hang Su

Ensuring the reliability of face recognition systems against presentation attacks necessitates the deployment of face anti-spoofing techniques.

Adversarial Defense • Face Anti-Spoofing +1

Decision-based iterative fragile watermarking for model integrity verification

no code implementations • 13 May 2023 • Zhaoxia Yin, Heng Yin, Hang Su, Xinpeng Zhang, Zhenzhe Gao

Our method has several advantages: (1) the iterative update of samples is done in a decision-based black-box manner, relying solely on the predicted probability distribution of the target model, which reduces the risk of exposure to adversarial attacks; (2) the small-amplitude, multi-iteration approach keeps the fragile samples visually close to the originals, with a PSNR of 55 dB on TinyImageNet; (3) even when the model's overall parameters change by a magnitude of only 1e-4, the fragile samples can detect the change; and (4) the method is independent of the specific model structure and dataset.
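
The mechanism described above lends itself to a short illustration. The following is a minimal sketch, not the authors' algorithm: it assumes a hypothetical black-box `predict_proba(image)` callable and iteratively applies small-amplitude perturbations that shrink the gap between the top two predicted probabilities, so the resulting fragile sample stays visually close to the original yet sits near the decision boundary, where small parameter changes flip its label.

```python
import numpy as np

def make_fragile_sample(image, predict_proba, steps=200,
                        step_size=1.0 / 255, max_amp=4.0 / 255, seed=None):
    """Toy decision-based fragile-sample generation (illustrative only).

    image         : float array scaled to [0, 1]
    predict_proba : black-box callable returning a class-probability vector
    """
    rng = np.random.default_rng(seed)
    fragile = image.copy()

    def margin(x):
        p = np.sort(predict_proba(x))[::-1]
        return p[0] - p[1]              # gap between top-1 and top-2 probability

    best = margin(fragile)
    for _ in range(steps):
        # Propose a small-amplitude random perturbation (black-box: no gradients).
        candidate = fragile + step_size * rng.choice([-1.0, 1.0], size=image.shape)
        # Keep the accumulated perturbation small so the sample stays visually clean.
        candidate = np.clip(candidate, image - max_amp, image + max_amp)
        candidate = np.clip(candidate, 0.0, 1.0)
        m = margin(candidate)
        if m < best:                    # closer to the decision boundary = more fragile
            fragile, best = candidate, m
    return fragile

def verify_integrity(fragile_samples, recorded_labels, predict_proba):
    """Flag the model as modified if any fragile sample's predicted label has flipped."""
    current = [int(np.argmax(predict_proba(x))) for x in fragile_samples]
    return current == list(recorded_labels)
```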

Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization

no code implementations • 8 May 2023 • Zhaoxia Yin, Shaowei Zhu, Hang Su, Jianteng Peng, Wanli Lyu, Bin Luo

However, numerous studies have shown that previous methods build detection or defense against only certain known attacks, which renders them ineffective against the latest unknown attack methods.

Feature Compression

Adversarial Example Defense via Perturbation Grading Strategy

no code implementations • 16 Dec 2022 • Shaowei Zhu, Wanli Lyu, Bin Li, Zhaoxia Yin, Bin Luo

In addition, the proposed method does not modify the task model and can be used as a preprocessing module, which significantly reduces the deployment cost in practical applications.

Neural network fragile watermarking with no model performance degradation

no code implementations • 16 Aug 2022 • Zhaoxia Yin, Heng Yin, Xinpeng Zhang

In the process of watermarking, we train a generative model with a specific loss function and a secret key to generate triggers that are sensitive to fine-tuning of the target classifier.

Data Poisoning
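
As a rough illustration of the trigger-generation idea in the entry above, the sketch below trains a small generator, seeded by a secret key, to produce inputs on which the target classifier is maximally uncertain, so that even slight fine-tuning tends to flip their predicted labels. The architecture, loss, and names (`TriggerGenerator`, `classifier`) are assumptions for illustration, not the paper's actual design.

```python
import math
import torch
import torch.nn as nn

class TriggerGenerator(nn.Module):
    """Tiny generator mapping secret-key noise to trigger images."""
    def __init__(self, key_dim=64, out_shape=(3, 32, 32)):
        super().__init__()
        self.out_shape = out_shape
        self.net = nn.Sequential(
            nn.Linear(key_dim, 512), nn.ReLU(),
            nn.Linear(512, math.prod(out_shape)), nn.Sigmoid(),
        )

    def forward(self, key_noise):
        return self.net(key_noise).view(-1, *self.out_shape)

def train_trigger_generator(classifier, key_dim=64, steps=500, seed=1234):
    torch.manual_seed(seed)                  # the secret key fixes this noise
    secret_key = torch.randn(32, key_dim)
    gen = TriggerGenerator(key_dim)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    classifier.eval()
    for _ in range(steps):
        triggers = gen(secret_key)
        probs = torch.softmax(classifier(triggers), dim=1)
        # Drive triggers toward low-confidence (near-uniform) predictions, so that
        # even slight fine-tuning of the classifier tends to flip their labels.
        loss = (probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        reference_labels = classifier(gen(secret_key)).argmax(dim=1)
    return gen, secret_key, reference_labels
```

Verification then amounts to regenerating the triggers from the secret key and checking that the classifier still assigns the recorded reference labels.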

Universal adversarial perturbation for remote sensing images

no code implementations • 22 Feb 2022 • Qingyu Wang, Guorui Feng, Zhaoxia Yin, Bin Luo

First, the former is used to generate the UAP, which learns the distribution of perturbations better; then the latter is used to find the sensitive regions attended to by the RSI classification model.

Classification • Object Recognition
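
The excerpt above does not spell out the two components it refers to, so the sketch below shows only a generic gradient-based way of accumulating one universal adversarial perturbation over a set of images; it is not the paper's generative, region-aware method, and `model`, `images`, and `labels` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, images, labels,
                           eps=10.0 / 255, step=1.0 / 255, epochs=5):
    """Generic iterative UAP sketch: one shared perturbation is accumulated so it
    raises the classification loss on as many images as possible."""
    model.eval()
    uap = torch.zeros_like(images[0])
    for _ in range(epochs):
        for x, y in zip(images, labels):
            x_adv = (x + uap).clamp(0, 1).unsqueeze(0).requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y.unsqueeze(0))
            grad, = torch.autograd.grad(loss, x_adv)
            # Step the shared perturbation in the loss-increasing direction, then
            # project it back into the epsilon-ball to keep it imperceptible.
            uap = (uap + step * grad.sign().squeeze(0)).clamp(-eps, eps)
    return uap
```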

Reversible Attack based on Local Visual Adversarial Perturbation

no code implementations • 6 Oct 2021 • Li Chen, Shaowei Zhu, Zhaoxia Yin

Adding perturbations to images can mislead classification models to produce incorrect results.

Adversarial Attack • Autonomous Driving +2

PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack

no code implementations • 19 Jan 2021 • Jie Wang, Zhaoxia Yin, Jin Tang, Jing Jiang, Bin Luo

The studies on black-box adversarial attacks have become increasingly prevalent due to the intractable acquisition of the structural knowledge of deep neural networks (DNNs).

Adversarial Attack

Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization

no code implementations • ICML Workshop AML 2021 • Jie Wang, Zhaoxia Yin, Jing Jiang, Yang Du

In this paper, we propose an attention-guided black-box adversarial attack based on large-scale multiobjective evolutionary optimization, termed LMOA.

Adversarial Attack
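
As a toy illustration of black-box attacks driven by evolutionary search (not the LMOA algorithm itself), the sketch below evolves a population of perturbations against a hypothetical `predict_proba` oracle and folds the two objectives, fooling the model and keeping the perturbation small, into a single score; the actual method instead uses large-scale multiobjective optimization with attention guidance.

```python
import numpy as np

def evolutionary_black_box_attack(predict_proba, image, true_label,
                                  pop_size=20, generations=100,
                                  eps=8.0 / 255, seed=0):
    """Toy evolutionary black-box attack: each individual is a perturbation."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-eps, eps, size=(pop_size,) + image.shape)

    def fitness(delta):
        adv = np.clip(image + delta, 0.0, 1.0)
        p_true = predict_proba(adv)[true_label]
        # Two objectives folded into one score: attack success first, norm second.
        return p_true + 0.01 * np.linalg.norm(delta)

    for _ in range(generations):
        scores = np.array([fitness(d) for d in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # keep the best half
        children = parents + rng.normal(0.0, eps / 10, size=parents.shape)
        pop = np.clip(np.concatenate([parents, children]), -eps, eps)

    best = pop[np.argmin([fitness(d) for d in pop])]
    return np.clip(image + best, 0.0, 1.0)
```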

Reversible Adversarial Attack based on Reversible Image Transformation

no code implementations • 6 Nov 2019 • Zhaoxia Yin, Hua Wang, Li Chen, Jie Wang, Weiming Zhang

In order to prevent illegal or unauthorized access to image data such as human faces, and to ensure that legitimate users can use authorization-protected data, reversible adversarial attack techniques have emerged.

Adversarial Attack • Image Restoration

An Efficient Pre-processing Method to Eliminate Adversarial Effects

no code implementations • 15 May 2019 • Hua Wang, Jie Wang, Zhaoxia Yin

Deep Neural Networks (DNNs) are vulnerable to adversarial examples generated by imposing subtle perturbations to inputs that lead a model to predict incorrect outputs.

General Classification • Image Classification +1
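
Pre-processing defenses of this kind leave the classifier untouched and only transform its inputs. The snippet below is a generic example of such a transform, bit-depth reduction, offered purely as an illustration of the category; it is not the method proposed in this paper.

```python
import numpy as np

def bit_depth_squeeze(image, bits=4):
    """Generic input pre-processing defense (illustrative only).

    Quantizing the colour depth removes small-amplitude adversarial perturbations
    while keeping the image recognizable; the model itself is left unmodified.
    """
    levels = 2 ** bits - 1
    return np.round(np.clip(image, 0.0, 1.0) * levels) / levels

# Usage: preprocess every input before handing it to the unmodified classifier,
# e.g. logits = model(bit_depth_squeeze(x))
```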
