Search Results for author: Peixin Zhang

Found 6 papers, 1 paper with code

Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System

no code implementations • 12 Sep 2023 • Peixin Zhang, Jun Sun, Mingtian Tan, Xinyu Wang

In recent years, the security issues of artificial intelligence have become increasingly prominent due to the rapid development of deep learning research and applications.

Backdoor Attack • Machine Unlearning

Fairness Testing of Deep Image Classification with Adequacy Metrics

no code implementations • 17 Nov 2021 • Peixin Zhang, Jingyi Wang, Jun Sun, Xinyu Wang

DeepFAIT consists of several important components enabling effective fairness testing of deep image classification applications: 1) a neuron selection strategy to identify the fairness-related neurons; 2) a set of multi-granularity adequacy metrics to evaluate the model's fairness; 3) a test selection algorithm for fixing the fairness issues efficiently.

Classification • Face Recognition • +2
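To make the DeepFAIT components above concrete, here is a minimal, hypothetical sketch of the first one, a fairness-related neuron-selection strategy: it ranks neurons by how much their activations shift when only a sensitive attribute of the input changes. The array shapes and the pairing of original and attribute-perturbed inputs are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def select_fairness_related_neurons(acts_orig, acts_variant, top_k=10):
    """Rank neurons by mean absolute activation shift between paired inputs.

    acts_orig, acts_variant: arrays of shape (n_samples, n_neurons) holding
    one layer's activations for original vs. attribute-perturbed images.
    Hypothetical sketch; the paper's actual selection strategy may differ.
    """
    # Mean absolute activation difference per neuron across all pairs.
    diff = np.abs(acts_orig - acts_variant).mean(axis=0)
    # Neurons whose activations shift the most are treated as fairness-related.
    return np.argsort(diff)[::-1][:top_k]
```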

Automatic Fairness Testing of Neural Classifiers through Adversarial Sampling

no code implementations • 17 Jul 2021 • Peixin Zhang, Jingyi Wang, Jun Sun, Xinyu Wang, Guoliang Dong, Xingen Wang, Ting Dai, Jin Song Dong

In this work, we bridge the gap by proposing a scalable and effective approach for systematically searching for discriminatory samples while extending existing fairness testing approaches to address a more challenging domain, i.e., text classification.

Fairness • text-classification • +1
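As an illustration of the discriminatory-sample search described above, the sketch below checks individual discrimination for a tabular input: a sample is discriminatory if the model's prediction changes when only the protected attribute is altered. The `model.predict` interface and the flat feature encoding are assumptions, not the paper's API.

```python
import numpy as np

def is_discriminatory(model, x, protected_idx, protected_values):
    """Return True if changing only the protected attribute of x
    changes the model's predicted class. Illustrative sketch only."""
    base = model.predict(x[None, :]).argmax()
    for v in protected_values:
        x_alt = x.copy()
        x_alt[protected_idx] = v          # flip only the protected attribute
        if model.predict(x_alt[None, :]).argmax() != base:
            return True                   # prediction differs -> discrimination
    return False
```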

There is Limited Correlation between Coverage and Robustness for Deep Neural Networks

no code implementations • 14 Nov 2019 • Yizhen Dong, Peixin Zhang, Jingyi Wang, Shuang Liu, Jun Sun, Jianye Hao, Xinyu Wang, Li Wang, Jin Song Dong, Dai Ting

In this work, we conduct an empirical study to evaluate the relationship between coverage, robustness and attack/defense metrics for DNNs.

Face Recognition • Malware Detection
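The study's core question, whether coverage tracks robustness, can be illustrated with a rank-correlation test across models. The numbers below are made-up placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import kendalltau

# Placeholder per-model scores: neuron-coverage values and empirical
# robustness (e.g. accuracy under attack). Real values would come from
# measuring each trained model, as the study does.
coverage   = np.array([0.62, 0.71, 0.55, 0.80, 0.67])
robustness = np.array([0.31, 0.28, 0.35, 0.30, 0.33])

tau, p_value = kendalltau(coverage, robustness)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.2f})")
# A tau near zero with a large p-value would indicate limited correlation.
```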

Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing

5 code implementations • 14 Dec 2018 • Jingyi Wang, Guoliang Dong, Jun Sun, Xinyu Wang, Peixin Zhang

We thus first propose a measure of 'sensitivity' and show empirically that normal samples and adversarial samples have distinguishable sensitivity.

Two-sample testing
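A minimal sketch of the sensitivity measure named above, assuming a set of slightly mutated copies of the model, each with a hypothetical `predict` method: sensitivity is taken here as the fraction of mutated models whose label on the input deviates from the original prediction, which the paper shows is higher for adversarial samples than for normal ones.

```python
def label_change_rate(mutated_models, x, original_label):
    """Fraction of mutated models whose prediction on x differs from the
    original label. Sketch under assumptions; the paper additionally uses
    statistical (sequential) hypothesis testing on this rate."""
    changes = sum(
        m.predict(x[None, :]).argmax() != original_label
        for m in mutated_models
    )
    return changes / len(mutated_models)

# Thresholding this rate (calibrated on normal samples) flags inputs
# whose labels flip unusually often under model mutation as adversarial.
```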

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing

no code implementations • 14 May 2018 • Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang

Recently, it has been shown that deep neural networks (DNNs) are subject to attacks through adversarial samples.
