no code implementations • 12 Sep 2023 • Peixin Zhang, Jun Sun, Mingtian Tan, Xinyu Wang
In recent years, the security issues of artificial intelligence have become increasingly prominent due to the rapid development of deep learning research and applications.
no code implementations • 17 Nov 2021 • Peixin Zhang, Jingyi Wang, Jun Sun, Xinyu Wang
DeepFAIT consists of several important components enabling effective fairness testing of deep image classification applications: 1) a neuron selection strategy to identify the fairness-related neurons; 2) a set of multi-granularity adequacy metrics to evaluate the model's fairness; 3) a test selection algorithm for fixing the fairness issues efficiently.
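The abstract does not spell out how fairness-related neurons are identified. One plausible reading, sketched here purely as an illustration (this is a guess at the idea, not DeepFAIT's actual strategy), is to rank neurons by how differently they activate on average across two protected groups and keep the top-k:

```python
import numpy as np

# Hedged sketch: select "fairness-related" neurons as those whose mean
# activation differs most between two protected groups. Illustrative
# only; the activation matrices and selection rule are assumptions.

def select_fairness_neurons(acts_a, acts_b, k=3):
    # acts_a, acts_b: (samples, neurons) activation matrices per group.
    gap = np.abs(acts_a.mean(axis=0) - acts_b.mean(axis=0))
    return np.argsort(gap)[::-1][:k]  # indices of the k largest gaps

rng = np.random.default_rng(0)
acts_a = rng.normal(0.0, 1.0, size=(100, 10))
acts_b = rng.normal(0.0, 1.0, size=(100, 10))
acts_b[:, 4] += 2.0  # make neuron 4 behave very differently for group B
print(select_fairness_neurons(acts_a, acts_b, k=1))  # neuron 4 ranks first
```

The same ranking could then feed the adequacy metrics and test selection steps the abstract mentions.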
no code implementations • 17 Jul 2021 • Peixin Zhang, Jingyi Wang, Jun Sun, Xinyu Wang, Guoliang Dong, Xingen Wang, Ting Dai, Jin Song Dong
In this work, we bridge the gap by proposing a scalable and effective approach for systematically searching for discriminatory samples while extending existing fairness testing approaches to address a more challenging domain, i.e., text classification.
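A discriminatory sample in this setting is typically one whose prediction changes when only a protected attribute is altered. A minimal sketch of that check, using a toy stand-in classifier (not the paper's model) and a hypothetical pronoun-swap mutation:

```python
# Hedged sketch of individual-discrimination checking for text
# classification: a sample counts as discriminatory if swapping a
# protected token (e.g. a gendered pronoun) flips the prediction.

PROTECTED_SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his"}

def toy_classifier(text):
    # Deliberately biased toy model: predicts 1 iff "she" appears.
    return 1 if "she" in text.split() else 0

def mutate(text):
    # Replace each protected token to build the counterpart sample.
    return " ".join(PROTECTED_SWAPS.get(tok, tok) for tok in text.split())

def is_discriminatory(text, classify):
    # Predictions on the pair should agree for a fair model.
    return classify(text) != classify(mutate(text))

print(is_discriminatory("he applied for the loan", toy_classifier))  # True
```

A systematic search would generate or perturb many such inputs and collect the ones this check flags.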
no code implementations • 14 Nov 2019 • Yizhen Dong, Peixin Zhang, Jingyi Wang, Shuang Liu, Jun Sun, Jianye Hao, Xinyu Wang, Li Wang, Jin Song Dong, Dai Ting
In this work, we conduct an empirical study to evaluate the relationship between coverage, robustness, and attack/defense metrics for DNNs.
5 code implementations • 14 Dec 2018 • Jingyi Wang, Guoliang Dong, Jun Sun, Xinyu Wang, Peixin Zhang
We thus first propose a measure of "sensitivity" and show empirically that normal samples and adversarial samples have distinguishable sensitivity.
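One simple way to realize such a measure, shown here as a hedged sketch (the 1-D threshold model and flip-rate definition are illustrative assumptions, not the paper's method), is to perturb an input slightly many times and count how often the predicted label changes; inputs near the decision boundary, as adversarial samples tend to be, flip far more often:

```python
import random

# Hedged sketch of a sensitivity measure: the fraction of small random
# perturbations that change the predicted label. Toy 1-D model only.

def toy_model(x):
    return 1 if x > 0.0 else 0  # decision boundary at x = 0

def sensitivity(x, model, eps=0.1, trials=500, seed=0):
    rng = random.Random(seed)
    base = model(x)
    flips = sum(model(x + rng.uniform(-eps, eps)) != base
                for _ in range(trials))
    return flips / trials

normal = sensitivity(1.0, toy_model)        # far from the boundary
adversarial = sensitivity(0.01, toy_model)  # just past the boundary
print(normal, adversarial)  # adversarial flips much more often
```

Thresholding such a score is one way a detector could separate the two populations at runtime.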
no code implementations • 14 May 2018 • Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang
Recently, it has been shown that deep neural networks (DNNs) are vulnerable to attacks through adversarial samples.