Search Results for author: Haibin Zheng

Found 25 papers, 10 papers with code

AIR: Threats of Adversarial Attacks on Deep Learning-Based Information Recovery

no code implementations17 Aug 2023 Jinyin Chen, Jie Ge, Shilian Zheng, Linhui Ye, Haibin Zheng, Weiguo Shen, Keqiang Yue, Xiaoniu Yang

The DeepReceiver is also found to be vulnerable to adversarial perturbations even with very low power and a limited peak-to-average power ratio (PAPR).

Adversarial Attack
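PAPR constrains how "spiky" a perturbation waveform may be. As a quick illustration (not from the paper), the standard definition can be computed as follows:

```python
import numpy as np

def papr_db(x: np.ndarray) -> float:
    """Peak-to-average power ratio of a (complex) baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Example: a low-power random perturbation, as in the paper's threat setting.
rng = np.random.default_rng(0)
perturbation = 0.01 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
print(f"PAPR = {papr_db(perturbation):.2f} dB")
```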

CertPri: Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space

no code implementations18 Jul 2023 Haibin Zheng, Jinyin Chen, Haibo Jin

Therefore, it is crucial to identify the misbehavior of DNN-based software and improve DNNs' quality.

AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking

no code implementations25 Mar 2023 Ruoxi Chen, Haibo Jin, Jinyin Chen, Haibin Zheng

To address these issues, we introduce the concept of the local gradient, and reveal that adversarial examples exhibit a considerably larger local gradient bound than benign ones.
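The local-gradient idea can be sketched as a finite-difference probe around an input. The function below is a simplified, hypothetical reading of AdvCheck (the paper computes local gradients at a chosen hidden layer; the threshold here is illustrative):

```python
import torch

def local_gradient_bound(model, x, eps=1e-3, n_samples=16):
    """Estimate local gradient magnitude around input x by finite differences:
    mean of ||f(x + eps*u) - f(x)|| / eps over random unit directions u."""
    model.eval()
    with torch.no_grad():
        base = model(x)
        diffs = []
        for _ in range(n_samples):
            u = torch.randn_like(x)
            u = u / u.norm()
            diffs.append((model(x + eps * u) - base).norm() / eps)
    return torch.stack(diffs).mean()

# Hypothetical detector: flag inputs whose local gradient exceeds a
# threshold calibrated on benign data.
def is_adversarial(model, x, threshold):
    return local_gradient_bound(model, x) > threshold
```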

Edge Deep Learning Model Protection via Neuron Authorization

1 code implementation22 Mar 2023 Jinyin Chen, Haibin Zheng, Tao Liu, Rongchang Li, Yao Cheng, Xuhong Zhang, Shouling Ji

With the development of deep learning processors and accelerators, deep learning models have been widely deployed on edge devices as part of the Internet of Things.

FedRight: An Effective Model Copyright Protection for Federated Learning

no code implementations18 Mar 2023 Jinyin Chen, Mingjun Li, Haibin Zheng

For the first time, we formalize the problem of copyright protection for FL, and propose FedRight to protect model copyright based on model fingerprints, i.e., extracting model features by generating adversarial examples as model fingerprints.

Federated Learning
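A minimal sketch of the fingerprint idea, assuming FGSM as the adversarial-example generator and prediction agreement as the verification test (the actual FedRight pipeline extracts richer model features and trains a detector):

```python
import torch
import torch.nn.functional as F

def fgsm_fingerprints(model, x, y, eps=0.03):
    """Generate adversarial examples (FGSM) to serve as model fingerprints."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def verify_ownership(owner_model, suspect_model, fingerprints, agreement=0.9):
    """Claim ownership if the suspect model agrees with the owner model
    on a large fraction of fingerprint inputs."""
    with torch.no_grad():
        a = owner_model(fingerprints).argmax(dim=1)
        b = suspect_model(fingerprints).argmax(dim=1)
    return (a == b).float().mean().item() >= agreement
```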

Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs

1 code implementation25 Oct 2022 Haibin Zheng, Haiyang Xiong, Jinyin Chen, Haonan Ma, Guohan Huang

Most existing studies launch the backdoor attack using a trigger that is either a randomly generated subgraph (e.g., an Erdős–Rényi backdoor) for a lower computational burden, or a gradient-based generative subgraph (e.g., the graph trojaning attack) for a more effective attack.

Backdoor Attack
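For illustration, a randomly generated trigger of the first kind can be built with networkx; the injection strategy shown is hypothetical:

```python
import networkx as nx

def erdos_renyi_trigger(n_nodes: int = 4, p: float = 0.8, seed: int = 0) -> nx.Graph:
    """Randomly generated trigger subgraph, Erdős–Rényi style."""
    return nx.erdos_renyi_graph(n_nodes, p, seed=seed)

def implant_trigger(graph: nx.Graph, trigger: nx.Graph, anchor_nodes):
    """Attach the trigger to a victim graph by relabeling trigger nodes onto
    chosen anchor nodes (a hypothetical injection strategy for illustration)."""
    mapping = dict(zip(trigger.nodes, anchor_nodes))
    return nx.compose(graph, nx.relabel_nodes(trigger, mapping))

g = nx.karate_club_graph()
poisoned = implant_trigger(g, erdos_renyi_trigger(), anchor_nodes=[0, 1, 2, 33])
```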

Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection

1 code implementation14 Aug 2022 Haibin Zheng, Haiyang Xiong, Haonan Ma, Guohan Huang, Jinyin Chen

Consequently, a link prediction model trained on the backdoored dataset will predict any link carrying the trigger as the target state.

Backdoor Attack · Link Prediction

Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection

no code implementations17 Jun 2022 Jinyin Chen, Chengyu Jia, Haibin Zheng, Ruoxi Chen, Chenbo Fu

The proliferation of fake news and its serious negative social influence have made fake news detection a necessary tool for web managers.

Backdoor Attack · Fake News Detection

Rethinking the Defense Against Free-rider Attack From the Perspective of Model Weight Evolving Frequency

1 code implementation11 Jun 2022 Jinyin Chen, Mingjun Li, Tao Liu, Haibin Zheng, Yao Cheng, Changting Lin

To address these challenges, we reconsider the defense from a novel perspective, i.e., model weight evolving frequency. Empirically, we gain a novel insight that during FL training, the model weight evolving frequency of free-riders differs significantly from that of benign clients.

Federated Learning · Privacy Preserving
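One simplified reading of "weight evolving frequency" is the sign-flip rate of a client's parameter updates across rounds. The toy sketch below illustrates why fabricated (free-rider) updates could differ from benign ones under that measure; the data and thresholds are illustrative, not the paper's:

```python
import numpy as np

def weight_evolving_frequency(updates: np.ndarray) -> float:
    """Fraction of (round, parameter) pairs where the update direction flips
    sign between consecutive rounds; updates has shape (n_rounds, n_params)."""
    signs = np.sign(updates)
    return (signs[1:] != signs[:-1]).mean()

rng = np.random.default_rng(0)
benign = rng.normal(0.5, 0.2, (20, 100))      # consistent update direction
free_rider = rng.normal(0.0, 1.0, (20, 100))  # fabricated, direction-less noise
print(weight_evolving_frequency(benign))      # low flip frequency
print(weight_evolving_frequency(free_rider))  # close to 0.5
```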

GAIL-PT: A Generic Intelligent Penetration Testing Framework with Generative Adversarial Imitation Learning

1 code implementation5 Apr 2022 Jinyin Chen, Shulong Hu, Haibin Zheng, Changyou Xing, Guomin Zhang

To address these challenges, we introduce expert knowledge, for the first time, to guide the agent toward better decisions in RL-based PT, and propose GAIL-PT, a generic intelligent penetration testing framework based on Generative Adversarial Imitation Learning, which tackles both the high labor cost of involving security experts and the high-dimensional discrete action space.

Imitation Learning · Q-Learning

Excitement Surfeited Turns to Errors: Deep Learning Testing Framework Based on Excitable Neurons

1 code implementation12 Feb 2022 Haibo Jin, Ruoxi Chen, Haibin Zheng, Jinyin Chen, Yao Cheng, Yue Yu, Xianglong Liu

By maximizing the number of excitable neurons associated with various wrong model behaviors, DeepSensor generates testing examples that effectively trigger more errors caused by adversarial inputs, polluted data, and incomplete training.

Image Classification · Speaker Recognition

NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification

1 code implementation25 Dec 2021 Haibin Zheng, Zhiqing Chen, Tianyu Du, Xuhong Zhang, Yao Cheng, Shouling Ji, Jingyi Wang, Yue Yu, Jinyin Chen

To overcome the challenges, we propose NeuronFair, a new DNN fairness testing framework that differs from previous work in several key aspects: (1) interpretable - it quantitatively interprets DNNs' fairness violations for the biased decision; (2) effective - it uses the interpretation results to guide the generation of more diverse instances in less time; (3) generic - it can handle both structured and unstructured data.

Fairness

NIP: Neuron-level Inverse Perturbation Against Adversarial Attacks

no code implementations24 Dec 2021 Ruoxi Chen, Haibo Jin, Jinyin Chen, Haibin Zheng, Yue Yu, Shouling Ji

From the perspective of the image feature space, some of them cannot achieve satisfactory results due to the shift of features.

Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction

no code implementations8 Oct 2021 Jinyin Chen, Haiyang Xiong, Haibin Zheng, Jian Zhang, Guodong Jiang, Yi Liu

Backdoor attacks induce DLP methods to make wrong predictions via malicious training data, i.e., generating a subgraph sequence as the trigger and embedding it into the training data.

Backdoor Attack · Dynamic Link Prediction +1

Salient Feature Extractor for Adversarial Defense on Deep Neural Networks

1 code implementation14 May 2021 Jinyin Chen, Ruoxi Chen, Haibin Zheng, Zhaoyan Ming, Wenrong Jiang, Chen Cui

Motivated by the observation that adversarial examples stem from the non-robust features models learn from the original dataset, we propose the concepts of salient feature (SF) and trivial feature (TF).

Adversarial Defense · Generative Adversarial Network

DeepPoison: Feature Transfer Based Stealthy Poisoning Attack

no code implementations6 Jan 2021 Jinyin Chen, Longyuan Zhang, Haibin Zheng, Xueke Wang, Zhaoyan Ming

As existing attacks mainly focus on attack success rate with patch-based samples, defense algorithms can easily detect these poisoning samples.

ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries

no code implementations18 Dec 2020 Jinyin Chen, Zhen Wang, Haibin Zheng, Jun Xiao, Zhaoyan Ming

This work proposes a generic evaluation metric, ROBY, a novel attack-independent robustness measure based on the model's decision boundaries.
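As a toy proxy for such boundary-based measures (not ROBY's exact formulation), one can compare inter-class separation to intra-class spread in the model's feature space:

```python
import numpy as np

def boundary_score(features: np.ndarray, labels: np.ndarray) -> float:
    """Toy attack-independent robustness proxy: ratio of mean inter-class
    center distance to mean intra-class spread, computed on deep features.
    Larger values suggest classes sit further from shared decision boundaries."""
    classes = np.unique(labels)
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])
    intra = np.mean([
        np.linalg.norm(features[labels == c] - centers[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    inter = np.mean([
        np.linalg.norm(centers[i] - centers[j])
        for i in range(len(classes)) for j in range(i + 1, len(classes))
    ])
    return inter / intra
```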

MGA: Momentum Gradient Attack on Network

no code implementations26 Feb 2020 Jinyin Chen, Yixian Chen, Haibin Zheng, Shijing Shen, Shanqing Yu, Dan Zhang, Qi Xuan

Gradient-based adversarial attack methods can effectively find perturbations, i.e., combinations of rewired links, that reduce the effectiveness of graph embedding algorithms based on deep learning models, but they easily fall into local optima.

Social and Information Networks
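The momentum idea can be sketched as accumulating normalized gradients of the attack loss over the adjacency matrix and greedily flipping the highest-scoring link; `grad_fn` here is a placeholder for the surrogate model's gradient, and the loop is a simplified sketch rather than the paper's exact procedure:

```python
import numpy as np

def momentum_gradient_attack(grad_fn, adj, n_flips=10, mu=0.9):
    """Greedy link rewiring guided by momentum-accumulated gradients.
    grad_fn(adj) returns dLoss/dA; the entry with the largest momentum
    magnitude indicates the next link to flip. (For brevity, repeated
    selection of the same link is not guarded against.)"""
    g = np.zeros_like(adj)
    for _ in range(n_flips):
        grad = grad_fn(adj)
        g = mu * g + grad / np.abs(grad).sum()  # momentum accumulation
        scores = np.abs(np.triu(g, k=1))        # undirected, no self-loops
        i, j = np.unravel_index(scores.argmax(), scores.shape)
        adj[i, j] = adj[j, i] = 1 - adj[i, j]   # flip the chosen link
    return adj
```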

POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm

no code implementations1 May 2019 Jinyin Chen, Mengmeng Su, Shijing Shen, Hui Xiong, Haibin Zheng

In this paper, comprehensive evaluation metrics are proposed for different adversarial attack methods.

Adversarial Attack

N2VSCDNNR: A Local Recommender System Based on Node2vec and Rich Information Network

no code implementations12 Apr 2019 Jinyin Chen, Yangyang Wu, Lu Fan, Xiang Lin, Haibin Zheng, Shanqing Yu, Qi Xuan

In particular, we model the user-item interactions as a bipartite network, and represent the interactions among users (or items) by the corresponding one-mode projection network.

Clustering · Recommendation Systems
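The one-mode projection step maps directly onto networkx's bipartite utilities; the toy interactions below are illustrative:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical user-item interactions forming a bipartite network.
B = nx.Graph()
users, items = ["u1", "u2", "u3"], ["i1", "i2"]
B.add_nodes_from(users, bipartite=0)
B.add_nodes_from(items, bipartite=1)
B.add_edges_from([("u1", "i1"), ("u2", "i1"), ("u2", "i2"), ("u3", "i2")])

# One-mode projections: users connected if they share an item, and vice versa.
user_net = bipartite.weighted_projected_graph(B, users)  # weights = shared items
item_net = bipartite.weighted_projected_graph(B, items)
print(user_net.edges(data=True))  # e.g., ('u1', 'u2', {'weight': 1})
```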

GC-LSTM: Graph Convolution Embedded LSTM for Dynamic Link Prediction

2 code implementations 2020 Jinyin Chen, Xuanheng Xu, Yangyang Wu, Haibin Zheng

To the best of our knowledge, it is the first time that GCN embedded LSTM is put forward for link prediction of dynamic networks.

Social and Information Networks · Physics and Society
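A minimal sketch of the GCN-embedded-LSTM idea, with illustrative layer sizes and an inner-product decoder rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class GCLSTMSketch(nn.Module):
    """Sketch of a GCN-embedded LSTM for dynamic link prediction: a GCN layer
    encodes each snapshot, then an LSTM models the temporal evolution."""
    def __init__(self, n_feats, hidden):
        super().__init__()
        self.gcn = nn.Linear(n_feats, hidden)       # weight of one GCN layer
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, adjs, feats):
        # adjs: (T, N, N) normalized adjacency per snapshot; feats: (T, N, F)
        embeds = torch.stack([
            torch.relu(a @ self.gcn(x)) for a, x in zip(adjs, feats)
        ])                                          # (T, N, hidden)
        out, _ = self.lstm(embeds.transpose(0, 1))  # one sequence per node
        h = out[:, -1]                              # (N, hidden) final states
        return torch.sigmoid(h @ h.t())             # next-step adjacency scores
```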

FineFool: Fine Object Contour Attack via Attention

no code implementations1 Dec 2018 Jinyin Chen, Haibin Zheng, Hui Xiong, Mengmeng Su

Inspired by the correlations between adversarial perturbations and object contours, slighter perturbations are produced by focusing on object contour features; these are more imperceptible and harder to defend against, especially for network add-on defense methods that trade off perturbation filtering against contour feature loss.

Adversarial Attack · Object

Link Prediction Adversarial Attack

no code implementations2 Oct 2018 Jinyin Chen, Ziqiang Shi, Yangyang Wu, Xuanheng Xu, Haibin Zheng

Deep neural networks have shown remarkable performance in solving computer vision and graph-related tasks, such as node classification and link prediction.

Physics and Society · Social and Information Networks

Fast Gradient Attack on Network Embedding

no code implementations8 Sep 2018 Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, Qi Xuan

Network embedding maps a network into a low-dimensional Euclidean space and thus facilitates many network analysis tasks, such as node classification, link prediction, and community detection, by utilizing machine learning methods.

Physics and Society · Social and Information Networks
