Search Results for author: Xiaolin Chang

Found 6 papers, 0 papers with code

CRS-FL: Conditional Random Sampling for Communication-Efficient and Privacy-Preserving Federated Learning

no code implementations • 1 Jun 2023 • Jianhua Wang, Xiaolin Chang, Jelena Mišić, Vojislav B. Mišić, Lin Li, Yingying Yao

Federated Learning (FL), a privacy-oriented distributed ML paradigm, is gaining great interest in the Internet of Things because of its capability to protect participants' data privacy.

Federated Learning · Privacy Preserving

DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks

no code implementations • 14 Oct 2021 • Yixiang Wang, Jiqiang Liu, Xiaolin Chang, Jianhua Wang, Ricardo J. Rodríguez

In this paper, we propose DI-AA, an interpretable white-box adversarial example (AE) attack that applies the interpretable deep Taylor decomposition approach to select the most contributing features, and adopts a Lagrangian relaxation of the logit output and the $L_p$ norm to further decrease the perturbation.
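The Lagrangian relaxation idea can be sketched on a toy linear two-class model: the attack objective trades off pushing the score toward misclassification against the $L_2$ size of the perturbation. The loss form, variable names, and hyperparameters below are illustrative assumptions, not the authors' exact DI-AA formulation.

```python
import numpy as np

def lagrangian_attack(w, b, x, label, lam=0.1, lr=0.05, steps=100):
    """Sketch of a Lagrangian-relaxed white-box attack on a linear model
    score(x) = w @ x + b (positive score -> class 1).

    Minimizes  sign * score(x + delta) + lam * ||delta||_2,
    i.e. attack strength relaxed against perturbation size.
    (Illustrative only; not the paper's DI-AA objective.)
    """
    delta = np.zeros_like(x)
    sign = 1.0 if label == 1 else -1.0  # push score away from true class
    for _ in range(steps):
        # gradient of sign * (w @ (x + delta) + b) + lam * ||delta||_2
        norm = np.linalg.norm(delta) + 1e-12
        grad = sign * w + lam * delta / norm
        delta -= lr * grad
    return delta
```

The multiplier `lam` plays the role of the Lagrangian penalty: larger values favor smaller perturbations at the cost of weaker attacks.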

IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks

no code implementations • 3 Feb 2021 • Yixiang Wang, Jiqiang Liu, Xiaolin Chang, Jelena Mišić, Vojislav B. Mišić

To make the perturbations even more imperceptible, we further propose to employ a combined restriction of $L_0$ and $L_1/L_2$, which restricts the total perturbation and the number of perturbed points simultaneously.

DNN Testing

Generalizing Adversarial Examples by AdaBelief Optimizer

no code implementations • 25 Jan 2021 • Yixiang Wang, Jiqiang Liu, Xiaolin Chang

Recent research has proved that deep neural networks (DNNs) are vulnerable to adversarial examples: legitimate inputs augmented with imperceptible, well-designed perturbations can easily fool DNNs in the testing stage.
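Crafting such perturbations with an AdaBelief-style optimizer can be sketched on a toy linear model: the perturbation is updated using AdaBelief's variance-of-belief term $(g - m)^2$ instead of Adam's raw second moment. The linear surrogate and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def adabelief_attack(w, b, x, steps=50, lr=0.1,
                     beta1=0.9, beta2=0.999, eps=1e-8):
    """Sketch of generating an adversarial perturbation with
    AdaBelief-style updates on a linear model score(x) = w @ x + b
    (true class taken as the sign of the clean score).
    (Illustrative only; not the paper's attack.)
    """
    delta = np.zeros_like(x)
    m = np.zeros_like(x)  # first moment (mean of gradients)
    s = np.zeros_like(x)  # AdaBelief second moment: variance of (g - m)
    sign = 1.0 if w @ x + b > 0 else -1.0
    for t in range(1, steps + 1):
        g = sign * w  # gradient of sign * score w.r.t. delta
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2
        # bias-corrected AdaBelief step
        m_hat = m / (1 - beta1 ** t)
        s_hat = s / (1 - beta2 ** t)
        delta -= lr * m_hat / (np.sqrt(s_hat) + eps)
    return delta
```

When the gradient matches the running mean closely, the belief variance shrinks and AdaBelief takes larger steps than Adam would, which is the property the optimizer-based attack exploits.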

Towards interpreting ML-based automated malware detection models: a survey

no code implementations • 15 Jan 2021 • Yuzhou Lin, Xiaolin Chang

Then we investigate interpretation methods for malware detection, addressing the importance of interpreting malware detectors, the challenges faced by this field, solutions for mitigating these challenges, and a new taxonomy for classifying the state-of-the-art malware detection interpretability work of recent years.

Malware Detection

Towards Interpretable Ensemble Learning for Image-based Malware Detection

no code implementations • 13 Jan 2021 • Yuzhou Lin, Xiaolin Chang

Moreover, experimental results indicate that IEMD interpretability improves alongside detection accuracy during the construction of IEMD.

Ensemble Learning · Malware Detection
