no code implementations • 1 Jun 2023 • Jianhua Wang, Xiaolin Chang, Jelena Mišić, Vojislav B. Mišić, Lin Li, Yingying Yao
Federated Learning (FL), a privacy-oriented distributed ML paradigm, is gaining great interest in the Internet of Things because of its capability to protect participants' data privacy.
no code implementations • 14 Oct 2021 • Yixiang Wang, Jiqiang Liu, Xiaolin Chang, Jianhua Wang, Ricardo J. Rodríguez
In this paper, we propose an interpretable white-box AE attack approach, DI-AA, which applies the interpretable deep Taylor decomposition approach to select the most contributing features and adopts Lagrangian relaxation optimization of the logit output and the L_p norm to further reduce the perturbation.
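A minimal sketch of what a Lagrangian-relaxed adversarial objective of this kind can look like: the margin of the true-class logit over the runner-up is penalized together with the L_p norm of the perturbation, weighted by a multiplier. The function name, the multiplier `lam`, and the hinge form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lagrangian_loss(logits, true_label, delta, lam=0.1, p=2):
    """Toy Lagrangian relaxation for an adversarial perturbation delta.

    Combines a misclassification term (margin of the true class over
    the best other class) with an L_p norm penalty on the perturbation.
    Names and weighting are illustrative, not the DI-AA implementation.
    """
    # Margin term: positive while the true class still dominates
    others = np.delete(np.asarray(logits, dtype=float), true_label)
    margin = logits[true_label] - np.max(others)
    # Lagrangian penalty: L_p norm of the perturbation
    penalty = lam * np.linalg.norm(np.ravel(delta), ord=p)
    return max(margin, 0.0) + penalty
```

Minimizing this loss over `delta` (e.g. by gradient descent) drives the model toward misclassification while keeping the perturbation norm small.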
no code implementations • 3 Feb 2021 • Yixiang Wang, Jiqiang Liu, Xiaolin Chang, Jelena Mišić, Vojislav B. Mišić
To make the perturbations even more imperceptible, we further propose combining $L_0$ with $L_1/L_2$ restrictions, which constrains the number of perturbed points and the total perturbation magnitude simultaneously.
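The combined restriction above can be sketched as a simple feasibility check: the $L_0$ budget caps how many coordinates may be perturbed, while the $L_2$ budget caps the total magnitude. The function name and the budget parameters are hypothetical, for illustration only.

```python
import numpy as np

def within_combined_budget(delta, eps0, eps2):
    """Check a perturbation against combined L0 and L2 restrictions.

    L0 counts the number of perturbed coordinates (perturbation points);
    L2 measures the total perturbation magnitude. Both must stay within
    their budgets. Illustrative sketch, not the paper's exact procedure.
    """
    delta = np.ravel(np.asarray(delta, dtype=float))
    l0 = np.count_nonzero(delta)   # how many points are perturbed
    l2 = np.linalg.norm(delta)     # total perturbation magnitude
    return bool(l0 <= eps0 and l2 <= eps2)
```

An $L_1$ budget could be substituted for the $L_2$ one by passing `ord=1` to `np.linalg.norm`.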
no code implementations • 25 Jan 2021 • Yixiang Wang, Jiqiang Liu, Xiaolin Chang
Recent research has proved that deep neural networks (DNNs) are vulnerable to adversarial examples: legitimate inputs modified with imperceptible, well-designed perturbations that can easily fool DNNs at the testing stage.
no code implementations • 15 Jan 2021 • Yuzhou Lin, Xiaolin Chang
Then we investigate interpretation methods for malware detection, addressing the importance of interpreting malware detectors, the challenges faced by this field, solutions for mitigating these challenges, and a new taxonomy for classifying the state-of-the-art malware detection interpretability work of recent years.
no code implementations • 13 Jan 2021 • Yuzhou Lin, Xiaolin Chang
Moreover, experimental results indicate that the interpretability of IEMD increases with detection accuracy during the construction of IEMD.