1 code implementation • 1 Feb 2024 • Zhibo Jin, Jiayu Zhang, Zhiyu Zhu, Huaming Chen
The robustness of deep learning models against adversarial attacks remains a pivotal concern.
1 code implementation • 11 Jan 2024 • Zhiyu Zhu, Huaming Chen, Xinyi Wang, Jiayu Zhang, Zhibo Jin, Kim-Kwang Raymond Choo, Jun Shen, Dong Yuan
Through functional and characteristic similarity analysis, we introduce a novel gradient editing (GE) mechanism and verify its feasibility for generating transferable adversarial samples across various models.
1 code implementation • 21 Dec 2023 • Zhiyu Zhu, Huaming Chen, Jiayu Zhang, Xinyi Wang, Zhibo Jin, Minhui Xue, Dongxiao Zhu, Kim-Kwang Raymond Choo
To better understand the outputs of deep neural networks (DNNs), attribution-based methods have become an important approach to model interpretability: they assign each input dimension a score indicating its importance to the model's outcome.
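To make the idea of per-dimension attribution concrete, here is a minimal sketch (an illustration only, not the method proposed in this paper) using the common gradient-times-input heuristic on a toy logistic-regression "network" f(x) = sigmoid(w·x + b):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_x_input_attribution(w, b, x):
    """Score each input dimension of f(x) = sigmoid(w.x + b) by
    (df/dx_i) * x_i, the gradient-times-input attribution."""
    y = sigmoid(w @ x + b)
    grad = y * (1.0 - y) * w   # df/dx for the logistic model
    return grad * x            # elementwise gradient x input scores

# Toy example: the third dimension has zero weight, so it should
# receive zero attribution regardless of its input value.
w = np.array([2.0, -1.0, 0.0])
b = 0.0
x = np.array([1.0, 1.0, 5.0])
scores = grad_x_input_attribution(w, b, x)
```

The sign of each score indicates whether that dimension pushed the output up or down; real attribution methods (e.g. integrated gradients) refine this basic idea.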
1 code implementation • 16 Oct 2023 • Zhibo Jin, Zhiyu Zhu, Xinyi Wang, Jiayu Zhang, Jun Shen, Huaming Chen
While deep neural networks achieve excellent results in many fields, they are susceptible to interference from adversarial samples, which can induce erroneous predictions.
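A minimal sketch of how such adversarial samples arise (a generic fast-gradient-sign-style illustration on a toy logistic model, not the attack studied in this paper): each input dimension is nudged by a small amount in the direction that increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(w, b, x, y_true, eps):
    """One fast-gradient-sign step against f(x) = sigmoid(w.x + b):
    move each dimension by eps in the direction that increases the
    binary cross-entropy loss, whose input gradient is (p - y) * w."""
    p = sigmoid(w @ x + b)
    grad_loss_x = (p - y_true) * w
    return x + eps * np.sign(grad_loss_x)

# Toy example: the clean input is confidently classified as class 1
# (w.x = 1.5 > 0); the perturbed input flips the decision.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])
x_adv = fgsm_perturb(w, b, x, y_true=1.0, eps=1.0)
```

The same perturbation often transfers between models with similar decision boundaries, which is what makes black-box attacks practical.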