no code implementations • ICCV 2023 • Han Fang, Jiyi Zhang, Yupeng Qiu, Ke Xu, Chengfang Fang, Ee-Chien Chang
In this paper, we take the role of investigators who want to trace the attack and identify its source, that is, the particular model from which the adversarial examples were generated.
no code implementations • 12 Dec 2022 • Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli
Reinforcement learning allows machines to learn from their own experience.
no code implementations • 30 Nov 2021 • Jiyi Zhang, Han Fang, Wesley Joon-Wie Tann, Ke Xu, Chengfang Fang, Ee-Chien Chang
We point out that by distributing different copies of the model to different buyers, we can mitigate the attack so that adversarial samples found on one copy do not work on another.
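A minimal sketch of this premise, assuming a toy linear model, dummy data, and FGSM as the attack (none of which come from the paper): craft an adversarial example on one copy and check whether it carries over to an independently initialized copy.

```python
# Sketch only: two independently seeded copies of the same architecture,
# plus a transferability check with FGSM. Model, data, and epsilon are
# illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

def make_copy(seed: int) -> nn.Module:
    torch.manual_seed(seed)  # different seed -> different model copy
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

copy_a, copy_b = make_copy(0), make_copy(1)

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy input
y = torch.tensor([3])                             # dummy label

# FGSM on copy A: perturb the input along the sign of the loss gradient.
loss = nn.functional.cross_entropy(copy_a(x), y)
loss.backward()
x_adv = (x + 0.1 * x.grad.sign()).clamp(0, 1).detach()

# The mitigation's premise: x_adv fools copy A but ideally not copy B.
print("copy A prediction:", copy_a(x_adv).argmax(1).item())
print("copy B prediction:", copy_b(x_adv).argmax(1).item())
```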
1 code implementation • 30 Oct 2021 • Lujia Shen, Shouling Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang
However, a pre-trained model that carries a backdoor can be a severe threat to downstream applications.
no code implementations • 13 Apr 2021 • Xinyi Zhang, Chengfang Fang, Jie Shi
We find that the effectiveness of existing techniques is significantly affected by the absence of pre-trained models.
no code implementations • 12 Apr 2021 • An Zhang, Xiang Wang, Chengfang Fang, Jie Shi, Tat-Seng Chua, Zehua Chen
Gradient-based attribution methods can aid in the understanding of convolutional neural networks (CNNs).
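For context, a vanilla-gradient saliency map is one common gradient-based attribution method; the sketch below assumes a toy CNN, a dummy image, and an arbitrary target class, and is not this paper's specific method.

```python
# Vanilla-gradient saliency sketch: attribute a class logit to input
# pixels via the gradient of the logit w.r.t. the input. The CNN and
# input shape here are illustrative assumptions.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # dummy image
target_class = 5

score = cnn(x)[0, target_class]  # logit of the class to explain
score.backward()                 # d(score) / d(input pixels)

# Attribution map: per-pixel gradient magnitude, max over channels.
saliency = x.grad.abs().max(dim=1).values  # shape (1, 32, 32)
print(saliency.shape)
```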
no code implementations • 28 Sep 2020 • Chang Liao, Yao Cheng, Chengfang Fang, Jie Shi
This paper aims to provide a thorough study of the effectiveness of the transformation-based ensemble defence for image classification and of the reasons behind it.
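As a rough illustration of what a transformation-based ensemble defence looks like, the sketch below averages a classifier's predictions over randomly transformed views of the input; the transforms, toy model, and view count are illustrative assumptions, not the paper's configuration.

```python
# Transformation-based ensemble sketch: average softmax outputs over
# several randomly transformed views of the input before deciding.
import torch
import torch.nn as nn
import torchvision.transforms as T

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
transform = T.Compose([
    T.RandomResizedCrop(32, scale=(0.8, 1.0)),
    T.RandomHorizontalFlip(),
])

def ensemble_predict(x: torch.Tensor, n_views: int = 8) -> torch.Tensor:
    # Each view sees a differently transformed input, so a perturbation
    # tuned to one exact view is less likely to survive all of them.
    probs = torch.stack([
        model(transform(x)).softmax(dim=1) for _ in range(n_views)
    ])
    return probs.mean(dim=0).argmax(dim=1)

x = torch.rand(1, 3, 32, 32)  # dummy image batch
print(ensemble_predict(x))
```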