no code implementations • 1 Aug 2023 • Ruoxi Qin, Linyuan Wang, Xuehui Du, Xingyuan Chen, Bin Yan
Deep neural networks have attained significant performance in image recognition.
no code implementations • 6 Oct 2022 • Qi Peng, Wenlin Liu, Ruoxi Qin, Libin Hou, Bin Yan, Linyuan Wang
Adversarial attacks are considered an intrinsic vulnerability of CNNs.
no code implementations • AAAI Workshop AdvML 2022 • Qi Peng, Ruoxi Qin, Wenlin Liu, Libin Hou, Bin Yan, Linyuan Wang
Recent advances in adversarial attacks uncover the intrinsic vulnerability of modern deep neural networks (DNNs).
no code implementations • AAAI Workshop AdvML 2022 • Ruoxi Qin, Linyuan Wang, Xuehui Du, Bin Yan, Xingyuan Chen
A new constraint norm is proposed for model training based on these criteria, isolating adversarial transferability without any prior knowledge of adversarial samples.
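The constraint norm itself is not spelled out in this snippet; as a purely illustrative sketch of the general recipe (task loss plus an auxiliary penalty term during training), here is a hedged PyTorch example. The input-gradient penalty and the lam weight are stand-in assumptions, not the paper's proposed norm.

import torch
import torch.nn as nn

def constrained_loss(model, x, y, lam=0.1):
    """Task loss plus an auxiliary constraint penalty, weighted by lam."""
    x = x.clone().requires_grad_(True)
    task_loss = nn.functional.cross_entropy(model(x), y)
    # Stand-in penalty: squared L2 norm of the input gradient (simple
    # gradient regularization), NOT the paper's specific constraint norm.
    grad, = torch.autograd.grad(task_loss, x, create_graph=True)
    penalty = grad.pow(2).sum()
    return task_loss + lam * penalty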
no code implementations • 3 Jun 2021 • Pengfei Xie, Linyuan Wang, Ruoxi Qin, Kai Qiao, Shuhao Shi, Guoen Hu, Bin Yan
In this paper, we propose a new gradient iteration framework that redefines the relationship among the above three.
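The snippet does not detail the proposed framework, so as a hedged point of reference, below is a minimal sketch of the standard iterative gradient attack (BIM/I-FGSM) that gradient-iteration methods typically build on; the step size alpha, budget eps, and step count are illustrative assumptions.

import torch
import torch.nn as nn

def bim_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Iterated gradient-sign steps, projected back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # L_inf projection
            x_adv = x_adv.clamp(0.0, 1.0)             # valid pixel range
    return x_adv.detach()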
no code implementations • 6 May 2021 • Ruoxi Qin, Linyuan Wang, Xingyuan Chen, Xuehui Du, Bin Yan
Defense strategies are particularly passive in these processes, and enhancing their initiative can be an effective way to escape this arms race.
no code implementations • 12 Apr 2019 • Lingyun Jiang, Kai Qiao, Ruoxi Qin, Linyuan Wang, Jian Chen, Haibing Bu, Bin Yan
In deep-learning image classification, adversarial examples are inputs crafted by adding small-magnitude perturbations that mislead deep neural networks (DNNs) into incorrect results, which means DNNs are vulnerable to them.
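As a concrete, hedged illustration of such perturbations, the classic single-step FGSM attack (Goodfellow et al., 2015) is sketched below in PyTorch; the toy linear model and eps value are placeholders, not this paper's setup.

import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    """Perturb x by eps * sign(gradient of the loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # one gradient-sign step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]

# Toy usage on a random linear classifier over flattened 8x8 inputs.
model = nn.Linear(64, 10)
x, y = torch.rand(1, 64), torch.tensor([3])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude bounded by eps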