1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Because deep neural networks (DNNs) are vulnerable to adversarial examples, a large number of defense techniques have been proposed in recent years to mitigate this problem.
no code implementations • 2 Sep 2021 • Chuanbiao Song, Yanbo Fan, Yichen Yang, Baoyuan Wu, Yiming Li, Zhifeng Li, Kun He
Adversarial training (AT) has been demonstrated to be one of the most promising defenses against various adversarial attacks.
no code implementations • 26 Jun 2021 • Xiaosen Wang, Chuanbiao Song, LiWei Wang, Kun He
In this work, we aim to avoid catastrophic overfitting by introducing multi-step adversarial examples during single-step adversarial training.
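The idea can be illustrated with a minimal PyTorch sketch: craft most of each batch with the cheap single-step FGSM attack and a small fraction with multi-step PGD, so that multi-step adversaries keep appearing during single-step training. The function names (`fgsm_example`, `pgd_example`, `mixed_batch`), the `pgd_ratio` parameter, and all hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    # Single-step attack: one signed-gradient step of size eps (inputs in [0, 1]).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd_example(model, x, y, eps, alpha, steps):
    # Multi-step attack: iterated gradient steps projected back into the eps-ball.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def mixed_batch(model, x, y, eps, pgd_ratio=0.2):
    # Craft most of the batch with FGSM and a small fraction with PGD, so the
    # model keeps seeing multi-step adversaries during single-step training.
    n_pgd = int(pgd_ratio * x.size(0))
    x_pgd = pgd_example(model, x[:n_pgd], y[:n_pgd], eps, eps / 4, 10)
    x_fgsm = fgsm_example(model, x[n_pgd:], y[n_pgd:], eps)
    return torch.cat([x_pgd, x_fgsm], dim=0)
```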
no code implementations • 1 Jan 2021 • Xiaosen Wang, Kun He, Chuanbiao Song, LiWei Wang, John E. Hopcroft
A recent work targets unrestricted adversarial examples using a generative model, but their method is based on a search in the neighborhood of the input noise, so their outputs are in fact still constrained by the input.
1 code implementation • ICLR 2020 • Chuanbiao Song, Kun He, Jiadong Lin, Li-Wei Wang, John E. Hopcroft
We further propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns robust local features by adversarial training on Random Block Shuffle (RBS)-transformed adversarial examples, and then transfers these robust local features into the training on normal adversarial examples.
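As an illustration of a block-shuffle transformation, the minimal PyTorch sketch below splits each image into a k x k grid of patches and randomly permutes them, destroying global structure while preserving local features. The k x k grid layout and the function name are simplifying assumptions for illustration, not necessarily the paper's exact RBS procedure.

```python
import torch

def random_block_shuffle(x, k=2):
    # x: batch of images with shape (n, c, h, w); h and w divisible by k.
    n, c, h, w = x.shape
    bh, bw = h // k, w // k
    # Cut each image into a k x k grid of patches: (n, c, k, k, bh, bw).
    patches = x.unfold(2, bh, bh).unfold(3, bw, bw)
    patches = patches.contiguous().view(n, c, k * k, bh, bw)
    # Apply one random permutation of the patches to the whole batch.
    perm = torch.randperm(k * k)
    patches = patches[:, :, perm]
    # Reassemble the shuffled patches into full images.
    patches = patches.view(n, c, k, k, bh, bw)
    out = patches.permute(0, 1, 2, 4, 3, 5).contiguous().view(n, c, h, w)
    return out
```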
3 code implementations • ICLR 2020 • Jiadong Lin, Chuanbiao Song, Kun He, Li-Wei Wang, John E. Hopcroft
SIM is based on our discovery of the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over scale copies of the input images, so as to avoid "overfitting" to the white-box model being attacked and to generate more transferable adversarial examples.
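The scale-invariance idea can be sketched as averaging gradients over scale copies x / 2^i before each attack step. The PyTorch snippet below shows only this scale-averaging component (the full method in the paper additionally uses Nesterov momentum); the function names and hyperparameter values are chosen for illustration.

```python
import torch
import torch.nn.functional as F

def sim_gradient(model, x_adv, y, m=5):
    # Average the loss gradient over m scale copies x / 2^i, exploiting the
    # (approximate) scale invariance of the model's loss.
    grad = torch.zeros_like(x_adv)
    for i in range(m):
        x_scaled = (x_adv / (2 ** i)).clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_scaled), y)
        grad += torch.autograd.grad(loss, x_scaled)[0]
    return grad / m

def si_fgsm_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Iterative FGSM whose per-step gradient is the scale-averaged one.
    x_adv = x.clone().detach()
    for _ in range(steps):
        g = sim_gradient(model, x_adv, y)
        x_adv = (x_adv + alpha * g.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv
```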
no code implementations • 16 Apr 2019 • Xiaosen Wang, Kun He, Chuanbiao Song, Li-Wei Wang, John E. Hopcroft
In this way, AT-GAN can learn a distribution of adversarial examples that is very close to the distribution of real data.
2 code implementations • ICLR 2019 • Chuanbiao Song, Kun He, Li-Wei Wang, John E. Hopcroft
Our intuition is to regard adversarial training on the FGSM adversary as a domain adaptation task with a limited number of target domain samples.
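Under this domain-adaptation view, a training step would combine the classification loss on FGSM adversaries with a loss that aligns clean and adversarial feature statistics. The sketch below is an assumption-laden illustration: the `model.features` hook and the CORAL-style covariance alignment are stand-ins rather than the paper's exact losses, and `fgsm_example` is reused from the first sketch above.

```python
import torch
import torch.nn.functional as F

def coral_loss(f_clean, f_adv):
    # Align second-order statistics (covariances) of clean vs. adversarial
    # features, a common unsupervised domain-adaptation loss (CORAL-style).
    def cov(f):
        f = f - f.mean(dim=0, keepdim=True)
        return f.t() @ f / max(f.size(0) - 1, 1)
    d = f_clean.size(1)
    return (cov(f_clean) - cov(f_adv)).pow(2).sum() / (4 * d * d)

def atda_like_step(model, x, y, eps, lam=1.0):
    # One training step: classification loss on FGSM adversaries plus a
    # domain-adaptation term pulling adversarial features toward clean ones.
    # `model.features` is a hypothetical hook returning penultimate features.
    x_adv = fgsm_example(model, x, y, eps)
    f_clean, f_adv = model.features(x), model.features(x_adv)
    return F.cross_entropy(model(x_adv), y) + lam * coral_loss(f_clean, f_adv)
```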