1 code implementation • 27 Nov 2022 • Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Zhi-Quan Luo
Specifically, we provide the first bound on the adversarial Rademacher complexity of deep neural networks.
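For context, adversarial Rademacher complexity is typically obtained from the standard definition by replacing each loss term with its worst case over a norm ball around the input; a standard formulation (notation assumed for illustration, not taken from the abstract) is:

```latex
% Standard vs. adversarial Rademacher complexity of a function class F,
% for n samples (x_i, y_i), Rademacher variables sigma_i, loss ell,
% and perturbation budget epsilon (notation assumed for illustration).
\mathfrak{R}_n(\mathcal{F}) =
  \mathbb{E}_{\sigma}\Big[\sup_{f \in \mathcal{F}}
  \frac{1}{n}\sum_{i=1}^{n} \sigma_i \,\ell(f(x_i), y_i)\Big],
\qquad
\mathfrak{R}_n^{\mathrm{adv}}(\mathcal{F}) =
  \mathbb{E}_{\sigma}\Big[\sup_{f \in \mathcal{F}}
  \frac{1}{n}\sum_{i=1}^{n} \sigma_i
  \sup_{\|x_i' - x_i\|_p \le \epsilon} \ell(f(x_i'), y_i)\Big].
```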
1 code implementation • 3 Oct 2022 • Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Jue Wang, Zhi-Quan Luo
In adversarial machine learning, deep neural networks can fit the adversarial examples in the training set yet generalize poorly to adversarial examples on the test set.
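This gap can be observed with a standard PGD adversarial-training loop; below is a minimal PyTorch sketch (model, data loaders, and hyperparameters are placeholders, not the paper's setup):

```python
# Minimal sketch of the robust generalization gap: train on PGD adversarial
# examples, then compare robust accuracy on the training vs. test set.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate l_inf PGD adversarial examples for inputs x in [0, 1]."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep valid pixel range
    return x_adv.detach()

def robust_accuracy(model, loader):
    """Accuracy on PGD-perturbed inputs."""
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

def adversarial_train(model, train_loader, epochs=10, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in train_loader:
            x_adv = pgd_attack(model, x, y)
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()

# After training, robust_accuracy(model, train_loader) is typically much
# higher than robust_accuracy(model, test_loader): the phenomenon above.
```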
1 code implementation • 2 Oct 2022 • Jiancong Xiao, Zeyu Qin, Yanbo Fan, Baoyuan Wu, Jue Wang, Zhi-Quan Luo
Therefore, adversarial training for multiple perturbations (ATMP) is proposed to achieve adversarial robustness across different perturbation types ($\ell_1$-, $\ell_2$-, and $\ell_\infty$-norm-bounded perturbations).
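As an illustration of training against multiple perturbation types, a common baseline crafts one adversarial example per norm and back-propagates the worst-case loss (the "max" strategy of Tramèr & Boneh); ATMP's own weighting may differ. A minimal sketch, with assumed budgets and simplified $\ell_1$ handling:

```python
# Illustrative multi-perturbation adversarial training loss: one PGD attack
# per norm (l1/l2/linf), then train on the maximum of the three losses.
import torch
import torch.nn.functional as F

def project(delta, eps, norm):
    """Keep delta inside the eps-ball of the given norm. For l1/l2 this is
    simple rescaling (feasible, but not the exact Euclidean projection)."""
    if norm == "linf":
        return delta.clamp(-eps, eps)
    p = 2 if norm == "l2" else 1
    flat = delta.flatten(1)
    norms = flat.norm(p=p, dim=1, keepdim=True).clamp_min(1e-12)
    return (flat * (eps / norms).clamp(max=1.0)).view_as(delta)

def pgd(model, x, y, eps, norm, alpha, steps=10):
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        if norm == "linf":
            step = grad.sign()
        else:
            p = 2 if norm == "l2" else 1
            g = grad.flatten(1)
            step = (g / g.norm(p=p, dim=1, keepdim=True).clamp_min(1e-12)).view_as(grad)
        delta = project(delta.detach() + alpha * step, eps, norm)
    return (x + delta).clamp(0, 1).detach()

BUDGETS = {"linf": 8/255, "l2": 0.5, "l1": 12.0}  # assumed example budgets

def multi_perturbation_loss(model, x, y):
    """Worst-case loss over l1/l2/linf adversarial examples."""
    losses = [F.cross_entropy(model(pgd(model, x, y, eps, norm, alpha=eps / 4)), y)
              for norm, eps in BUDGETS.items()]
    return torch.stack(losses).max()
```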
1 code implementation • 2 Oct 2022 • Jiancong Xiao, Liusha Yang, Yanbo Fan, Jue Wang, Zhi-Quan Luo
On synthetic datasets, we theoretically prove that on-manifold adversarial examples are powerful, yet adversarial training focuses on off-manifold directions and ignores on-manifold adversarial examples.
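The usual way to construct on-manifold adversarial examples is to perturb the latent code of a generative model rather than the input itself, so the result stays on the learned data manifold. A minimal sketch, assuming a pre-trained `decoder` (latent $\to$ input):

```python
# Sketch of an on-manifold attack: maximize the classification loss over a
# small ball in the latent space of an assumed pre-trained generator, so the
# adversarial example is a decoded (on-manifold) sample.
import torch
import torch.nn.functional as F

def on_manifold_attack(model, decoder, z, y, eps=0.1, alpha=0.02, steps=10):
    """PGD in latent space: perturb z, decode, and attack the classifier."""
    delta = torch.zeros_like(z)
    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        x = decoder(z + delta)  # decoded sample lies on the data manifold
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
    return decoder(z + delta).detach()
```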
1 code implementation • 1 Jan 2021 • Jiancong Xiao, Liusha Yang, Zhi-Quan Luo
Standard adversarial training increases model robustness by extending the data-manifold boundary in the small-variance directions, whereas adversarial training with generative adversarial examples increases robustness by extending the boundary in the large-variance directions.
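The variance directions referred to here can be made concrete with PCA: principal components order the directions of the data by variance, with large-variance directions lying roughly along the manifold and small-variance directions roughly normal to it. A toy NumPy sketch (the data and split are illustrative):

```python
# Sketch of small- vs. large-variance directions via PCA on toy data.
import numpy as np

rng = np.random.default_rng(0)
# Anisotropic toy data: per-coordinate standard deviations from 3.0 down to 0.1.
X = rng.normal(size=(1000, 16)) * np.linspace(3.0, 0.1, 16)

Xc = X - X.mean(axis=0)
# Rows of Vt are principal directions, ordered by decreasing variance.
_, S, Vt = np.linalg.svd(Xc, full_matrices=False)
large_var_dir = Vt[0]    # direction the data varies most along (on-manifold)
small_var_dir = Vt[-1]   # direction the data varies least along (off-manifold)

eps = 0.5
x = X[0]
x_on  = x + eps * large_var_dir   # perturbation along the manifold
x_off = x + eps * small_var_dir   # perturbation off the manifold
```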