Search Results for author: Jiancong Xiao

Found 5 papers, 5 papers with code

Adversarial Rademacher Complexity of Deep Neural Networks

1 code implementation • 27 Nov 2022 • Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Zhi-Quan Luo

Specifically, we provide the first bound on the adversarial Rademacher complexity of deep neural networks.
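For reference, the quantity studied here is usually defined as the Rademacher complexity of the adversarial loss class; a standard formulation (notation assumed here, not quoted from the paper) is

$$
\widetilde{\mathfrak{R}}_S(\mathcal{F}) \;=\; \mathbb{E}_{\boldsymbol{\sigma}}\left[\,\sup_{f\in\mathcal{F}}\ \frac{1}{n}\sum_{i=1}^{n} \sigma_i \max_{\|x_i'-x_i\|\le\epsilon} \ell\big(f(x_i'),\, y_i\big)\right],
$$

where $\sigma_1,\dots,\sigma_n$ are i.i.d. Rademacher variables, $S=\{(x_i,y_i)\}_{i=1}^n$ is the training sample, and $\epsilon$ is the attack radius. The inner maximum is what distinguishes it from the standard (non-adversarial) Rademacher complexity.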

Stability Analysis and Generalization Bounds of Adversarial Training

1 code implementation • 3 Oct 2022 • Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Jue Wang, Zhi-Quan Luo

In adversarial machine learning, deep neural networks can fit the adversarial examples on the training set but generalize poorly to adversarial examples on the test set (a minimal adversarial-training loop is sketched below).

Generalization Bounds
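As background for this entry, here is a minimal sketch of the adversarial training procedure whose generalization gap the paper analyzes. It is a generic $\ell_\infty$ PGD baseline, not the paper's exact setup; `model`, `loader`, and `optimizer` are assumed to be a standard PyTorch classifier, a data loader of `(x, y)` batches with inputs in `[0, 1]`, and an optimizer.

```python
# Generic l_inf PGD adversarial training sketch (assumed baseline, not the paper's exact setup).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient ascent on the cross-entropy loss within an l_inf ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                        # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)      # project onto the l_inf ball
            x_adv = x_adv.clamp(0, 1)                                  # stay in valid pixel range
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """One epoch of adversarial training: fit the model on its own PGD examples.
    The generalization gap studied in the paper is the difference between this
    adversarial training loss and the adversarial loss on unseen test data."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```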

Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis

1 code implementation • 2 Oct 2022 • Jiancong Xiao, Zeyu Qin, Yanbo Fan, Baoyuan Wu, Jue Wang, Zhi-Quan Luo

Therefore, adversarial training for multiple perturbations (ATMP) is proposed to generalize adversarial robustness across different perturbation types ($\ell_1$-, $\ell_2$-, and $\ell_\infty$-norm-bounded perturbations); see the sketch below.

Adversarial Robustness
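The sketch below illustrates the generic worst-case ("max") variant of multi-perturbation adversarial training: each step trains on whichever perturbation type currently yields the highest loss. The attack functions `attack_l1`, `attack_l2`, and `attack_linf` are assumed helpers (e.g. PGD projected onto the corresponding norm ball) that return detached adversarial examples; the adaptive smoothness weighting proposed in the paper is not reproduced here.

```python
# Worst-case multi-perturbation adversarial training sketch (generic "max" baseline;
# the paper's adaptive smoothness weighting is NOT reproduced here).
import torch
import torch.nn.functional as F

def atmp_step(model, optimizer, x, y, attacks):
    """One training step on the worst-case perturbation type for this batch.
    `attacks` is a list of attack functions, e.g. [attack_l1, attack_l2, attack_linf],
    each returning a detached adversarial version of x."""
    losses = []
    for attack in attacks:
        x_adv = attack(model, x, y)
        losses.append(F.cross_entropy(model(x_adv), y))
    loss = torch.stack(losses).max()   # train on the currently hardest perturbation type
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A per-sample (rather than per-batch) selection of the worst perturbation type is also common; the batch-level maximum above is just the simplest version of the idea.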

Understanding Adversarial Robustness Against On-manifold Adversarial Examples

1 code implementation • 2 Oct 2022 • Jiancong Xiao, Liusha Yang, Yanbo Fan, Jue Wang, Zhi-Quan Luo

Theoretically, on synthetic datasets, we prove that on-manifold adversarial examples are powerful, yet adversarial training focuses on off-manifold directions and ignores on-manifold adversarial examples (a latent-space attack sketch follows below).

Adversarial Robustness
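As a rough illustration of on-manifold adversarial examples, the sketch below perturbs the latent code of a pretrained generative model so that the decoded sample fools a classifier while staying on the learned data manifold. `decoder` (mapping latents to inputs) and `classifier` are assumed pretrained models, and this is a generic latent-space attack rather than the paper's exact construction.

```python
# Generic on-manifold attack sketch: gradient ascent in the latent space of an
# assumed pretrained generative model, so the adversarial example stays on the
# learned data manifold (not the paper's exact procedure).
import torch
import torch.nn.functional as F

def on_manifold_attack(decoder, classifier, z, y, eps=0.5, alpha=0.05, steps=20):
    """Maximize the classification loss over latent codes within an l_2 ball of
    radius eps around z (z: latent codes, assumed not to require grad)."""
    z_adv = z.clone()
    for _ in range(steps):
        z_adv.requires_grad_(True)
        loss = F.cross_entropy(classifier(decoder(z_adv)), y)
        grad = torch.autograd.grad(loss, z_adv)[0]
        with torch.no_grad():
            z_adv = z_adv + alpha * grad                              # ascent in latent space
            delta = z_adv - z
            norm = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
            factor = (eps / norm).clamp(max=1.0).view(-1, *([1] * (z.dim() - 1)))
            z_adv = z + delta * factor                                # project onto the l_2 ball
    return decoder(z_adv).detach()                                    # decoded on-manifold example
```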

Disentangling Adversarial Robustness in Directions of the Data Manifold

1 code implementation • 1 Jan 2021 • Jiancong Xiao, Liusha Yang, Zhi-Quan Luo

Standard adversarial training increases model robustness by extending the data-manifold boundary in the small-variance directions, whereas adversarial training with generative adversarial examples increases robustness by extending the boundary in the large-variance directions.

Adversarial Robustness
