1 code implementation • 20 Jun 2024 • Zhaozhe Hu, Jia-Li Yin, Bin Chen, Luojun Lin, Bo-Hao Chen, Ximeng Liu
Self-ensemble adversarial training methods improve model robustness by ensembling models from different training epochs, for example via model weight averaging (WA).
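As a rough illustration of weight averaging over training steps (a minimal sketch, not necessarily the exact scheme of this paper), the following PyTorch snippet maintains a self-ensemble model via an exponential moving average; the decay value and model are assumed for the example.

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(10, 2)            # online model being adversarially trained (toy example)
wa_model = copy.deepcopy(model)     # self-ensemble (weight-averaged) model

@torch.no_grad()
def update_weight_average(wa_model, model, decay=0.999):
    # Exponential moving average over training steps; decay is an assumed value.
    for wa_p, p in zip(wa_model.parameters(), model.parameters()):
        wa_p.mul_(decay).add_(p, alpha=1.0 - decay)

# Called after every optimizer step of the online model:
update_weight_average(wa_model, model)
```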
no code implementations • 19 Jun 2024 • Jia-Li Yin, Haoyuan Zheng, Ximeng Liu
In this paper, we investigate robust UDA from a representation learning perspective and design a novel algorithm, dubbed MIRoUDA, based on mutual information theory.
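The snippet above does not spell out the objective; purely as a generic illustration of a mutual-information-style term in representation learning (not the MIRoUDA algorithm itself), the sketch below uses an InfoNCE lower bound between two representations of the same samples, with the temperature and names assumed.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) representations of two views of the same inputs.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                        # pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)      # positives on the diagonal
    return F.cross_entropy(logits, targets)                   # MI lower bound (InfoNCE)
```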
1 code implementation • 23 Nov 2023 • Luojun Lin, Zhifeng Shen, Jia-Li Yin, Qipeng Liu, Yuanlong Yu, WeiJie Chen
To this end, we propose a novel MetaFBP framework, in which we devise a universal feature extractor to capture the aesthetic commonality and then adapt to the aesthetic individuality by shifting the decision boundary of the predictor via a meta-learning mechanism.
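The meta-learning mechanism is not detailed in this snippet; a minimal MAML-style sketch, assuming a shared feature extractor and a small predictor head adapted with a few inner-loop gradient steps on one user's support set, is given below. All names, dimensions, and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_extractor = nn.Linear(512, 64)   # shared: captures aesthetic commonality
predictor = nn.Linear(64, 1)             # adapted per user: aesthetic individuality

def adapt_predictor(support_x, support_y, inner_lr=0.01, steps=3):
    # Inner loop: a few gradient steps on one user's support set,
    # shifting the predictor's decision boundary toward that user's preferences.
    fast_weights = [p.clone() for p in predictor.parameters()]
    for _ in range(steps):
        feats = feature_extractor(support_x)
        preds = F.linear(feats, fast_weights[0], fast_weights[1])
        loss = F.mse_loss(preds.squeeze(-1), support_y)
        grads = torch.autograd.grad(loss, fast_weights, create_graph=True)
        fast_weights = [w - inner_lr * g for w, g in zip(fast_weights, grads)]
    return fast_weights  # used to score the user's query images in the outer loop
```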
1 code implementation • ICCV 2023 • Bin Chen, Jia-Li Yin, Shukai Chen, Bo-Hao Chen, Ximeng Liu
Alternatively, model ensemble adversarial attacks fuse the outputs of surrogate models with diverse architectures into an ensemble loss; because the generated adversarial example can fool multiple models concurrently, it is more likely to transfer to other models.
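A minimal sketch of such an ensemble loss, assuming three off-the-shelf torchvision surrogates, logit averaging, and a single FGSM-style step; the exact fusion strategy and attack schedule of the paper may differ.

```python
import torch
import torch.nn.functional as F
from torchvision import models

surrogates = [models.resnet18(), models.vgg11(), models.densenet121()]  # diverse architectures
for m in surrogates:
    m.eval()

def ensemble_attack_step(x, y, eps=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    # Fuse outputs from all surrogates into one ensemble loss.
    logits = torch.stack([m(x_adv) for m in surrogates]).mean(dim=0)
    loss = F.cross_entropy(logits, y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # One sign-gradient step; an iterative attack would repeat this with a smaller step size.
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()
```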
1 code implementation • 12 Dec 2022 • Wanqing Zhu, Jia-Li Yin, Bo-Hao Chen, Ximeng Liu
In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving the adversarial robustness of UDA models.
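The pipeline itself is not described beyond its name here; the sketch below shows only the generic self-training idea it builds on: a source-trained model produces pseudo labels for unlabeled target data, which are then used for adversarial training of the target model. The function names, confidence threshold, and choice of attack are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(source_model, target_x, threshold=0.9):
    # Keep only confident predictions as pseudo labels for the target domain.
    probs = F.softmax(source_model(target_x), dim=1)
    conf, labels = probs.max(dim=1)
    mask = conf >= threshold
    return target_x[mask], labels[mask]

def adversarial_train_step(target_model, optimizer, x, y, attack):
    # Standard adversarial training on pseudo-labeled target data.
    x_adv = attack(target_model, x, y)          # e.g. a PGD attack (assumed)
    loss = F.cross_entropy(target_model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```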
1 code implementation • 2 Dec 2022 • Jian-Nan Su, Min Gan, Guang-Yong Chen, Jia-Li Yin, C. L. Philip Chen
Building on this finding, we propose Global Learnable Attention (GLA), which adaptively modifies the similarity scores of non-local textures during training instead of relying only on a fixed similarity scoring function such as the dot product.
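As a minimal sketch of the underlying idea (not the exact GLA formulation), the module below replaces the fixed dot-product similarity of non-local attention with a small learnable MLP scorer; the scorer architecture and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class LearnableAttention(nn.Module):
    """Non-local attention whose similarity scores come from a learnable MLP
    instead of a fixed dot product (illustrative only)."""

    def __init__(self, dim, hidden=32):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):                        # x: (batch, positions, dim)
        b, n, d = x.shape
        q = x.unsqueeze(2).expand(b, n, n, d)    # query features
        k = x.unsqueeze(1).expand(b, n, n, d)    # key features
        scores = self.scorer(torch.cat([q, k], dim=-1)).squeeze(-1)  # (b, n, n)
        attn = scores.softmax(dim=-1)
        return attn @ x                          # aggregate non-local textures

# Usage: LearnableAttention(dim=64)(torch.randn(2, 100, 64)) has shape (2, 100, 64)
```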
no code implementations • 1 Dec 2021 • Jia-Li Yin, Lehui Xie, Wanqing Zhu, Ximeng Liu, Bo-Hao Chen
However, most existing adversarial training methods focus on improving robust accuracy by strengthening the adversarial examples, while neglecting the growing shift between natural data and adversarial examples, which leads to a dramatic drop in natural accuracy.
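One common way to keep natural accuracy from collapsing is to train on both natural and adversarial views of each batch and balance the two losses; the sketch below shows that generic recipe only (the weighting and the attack are assumptions, not the specific method of this paper).

```python
import torch.nn.functional as F

def mixed_adversarial_loss(model, x, y, attack, alpha=0.5):
    # Balance natural accuracy (clean loss) against robustness (adversarial loss).
    x_adv = attack(model, x, y)                     # e.g. PGD (assumed)
    clean_loss = F.cross_entropy(model(x), y)
    adv_loss = F.cross_entropy(model(x_adv), y)
    return alpha * clean_loss + (1 - alpha) * adv_loss
```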
no code implementations • 5 Feb 2021 • Lehui Xie, Yaopeng Wang, Jia-Li Yin, Ximeng Liu
Previous methods try to reduce the computational burden of adversarial training with single-step adversarial example generation schemes. These schemes improve efficiency but introduce the problem of catastrophic overfitting, where robust accuracy against the Fast Gradient Sign Method (FGSM) reaches nearly 100% while robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% within a single epoch.
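For reference, the two attacks differ only in whether the sign-gradient step is taken once (FGSM) or iteratively with projection back into the epsilon-ball (PGD); a minimal sketch, with step sizes and iteration counts as assumed values:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # Single-step attack used by fast adversarial training.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Iterative attack used to evaluate robust accuracy; catastrophic overfitting
    # shows up as high FGSM accuracy but near-zero PGD accuracy.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv
```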