no code implementations • 6 Apr 2023 • Xuanzhe Xiao, Zeng Li, Chuanlong Xie, Fengwei Zhou
To capitalize on this discovery, we introduce a novel regularization technique, termed Heavy-Tailed Regularization, which explicitly promotes a more heavy-tailed spectrum in the weight matrix.
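The excerpt does not spell out the penalty, so the following is a minimal PyTorch sketch under the assumption that the regularizer rewards a heavier-tailed singular-value spectrum via a Hill-type estimate of the power-law exponent; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn

def hill_alpha(w: torch.Tensor, k: int = 20) -> torch.Tensor:
    """Hill-type estimate of the power-law exponent of the singular-value
    spectrum of a weight matrix (smaller alpha = heavier tail)."""
    s = torch.linalg.svdvals(w)            # singular values, descending
    k = min(k, s.numel() - 1)
    if k < 1:
        return torch.zeros((), device=w.device)
    return 1.0 + k / torch.sum(torch.log(s[:k] / s[k]))

def heavy_tail_penalty(model: nn.Module, lam: float = 1e-3) -> torch.Tensor:
    """Regularizer that decreases as weight spectra become more heavy-tailed."""
    penalty = sum(hill_alpha(m.weight) for m in model.modules()
                  if isinstance(m, nn.Linear))
    return lam * penalty

# illustrative training step:
# loss = criterion(model(x), y) + heavy_tail_penalty(model)
```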
no code implementations • 1 Apr 2023 • Rui Sun, Fengwei Zhou, Zhenhua Dong, Chuanlong Xie, Lanqing Hong, Jiawei Li, Rui Zhang, Zhen Li, Zhenguo Li
By adjusting the perturbation strength in the direction of the paths, our proposed augmentation is controllable and auditable.
1 code implementation • 24 Dec 2022 • Feng Xue, Zi He, Chuanlong Xie, Falong Tan, Zhenguo Li
This advance raises a natural question: Can we leverage the diversity of multiple pre-trained models to improve the performance of post hoc detection methods?
Out-of-Distribution Detection
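As one concrete illustration of the question above, here is a hedged sketch that simply averages a standard post hoc score (the energy score) over a zoo of frozen pre-trained classifiers; this is a naive baseline for leveraging model diversity, not necessarily the method proposed in the paper.

```python
import torch

@torch.no_grad()
def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """Post hoc energy OOD score: higher = more in-distribution."""
    return T * torch.logsumexp(logits / T, dim=-1)

@torch.no_grad()
def ensemble_ood_score(models, x: torch.Tensor) -> torch.Tensor:
    """Average per-model energy scores over a zoo of frozen pre-trained models."""
    scores = [energy_score(m(x)) for m in models]
    return torch.stack(scores, dim=0).mean(dim=0)

# usage (illustrative): flag inputs whose ensemble score falls below a
# threshold chosen on held-out in-distribution data.
# is_ood = ensemble_ood_score(models, x) < threshold
```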
no code implementations • 17 Oct 2022 • Qishi Dong, Awais Muhammad, Fengwei Zhou, Chuanlong Xie, Tianyang Hu, Yongxin Yang, Sung-Ho Bae, Zhenguo Li
We evaluate our paradigm on a diverse model zoo consisting of 35 models for various OoD tasks and demonstrate: (i) model ranking is better correlated with fine-tuning ranking than previous methods and up to 9859x faster than brute-force fine-tuning; (ii) OoD generalization after model ensemble with feature selection outperforms the state-of-the-art methods, and the accuracy on the most challenging task, DomainNet, is improved from 46.5% to 50.6%.
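The excerpt reports ranking quality and speed-up but not the scoring rule, so the sketch below uses a generic stand-in: cache each model's features once and rank the zoo by a linear-probe validation score instead of fine-tuning every model. The `cached_features` dictionary and its layout are assumptions for illustration only, not the paper's ranking metric.

```python
from sklearn.linear_model import LogisticRegression

def rank_model_zoo(cached_features, y_train, y_val):
    """Rank pre-trained models by a cheap linear-probe proxy on cached
    features instead of brute-force fine-tuning each one.

    cached_features: dict mapping model name -> (train_feats, val_feats),
    where feats are numpy arrays extracted once per model.
    """
    scores = {}
    for name, (f_train, f_val) in cached_features.items():
        probe = LogisticRegression(max_iter=1000).fit(f_train, y_train)
        scores[name] = probe.score(f_val, y_val)
    # highest proxy score first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```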
no code implementations • 9 Oct 2022 • Yao Zhu, Yuefeng Chen, Chuanlong Xie, Xiaodan Li, Rong Zhang, Hui Xue, Xiang Tian, Bolun Zheng, Yaowu Chen
Out-of-distribution (OOD) detection is a critical task for ensuring the reliability and safety of deep neural networks in real-world scenarios.
no code implementations • 1 Jun 2022 • Siyi Hu, Chuanlong Xie, Xiaodan Liang, Xiaojun Chang
In this study, we quantify the agents' behavior differences and relate them to policy performance via Role Diversity, a metric that measures the characteristics of MARL tasks.
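The metric itself is not defined in this excerpt; a simple illustrative stand-in for "behavior difference" is the mean pairwise Jensen-Shannon distance between the agents' action distributions, sketched below (the paper's Role Diversity measure may be defined differently).

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import jensenshannon

def role_diversity(action_dists: np.ndarray) -> float:
    """Illustrative behavior-difference measure: mean pairwise
    Jensen-Shannon distance between agents' action distributions.

    action_dists: array of shape (n_agents, n_actions), rows sum to 1.
    """
    pairs = list(combinations(range(len(action_dists)), 2))
    if not pairs:
        return 0.0
    return float(np.mean([jensenshannon(action_dists[i], action_dists[j])
                          for i, j in pairs]))
```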
no code implementations • NeurIPS 2021 • Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li
First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation.
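As a hedged sketch of how such a transfer could be set up in code, the snippet below distills temperature-scaled soft labels from a frozen (adversarially trained) teacher on mixup-augmented inputs; `teacher` and `student` are placeholder models, and the loss is illustrative rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def mixup_distill_loss(student, teacher, x, alpha=1.0, T=4.0):
    """Distill soft labels from a (robust) teacher on mixup-augmented
    inputs; the teacher is frozen, only the student receives gradients."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]

    with torch.no_grad():
        t_logits = teacher(x_mix)
    s_logits = student(x_mix)

    # temperature-scaled KL between teacher and student predictions
    return F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
```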
no code implementations • 29 Sep 2021 • Siyi Hu, Chuanlong Xie, Xiaodan Liang, Xiaojun Chang
In addition, role diversity can help to find a better training strategy and increase performance in cooperative MARL.
no code implementations • ICCV 2021 • Hang Xu, Ning Kang, Gengwei Zhang, Chuanlong Xie, Xiaodan Liang, Zhenguo Li
Fine-tuning from pre-trained ImageNet models has been a simple, effective, and popular approach for various computer vision tasks.
no code implementations • NeurIPS 2021 • Haotian Ye, Chuanlong Xie, Tianle Cai, Ruichen Li, Zhenguo Li, LiWei Wang
We also introduce a new concept, the expansion function, which characterizes to what extent the variance is amplified in the test domains over the training domains and therefore gives a quantitative meaning to invariant features.
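Paraphrasing the sentence above (and not claiming the paper's formal definition), the expansion function can be read as a bound of roughly the following form:

```latex
% Hedged paraphrase: s is an increasing function that controls the
% test-domain variance of any feature by its training-domain variance.
\[
  \operatorname{Var}_{e \in \mathcal{E}_{\text{test}}}\!\big[\phi\big]
  \;\le\;
  s\!\Big(\operatorname{Var}_{e \in \mathcal{E}_{\text{train}}}\!\big[\phi\big]\Big)
  \qquad \text{for every feature } \phi .
\]
```

Under such a bound, a feature whose variance across the training domains is small stays quantifiably stable on the test domains, which is what "invariant feature" means here.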
no code implementations • 21 Jan 2021 • Haotian Ye, Chuanlong Xie, Yue Liu, Zhenguo Li
One common definition of OOD accuracy is worst-domain accuracy.
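Written out, worst-domain accuracy is simply the minimum per-domain accuracy over the set of domains under consideration:

```latex
\[
  \mathrm{Acc}_{\mathrm{OOD}}(f) \;=\; \min_{e \in \mathcal{E}} \mathrm{Acc}_{e}(f)
\]
```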
no code implementations • 22 Dec 2020 • Fengwei Zhou, Jiawei Li, Chuanlong Xie, Fei Chen, Lanqing Hong, Rui Sun, Zhenguo Li
Automated data augmentation has shown superior performance in image recognition.
no code implementations • 14 Aug 2020 • Zeng Li, Chuanlong Xie, Qinwen Wang
Furthermore, the finite-sample distribution and the confidence interval of the prediction risk are provided.
no code implementations • 13 Jun 2020 • Chuanlong Xie, Haotian Ye, Fei Chen, Yue Liu, Rui Sun, Zhenguo Li
The key to out-of-distribution (OOD) generalization is to generalize invariance from the training domains to the target domains.
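One common way to encourage such invariance (offered here only as an illustrative sketch, not as the paper's method) is to penalize how much the per-domain risks disagree across training domains; `per_domain_losses` is assumed to be a list of scalar risks, one per training domain.

```python
import torch

def risk_variance_penalized_loss(per_domain_losses, lam=1.0):
    """Illustrative invariance-style objective: average per-domain risk
    plus a penalty on the variance of the risks across training domains,
    discouraging predictors that only work on a subset of domains."""
    risks = torch.stack(list(per_domain_losses))   # shape: (n_domains,)
    return risks.mean() + lam * risks.var(unbiased=False)
```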