Search Results for author: Chuanlong Xie

Found 15 papers, 1 paper with code

Enhancing Out-of-Distribution Detection with Multitesting-based Layer-wise Feature Fusion

no code implementations • 16 Mar 2024 • Jiawei Li, Sitong Li, Shanshan Wang, Yicheng Zeng, Falong Tan, Chuanlong Xie

When trained using KNN on CIFAR-10, MLOD-Fisher significantly lowers the false positive rate (FPR) from 24.09% to 7.47% on average, compared to using only the features of the last layer.

Out-of-Distribution Detection
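
The snippet above only names the ingredients (layer-wise features, KNN scoring, and a multitesting-style fusion suggested by the "MLOD-Fisher" name), so the following is a speculative sketch rather than the paper's algorithm: per-layer KNN distances are turned into empirical p-values on an in-distribution calibration split and combined with Fisher's method. The function names, the calibration split, and k are illustrative assumptions.

    # Speculative sketch (not the paper's algorithm): per-layer KNN distances are
    # converted to empirical p-values on an in-distribution calibration split and
    # fused with Fisher's method, as the "MLOD-Fisher" name suggests.
    import numpy as np
    from scipy import stats

    def knn_score(feats, bank, k=50):
        # OOD score = distance to the k-th nearest neighbour in the ID feature bank.
        d = np.linalg.norm(feats[:, None, :] - bank[None, :, :], axis=-1)
        return np.sort(d, axis=1)[:, k - 1]

    def fused_ood_pvalue(test_layers, bank_layers, calib_layers, k=50):
        pvals = []
        for test_f, bank_f, calib_f in zip(test_layers, bank_layers, calib_layers):
            calib_s = knn_score(calib_f, bank_f, k)   # held-out ID scores per layer
            test_s = knn_score(test_f, bank_f, k)
            # Empirical p-value under the ID null: P(calibration score >= test score).
            p = (1.0 + (calib_s[None, :] >= test_s[:, None]).sum(axis=1)) / (1.0 + len(calib_s))
            pvals.append(p)
        pvals = np.stack(pvals, axis=1)               # (n_test, n_layers)
        chi2 = -2.0 * np.log(pvals).sum(axis=1)       # Fisher's combination statistic
        return stats.chi2.sf(chi2, df=2 * pvals.shape[1])  # small value -> flag as OOD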

Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks

no code implementations • 6 Apr 2023 • Xuanzhe Xiao, Zeng Li, Chuanlong Xie, Fengwei Zhou

To capitalize on this discovery, we introduce a novel regularization technique, termed Heavy-Tailed Regularization, which explicitly promotes a heavier-tailed spectrum in the weight matrices.
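
The abstract snippet does not state the form of the penalty, so the sketch below is only one hypothetical instantiation: estimate a power-law tail index of each weight matrix's singular-value spectrum with a Hill-type estimator and penalize large indices, nudging training toward heavier-tailed spectra. The estimator, tail_frac, coef, and the restriction to 2-D weights are all assumptions.

    # Hypothetical instantiation only; the snippet does not give the actual penalty.
    # Estimate a power-law tail index of each weight matrix's singular values with a
    # Hill-type estimator and penalize large indices (large index = light tail).
    import torch

    def hill_tail_index(weight, tail_frac=0.25):
        s = torch.linalg.svdvals(weight)               # singular values, descending
        if s.numel() < 2:
            return s.sum() * 0.0                       # degenerate matrix, nothing to shape
        top = s[:max(2, int(tail_frac * s.numel()))]
        return 1.0 / (torch.log(top[:-1] / top[-1]).mean() + 1e-12)

    def heavy_tail_penalty(model, coef=1e-3):
        # Illustrative choice: shape only 2-D (fully connected) weight matrices.
        penalty = sum(hill_tail_index(p) for p in model.parameters() if p.dim() == 2)
        return coef * penalty                          # add to the task loss each step

In use, heavy_tail_penalty(model) would simply be added to the training loss; whether the paper shapes the spectrum this way cannot be determined from the snippet alone.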

Fair-CDA: Continuous and Directional Augmentation for Group Fairness

no code implementations • 1 Apr 2023 • Rui Sun, Fengwei Zhou, Zhenhua Dong, Chuanlong Xie, Lanqing Hong, Jiawei Li, Rui Zhang, Zhen Li, Zhenguo Li

By adjusting the perturbation strength in the direction of the paths, our proposed augmentation is controllable and auditable.

Data Augmentation Disentanglement +1

Boosting Out-of-Distribution Detection with Multiple Pre-trained Models

1 code implementation • 24 Dec 2022 • Feng Xue, Zi He, Chuanlong Xie, Falong Tan, Zhenguo Li

This advance raises a natural question: Can we leverage the diversity of multiple pre-trained models to improve the performance of post hoc detection methods?

Out-of-Distribution Detection
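
One natural answer to the question posed in the snippet, though not necessarily the paper's, is to compute a post-hoc score per pre-trained model and aggregate after standardizing on in-distribution data. The energy score and the z-score averaging below are assumptions used only to make the idea concrete.

    # Hypothetical sketch, not the paper's aggregation rule: score each pre-trained
    # model post hoc (energy score assumed here) and average after standardizing
    # with in-distribution statistics so the scales are comparable.
    import numpy as np
    from scipy.special import logsumexp

    def energy_score(logits):
        # Negative free energy of the logits; larger values look more in-distribution.
        return logsumexp(logits, axis=1)

    def ensemble_ood_score(test_logits_per_model, id_logits_per_model):
        fused = 0.0
        for test_logits, id_logits in zip(test_logits_per_model, id_logits_per_model):
            s, s_id = energy_score(test_logits), energy_score(id_logits)
            fused = fused + (s - s_id.mean()) / (s_id.std() + 1e-8)
        return fused / len(test_logits_per_model)     # lower -> more likely OOD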

ZooD: Exploiting Model Zoo for Out-of-Distribution Generalization

no code implementations • 17 Oct 2022 • Qishi Dong, Awais Muhammad, Fengwei Zhou, Chuanlong Xie, Tianyang Hu, Yongxin Yang, Sung-Ho Bae, Zhenguo Li

We evaluate our paradigm on a diverse model zoo consisting of 35 models for various OoD tasks and demonstrate: (i) model ranking is better correlated with fine-tuning ranking than previous methods and up to 9859x faster than brute-force fine-tuning; (ii) OoD generalization after model ensemble with feature selection outperforms the state-of-the-art methods, and the accuracy on the most challenging task, DomainNet, is improved from 46.5% to 50.6%.

feature selection Out-of-Distribution Generalization

Boosting Out-of-distribution Detection with Typical Features

no code implementations • 9 Oct 2022 • Yao Zhu, Yuefeng Chen, Chuanlong Xie, Xiaodan Li, Rong Zhang, Hui Xue, Xiang Tian, Bolun Zheng, Yaowu Chen

Out-of-distribution (OOD) detection is a critical task for ensuring the reliability and safety of deep neural networks in real-world scenarios.

Out-of-Distribution Detection

Policy Diagnosis via Measuring Role Diversity in Cooperative Multi-agent RL

no code implementations • 1 Jun 2022 • Siyi Hu, Chuanlong Xie, Xiaodan Liang, Xiaojun Chang

In this study, we quantify the behavior differences between agents and relate them to policy performance via Role Diversity, a metric that characterizes MARL tasks.

SMAC+ Starcraft

MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps

no code implementations • NeurIPS 2021 • Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li

First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation.

Transfer Learning
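
Reading the snippet and the title together, a plausible sketch of the distillation step is to feed mixup inputs to both networks and match the student's activated channel maps to those of the adversarially trained teacher. The map definition (ReLU plus per-channel normalization), the L1 matching loss, and the mixup details are assumptions, not the paper's exact recipe.

    # Plausible reading with assumed details: mixup the inputs, then match the
    # student's "activated channel maps" (ReLU'd, per-channel-normalized feature
    # maps) to those of the adversarially trained teacher with an L1 loss.
    import torch
    import torch.nn.functional as F

    def mixup(x, alpha=1.0):
        lam = torch.distributions.Beta(alpha, alpha).sample()
        return lam * x + (1 - lam) * x[torch.randperm(x.size(0), device=x.device)]

    def activated_channel_maps(feats):
        a = F.relu(feats)                              # keep only positive activations
        return a / (a.flatten(2).norm(dim=2, keepdim=True).unsqueeze(-1) + 1e-8)

    def acm_distill_loss(teacher_feats, student_feats):
        # Both are lists of (B, C, H, W) feature maps from chosen intermediate layers.
        loss = 0.0
        for t, s in zip(teacher_feats, student_feats):
            loss = loss + F.l1_loss(activated_channel_maps(s),
                                    activated_channel_maps(t.detach()))
        return loss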

Role Diversity Matters: A Study of Cooperative Training Strategies for Multi-Agent RL

no code implementations • 29 Sep 2021 • Siyi Hu, Chuanlong Xie, Xiaodan Liang, Xiaojun Chang

In addition, role diversity can help to find a better training strategy and increase performance in cooperative MARL.

SMAC+ Starcraft +1

Towards a Theoretical Framework of Out-of-Distribution Generalization

no code implementations • NeurIPS 2021 • Haotian Ye, Chuanlong Xie, Tianle Cai, Ruichen Li, Zhenguo Li, LiWei Wang

We also introduce a new concept, the expansion function, which characterizes to what extent the variance is amplified in the test domains relative to the training domains, thereby giving a quantitative meaning to invariant features.

Domain Generalization Model Selection +1
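
Rendering the snippet's description in illustrative (not the paper's) notation: if V_tr(φ) and V_te(φ) denote a feature's variation over the training and test domains, an expansion function s bounds how much that variation can be amplified.

    % Illustrative notation; the paper's formal definition may differ in its details.
    \[
        V_{\mathrm{te}}(\phi) \;\le\; s\bigl(V_{\mathrm{tr}}(\phi)\bigr),
        \qquad s \text{ monotonically increasing}, \quad s(0) = 0,
    \]
    % so a feature whose variation vanishes on the training domains stays invariant
    % on the test domains, giving "invariance" the quantitative meaning described above.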

Provable More Data Hurt in High Dimensional Least Squares Estimator

no code implementations • 14 Aug 2020 • Zeng Li, Chuanlong Xie, Qinwen Wang

Furthermore, the finite-sample distribution and the confidence interval of the prediction risk are provided.


Risk Variance Penalization

no code implementations • 13 Jun 2020 • Chuanlong Xie, Haotian Ye, Fei Chen, Yue Liu, Rui Sun, Zhenguo Li

The key to out-of-distribution (OOD) generalization is to generalize invariance from training domains to target domains.
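
Going by the method's name and this framing, a minimal sketch is to penalize how much the empirical risk varies across training domains, in the spirit of variance-based invariance penalties; whether the actual penalty is the variance, its square root, or a different weighting is not determined by the snippet.

    # Minimal sketch based on the method's name: penalize how unevenly the empirical
    # risk is distributed across training domains. The use of the plain variance and
    # the weight beta are assumptions, not necessarily the paper's exact objective.
    import torch

    def rvp_objective(per_domain_risks, beta=1.0):
        # per_domain_risks: list of scalar empirical risks, one per training domain.
        risks = torch.stack(list(per_domain_risks))
        return risks.mean() + beta * risks.var(unbiased=False)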
