Out-of-distribution Detection with Implicit Outlier Transformation

9 Mar 2023 · Qizhou Wang, Junjie Ye, Feng Liu, Quanyu Dai, Marcus Kalander, Tongliang Liu, Jianye Hao, Bo Han

Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection, enhancing detection capability via model fine-tuning with surrogate OOD data. However, surrogate data typically deviate from test OOD data, so the performance of OE can be weakened when facing unseen OOD data. To address this issue, we propose a novel OE-based approach that makes the model perform well in unseen OOD situations, even for unseen OOD cases. It leads to a min-max learning scheme: searching for synthesized OOD data that lead to the worst judgments, and learning from such OOD data for uniform performance in OOD detection. In our realization, these worst-case OOD data are synthesized by transforming the original surrogate ones. Specifically, the associated transform functions are learned implicitly, based on our novel insight that model perturbation leads to data transformation. Our methodology thus offers an efficient way of synthesizing OOD data that can benefit the detection model beyond the surrogate OOD data alone. We conduct extensive experiments under various OOD detection setups, demonstrating the effectiveness of our method against its advanced counterparts.
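The key insight above — that perturbing the model's weights implicitly transforms the surrogate OOD data — can be illustrated with a minimal sketch. The code below is not the paper's actual implementation (which the authors call DOE); it is a toy NumPy example, assuming a linear scoring model, where a bounded weight perturbation is chosen to maximize the model's confidence on surrogate OOD inputs (the "worst-case" inner step of the min-max scheme). The names `energy_score` and `worst_case_perturbation` are illustrative, not from the paper.

```python
import numpy as np

def energy_score(w, x):
    # Toy in-distribution confidence score for a linear model:
    # higher means the model treats x as more "in-distribution".
    return x @ w

def worst_case_perturbation(w, x_ood, radius=0.5):
    # Inner (max) step of the min-max scheme, sketched for a linear
    # scorer: since (w + dw) @ x = w @ x + dw @ x, perturbing the
    # weights acts like an implicit transformation of the inputs.
    # Ascend the mean OOD score within an L2 ball of the given radius.
    grad = x_ood.mean(axis=0)  # gradient of the mean score w.r.t. w
    return radius * grad / (np.linalg.norm(grad) + 1e-12)

rng = np.random.default_rng(0)
w = rng.normal(size=4)                 # current model weights
x_ood = rng.normal(size=(16, 4))       # surrogate OOD batch

dw = worst_case_perturbation(w, x_ood)
# The perturbed model is (over)confident on the surrogate OOD data;
# the outer (min) step would then fine-tune against this worst case.
assert energy_score(w + dw, x_ood).mean() > energy_score(w, x_ood).mean()
```

In the full method, the outer step fine-tunes the model so that detection stays reliable even under such worst-case perturbations, giving more uniform performance on unseen OOD data.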

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Out-of-Distribution Detection | ImageNet-1k vs Curated OODs (avg.) | DOE | AUROC | 83.54 | #14 |
| | | | FPR95 | 59.83 | #14 |
| Out-of-Distribution Detection | ImageNet-1k vs iNaturalist | DOE | FPR95 | 55.87 | #20 |
| | | | AUROC | 85.98 | #20 |
| Out-of-Distribution Detection | ImageNet-1k vs Places | DOE | FPR95 | 67.84 | #19 |
| | | | AUROC | 83.05 | #17 |
| Out-of-Distribution Detection | ImageNet-1k vs SUN | DOE | FPR95 | 80.94 | #20 |
| | | | AUROC | 76.26 | #17 |
| Out-of-Distribution Detection | ImageNet-1k vs Textures | DOE | FPR95 | 34.67 | #12 |
| | | | AUROC | 88.90 | #17 |