Boosting Out-of-distribution Detection with Typical Features

Out-of-distribution (OOD) detection is a critical task for ensuring the reliability and safety of deep neural networks in real-world scenarios. Unlike most previous OOD detection methods, which focus on designing OOD scores or introducing diverse outlier examples to retrain the model, we examine the obstacles to OOD detection from the perspective of typicality and regard the high-probability region of the deep model's features as the feature's typical set. We propose to rectify features into this typical set and compute the OOD score on the typical features to achieve reliable uncertainty estimation. The feature rectification can be used as a plug-and-play module with various OOD scores. We evaluate the superiority of our method on both the commonly used CIFAR benchmark and the more challenging high-resolution ImageNet benchmark with a large label space. Notably, our approach outperforms state-of-the-art methods by up to 5.11% in average FPR95 on the ImageNet benchmark.
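Since no code is linked here, the idea can be illustrated with a minimal sketch. The snippet below assumes the rectification is a per-channel clamp of features into the interval [mu - lam*sigma, mu + lam*sigma], with mu and sigma taken from batch-norm statistics, and pairs it with an energy-based OOD score; `rectify_features`, `energy_ood_score`, and the hyperparameter `lam` are illustrative names and choices, not the authors' released API.

```python
import torch

def rectify_features(feats, mu, sigma, lam=1.0):
    """Truncate each feature channel into its high-probability
    ("typical") interval [mu - lam*sigma, mu + lam*sigma].

    mu, sigma: per-channel statistics, e.g. the running mean/std
    stored by the batch-norm layer that produced `feats`.
    lam: truncation width; an illustrative hyperparameter here.
    """
    return torch.clamp(feats, min=mu - lam * sigma, max=mu + lam * sigma)

@torch.no_grad()
def energy_ood_score(head, feats, mu, sigma, lam=1.0, T=1.0):
    """Negative free energy of the logits computed from rectified
    features; larger values indicate more in-distribution samples."""
    logits = head(rectify_features(feats, mu, sigma, lam))
    return T * torch.logsumexp(logits / T, dim=-1)

# Hypothetical usage with pooled penultimate features of shape (N, C):
# `head` would be the classifier of a ResNet-50 (model.fc), and mu / sigma
# the running statistics of the network's final batch-norm layer.
feats = torch.randn(4, 2048)
mu, sigma = torch.zeros(2048), torch.ones(2048)
head = torch.nn.Linear(2048, 1000)
scores = energy_ood_score(head, feats, mu, sigma, lam=1.0)
```

Because the clamp only modifies the features, the same rectification step could in principle precede other scoring functions (e.g. MSP or ODIN) instead of the energy score, which is what the abstract's plug-and-play claim refers to.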

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Out-of-Distribution Detection | ImageNet-1k vs Curated OODs (avg.) | BATS (ResNet-50) | AUROC | 94.28 | #6 |
| Out-of-Distribution Detection | ImageNet-1k vs Curated OODs (avg.) | BATS (ResNet-50) | FPR95 | 27.11 | #6 |
| Out-of-Distribution Detection | ImageNet-1k vs iNaturalist | BATS (ResNet-50) | AUROC | 97.67 | #7 |
| Out-of-Distribution Detection | ImageNet-1k vs Places | BATS (ResNet-50) | FPR95 | 34.34 | #3 |
| Out-of-Distribution Detection | ImageNet-1k vs Places | BATS (ResNet-50) | AUROC | 91.83 | #5 |
| Out-of-Distribution Detection | ImageNet-1k vs Textures | BATS (ResNet-50) | FPR95 | 38.9 | #15 |
| Out-of-Distribution Detection | ImageNet-1k vs Textures | BATS (ResNet-50) | AUROC | 92.27 | #12 |
