MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space

CVPR 2021 · Rui Huang, Yixuan Li

Detecting out-of-distribution (OOD) inputs is a central challenge for safely deploying machine learning models in the real world. Existing solutions are mainly driven by small datasets, with low resolution and very few class labels (e.g., CIFAR). As a result, OOD detection for large-scale image classification tasks remains largely unexplored. In this paper, we bridge this critical gap by proposing a group-based OOD detection framework, along with a novel OOD scoring function termed MOS. Our key idea is to decompose the large semantic space into smaller groups with similar concepts, which simplifies the decision boundaries between in- vs. out-of-distribution data for effective OOD detection. Our method scales substantially better to a high-dimensional class space than previous approaches. We evaluate models trained on ImageNet against four carefully curated OOD datasets, spanning diverse semantics. MOS establishes state-of-the-art performance, reducing the average FPR95 by 14.33% while achieving a 6x speedup in inference compared to the previous best method.
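As a rough illustration of the scoring rule described in the abstract, the sketch below computes a Minimum Others Score from per-group logits: each group head applies its own softmax over the group's classes plus an "others" category, and the OOD score is the smallest "others" probability across groups. The tensor shapes, the position of the "others" index, and the sign convention (higher = more OOD here; the paper negates this quantity so that higher means in-distribution) are assumptions for readability, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def mos_score(group_logits):
    """Minimal sketch of the Minimum Others Score (MOS).

    group_logits: list of tensors, one per group, each of shape
    (batch, num_classes_in_group + 1), where index 0 is assumed to be
    the group's "others" category.
    Returns a tensor of shape (batch,); higher values indicate the
    input is more likely out-of-distribution under this convention.
    """
    others_probs = []
    for logits in group_logits:
        probs = F.softmax(logits, dim=-1)   # group-wise softmax
        others_probs.append(probs[:, 0])    # "others" probability in this group
    # An in-distribution input drives "others" toward 0 in its own group,
    # so the minimum over groups stays low; an OOD input keeps "others"
    # high in every group, so the minimum stays high.
    return torch.stack(others_probs, dim=1).min(dim=1).values
```

In this convention, an input is flagged as OOD when the returned minimum "others" probability exceeds a chosen threshold.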


Results from the Paper


Task: Out-of-Distribution Detection · Model: MOS (BiT-S-R101x1)

Dataset                            | FPR95 (Global Rank) | AUROC (Global Rank)
ImageNet-1k vs Curated OODs (avg.) | 39.97 (#11)         | 90.11 (#11)
ImageNet-1k vs iNaturalist         |  9.28 (#3)          | 98.15 (#4)
ImageNet-1k vs Places              | 49.54 (#13)         | 89.06 (#11)
ImageNet-1k vs SUN                 | 40.63 (#10)         | 92.01 (#9)
ImageNet-1k vs Textures            | 60.43 (#23)         | 81.23 (#24)

Methods


No methods listed for this paper.