Sneakoscope: Revisiting Unsupervised Out-of-Distribution Detection

29 Sep 2021  ·  Tianji Cong, Atul Prakash ·

The problem of detecting out-of-distribution (OOD) examples in neural networks has been widely studied in the literature, with state-of-the-art techniques being supervised in that they require fine-tuning on OOD data to achieve high-quality OOD detection. Supervised OOD detection methods, however, come with notable disadvantages: they require expensive training on OOD data, careful curation of an OOD dataset that is distinguishable from the in-distribution data, and significant hyper-parameter tuning. In this work, we propose a unified evaluation suite, Sneakoscope, to revisit the problem with an in-depth exploration of unsupervised OOD detection. Surprisingly, we find that (1) model architecture plays a significant role in unsupervised OOD detection performance; (2) unsupervised approaches applied to large-scale pre-trained models can achieve competitive performance compared to their supervised counterparts; and (3) unsupervised OOD detection based on Mahalanobis distance over features of a pre-trained model consistently outperforms other unsupervised methods by a large margin and compares favorably with results from state-of-the-art supervised OOD detection methods reported in the literature. We thus provide new baselines for unsupervised OOD detection methods.
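The Mahalanobis-distance detector referenced in finding (3) can be sketched as follows: fit per-class feature means and a shared covariance on in-distribution features, then score a test input by its minimum class-conditional Mahalanobis distance. This is a minimal illustration using synthetic features in place of a pre-trained model's embeddings; all names, dimensions, and data here are invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for penultimate-layer features of a pre-trained
# model: two in-distribution "classes" drawn from separated Gaussians.
d = 8
feats = {
    0: rng.normal(loc=0.0, scale=1.0, size=(500, d)),
    1: rng.normal(loc=6.0, scale=1.0, size=(500, d)),
}

# Fit class means and a shared (tied) covariance over the
# in-distribution features -- no OOD data is needed at fit time.
means = {c: x.mean(axis=0) for c, x in feats.items()}
centered = np.vstack([x - means[c] for c, x in feats.items()])
cov = centered.T @ centered / centered.shape[0]
prec = np.linalg.inv(cov + 1e-6 * np.eye(d))  # regularized inverse

def ood_score(z):
    """Negative minimum class-conditional Mahalanobis distance.

    Higher score means the feature vector z looks more in-distribution.
    """
    dists = [(z - mu) @ prec @ (z - mu) for mu in means.values()]
    return -min(dists)

in_sample = rng.normal(loc=0.0, scale=1.0, size=d)    # near class 0
ood_sample = rng.normal(loc=20.0, scale=1.0, size=d)  # far from both

print(ood_score(in_sample) > ood_score(ood_sample))
```

In practice a threshold on this score separates in-distribution from OOD inputs; the method is unsupervised in the sense that only in-distribution features are used to fit the means and covariance.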


