RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection

6 Apr 2022  ·  Umar Khalid, Ashkan Esmaeili, Nazmul Karim, Nazanin Rahnavard

Recent studies have identified the detection and rejection of out-of-distribution (OOD) samples as a major challenge in the safe deployment of deep learning (DL) models. Ideally, a DL model should be confident only on in-distribution (ID) data, which is the driving principle of OOD detection. In this paper, we propose a simple yet effective generalized OOD detection method that does not rely on any out-of-distribution dataset. Our approach builds on self-supervised feature learning of the training samples, where the embeddings lie in a compact low-dimensional space. Motivated by recent studies showing that self-supervised adversarial contrastive learning helps robustify the model, we empirically show that a model pre-trained with self-supervised contrastive learning yields better uni-dimensional feature representations in the latent space. Our proposed method, referred to as RODD, outperforms state-of-the-art (SOTA) methods on an extensive suite of OOD detection benchmarks. On the CIFAR-100 benchmark, RODD achieves a 26.97% lower false-positive rate (FPR@95) than SOTA methods.
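The released code is not reproduced on this page, so the following is only a minimal sketch of the idea the abstract describes: each ID class is summarized by a single direction (the first singular vector) of its features from a contrastively pre-trained encoder, and a test sample is scored by its maximum cosine similarity to these directions. The NumPy-only implementation, function names, and thresholding convention are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of uni-dimensional class representations for OOD scoring.
# Assumes features come from a frozen, self-supervised contrastively
# pre-trained encoder; names and details are illustrative.
import numpy as np

def class_directions(features, labels, num_classes):
    """First singular vector (principal direction) of each ID class's features."""
    dirs = []
    for c in range(num_classes):
        F = features[labels == c]                   # (n_c, d) feature matrix for class c
        _, _, vt = np.linalg.svd(F, full_matrices=False)
        dirs.append(vt[0])                          # leading right singular vector (unit norm)
    return np.stack(dirs)                           # (num_classes, d)

def id_score(x_feat, dirs):
    """Higher score = more in-distribution (max cosine similarity to any class direction)."""
    x = x_feat / np.linalg.norm(x_feat)
    return float(np.max(dirs @ x))

# Usage sketch:
# dirs = class_directions(train_feats, train_labels, num_classes=100)
# score = id_score(test_feat, dirs)   # threshold, e.g., at the 95% ID true-positive rate
```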


Results from the Paper


 Ranked #1 on Out-of-Distribution Detection on cifar100 (using extra training data)

Task                           Dataset    Model             Metric   Value   Global Rank
Out-of-Distribution Detection  cifar10    WideResNet 40     AUROC    99.3    #1
Out-of-Distribution Detection  CIFAR10    Wide ResNet 40x2  AUROC    99.3    #1
Out-of-Distribution Detection  CIFAR-10   Wide ResNet 40x2  FPR95    3.87    #4
                                                            AUROC    99.43   #5
Out-of-Distribution Detection  cifar100   Wide ResNet 40x2  AUROC    95.76   #1
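For reference, the two metrics in the table, AUROC and FPR95 (the false-positive rate on OOD samples at a 95% true-positive rate on ID samples), can be computed from per-sample scores roughly as in the sketch below; the convention that higher scores mean "more in-distribution" is an assumption.

```python
# Hedged sketch of the reported metrics; assumes higher score = more ID.
import numpy as np
from sklearn.metrics import roc_auc_score

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR on OOD samples at the threshold that keeps 95% of ID samples."""
    threshold = np.percentile(id_scores, 5)          # 95% of ID scores lie above this
    return float(np.mean(np.asarray(ood_scores) >= threshold))

def auroc(id_scores, ood_scores):
    """Area under the ROC curve for separating ID (label 1) from OOD (label 0)."""
    labels = np.concatenate([np.ones(len(id_scores)), np.zeros(len(ood_scores))])
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores)
```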
