Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images

Due to the intractability of characterizing everything that looks unlike the normal data, anomaly detection (AD) is traditionally treated as an unsupervised problem using only normal samples. Recently, however, it has been found that unsupervised image AD can be drastically improved by using huge corpora of random images to represent anomalousness, a technique known as Outlier Exposure (OE). In this paper we show that specialized AD learning methods seem unnecessary for state-of-the-art performance, and furthermore that strong performance can be achieved with just a small collection of Outlier Exposure data, contradicting common assumptions in the field of AD. We find that standard classifiers and semi-supervised one-class methods trained to discern between normal samples and relatively few random natural images outperform the current state of the art on an established AD benchmark with ImageNet. Further experiments reveal that even one well-chosen outlier sample is sufficient for decent performance on this benchmark (79.3% AUC). We investigate this phenomenon and find that one-class methods are more robust to the choice of training outliers, indicating scenarios in which they remain more useful than standard classifiers. Additionally, we include experiments that delineate the scenarios in which our results hold. Lastly, no training samples at all are necessary when one uses the representations learned by CLIP, a recent foundation model: in a zero-shot setting, CLIP achieves state-of-the-art AD results on CIFAR-10 and ImageNet.
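At its core, the Outlier Exposure setup described above amounts to a binary classifier trained with binary cross-entropy (BCE) to separate normal samples from a handful of auxiliary outliers, with the classifier's outlier probability serving as the anomaly score. A minimal sketch on toy 2-D features (the data, dimensions, and optimizer settings are illustrative placeholders, not the paper's code or architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for image features: "normal" samples cluster near the
# origin, while a small handful of Outlier Exposure samples lie elsewhere.
normal = rng.normal(loc=0.0, scale=0.5, size=(200, 2))
outliers = rng.normal(loc=3.0, scale=0.5, size=(8, 2))  # few OE images

X = np.vstack([normal, outliers])
y = np.concatenate([np.zeros(len(normal)), np.ones(len(outliers))])

# Logistic regression trained with BCE via plain gradient descent --
# the "standard classifier" baseline the paper finds surprisingly strong.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad_z = (p - y) / len(y)   # gradient of mean BCE w.r.t. the logits
    w -= lr * (X.T @ grad_z)
    b -= lr * grad_z.sum()

def anomaly_score(x):
    """Higher score = more anomalous (closer to the outlier class)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

print(anomaly_score(np.array([0.0, 0.0])))  # normal-looking point: low score
print(anomaly_score(np.array([3.0, 3.0])))  # outlier-looking point: high score
```

The point of the sketch is that nothing AD-specific is required: the anomaly score is just the classifier's predicted outlier probability, trained against very few exposure samples.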


Results from the Paper


Ranked #1 on Anomaly Detection on One-class CIFAR-10 (using extra training data).

Task               Dataset                          Model                      Metric  Value  Global Rank
Anomaly Detection  Leave-One-Class-Out CIFAR-10     DSVDD                      AUROC   52.2   #6
Anomaly Detection  Leave-One-Class-Out CIFAR-10     CLIP (zero shot)           AUROC   92.2   #2
Anomaly Detection  Leave-One-Class-Out CIFAR-10     HSC                        AUROC   84.8   #4
Anomaly Detection  Leave-One-Class-Out CIFAR-10     DSAD                       AUROC   84.2   #5
Anomaly Detection  Leave-One-Class-Out CIFAR-10     BCE-CLIP                   AUROC   98.4   #1
Anomaly Detection  Leave-One-Class-Out CIFAR-10     Binary Cross Entropy (OE)  AUROC   86.6   #3
Anomaly Detection  Leave-One-Class-Out ImageNet-30  Binary Cross Entropy (OE)  AUROC   88.2   #5
Anomaly Detection  Leave-One-Class-Out ImageNet-30  BCE-CLIP (OE)              AUROC   99.3   #1
Anomaly Detection  Leave-One-Class-Out ImageNet-30  DSVDD                      AUROC   49.7   #6
Anomaly Detection  Leave-One-Class-Out ImageNet-30  CLIP (zero shot)           AUROC   97.8   #2
Anomaly Detection  Leave-One-Class-Out ImageNet-30  DSAD                       AUROC   88.8   #3
Anomaly Detection  Leave-One-Class-Out ImageNet-30  HSC (OE)                   AUROC   88.3   #4
Anomaly Detection  One-class CIFAR-10               CLIP (OE)                  AUROC   99.6   #1
Anomaly Detection  One-class CIFAR-10               CLIP (zero shot)           AUROC   98.5   #5
Anomaly Detection  One-class ImageNet-30            CLIP (zero shot)           AUROC   99.88  #2
Anomaly Detection  One-class ImageNet-30            BCE-CLIP (OE)              AUROC   99.90  #1
Anomaly Detection  One-class ImageNet-30            Binary Cross Entropy (OE)  AUROC   97.7   #3
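For the CLIP (zero shot) results, the abstract notes that no training samples are needed when using CLIP's learned representations. One plausible reading of such a zero-shot scorer is sketched below with random placeholder vectors standing in for CLIP embeddings (the prompts, dimensions, and temperature here are assumptions for illustration, not the paper's exact recipe): embed a text prompt for every class, then score a test image by how little probability a softmax over image-text cosine similarities assigns to the normal class.

```python
import numpy as np

rng = np.random.default_rng(1)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Placeholder stand-ins for CLIP embeddings; in practice these would come
# from CLIP's text encoder applied to prompts like "a photo of a <class c>"
# and its image encoder applied to the test image.
dim = 16
text_emb = normalize(rng.normal(size=(10, dim)))  # one prompt embedding per class
normal_class = 3

# A test image whose embedding lies near the normal class's prompt...
img_normal = normalize(text_emb[normal_class] + 0.1 * rng.normal(size=dim))
# ...and one near a different class's prompt (an anomaly for this one-class task).
img_anom = normalize(text_emb[7] + 0.1 * rng.normal(size=dim))

def zero_shot_anomaly_score(img, text_emb, normal_class, temp=100.0):
    """Softmax over cosine similarities to all class prompts; the anomaly
    score is one minus the probability assigned to the normal class."""
    logits = temp * (text_emb @ img)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return 1.0 - probs[normal_class]

print(zero_shot_anomaly_score(img_normal, text_emb, normal_class))  # near 0
print(zero_shot_anomaly_score(img_anom, text_emb, normal_class))    # near 1
```

Since both encoders are frozen and the prompts require only the class name, no normal or outlier training images are ever touched, which is what makes the setting zero-shot.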

Methods