Out-of-Distribution Detection
145 papers with code • 40 benchmarks • 15 datasets
Detect out-of-distribution or anomalous examples.
Most implemented papers
CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers.
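A minimal sketch of the CutMix idea (function name and defaults are illustrative, not the authors' reference code): a random patch from a shuffled copy of the batch is pasted into each image, and the labels are mixed in proportion to the patch area.

```python
import torch

def cutmix(images, labels, alpha=1.0):
    """Paste a random patch from a shuffled copy of the batch into each image
    and mix labels according to the (clipped) patch area ratio."""
    batch_size, _, height, width = images.shape
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(batch_size)

    # Sample a box whose area is roughly (1 - lam) of the image.
    cut_h = int(height * (1 - lam) ** 0.5)
    cut_w = int(width * (1 - lam) ** 0.5)
    cy = torch.randint(height, (1,)).item()
    cx = torch.randint(width, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, height)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, width)

    mixed = images.clone()
    mixed[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    # Recompute lam from the actual patch area after clipping at the borders.
    lam = 1 - ((y2 - y1) * (x2 - x1) / (height * width))
    return mixed, labels, labels[perm], lam
```

Training then uses a mixed loss, e.g. `lam * ce(logits, y_a) + (1 - lam) * ce(logits, y_b)`.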
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution.
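The baseline score from this paper is the maximum softmax probability (MSP): low confidence flags inputs that are likely misclassified or out-of-distribution. A minimal sketch; the threshold below is an illustrative placeholder that would normally be tuned on validation data.

```python
import torch
import torch.nn.functional as F

def max_softmax_score(logits):
    """Maximum softmax probability per example; low scores suggest the input
    may be misclassified or out-of-distribution."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

logits = torch.randn(4, 10)              # stand-in for model(x)
is_ood = max_softmax_score(logits) < 0.5  # illustrative threshold
```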
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets.
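ODIN combines temperature scaling with a small input perturbation that increases the softmax confidence of in-distribution inputs. A hedged sketch, assuming a standard PyTorch classifier; the hyperparameters and the lack of per-channel gradient normalization are simplifications.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """ODIN confidence: perturb the input to raise the max-class softmax
    probability, then score with temperature-scaled softmax."""
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    # Gradient of the negative max-class log-softmax w.r.t. the input.
    loss = F.cross_entropy(logits, logits.argmax(dim=-1))
    loss.backward()
    x_perturbed = x - epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_perturbed) / temperature, dim=-1)
    return probs.max(dim=-1).values
```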
Deep Anomaly Detection with Outlier Exposure
We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
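Outlier Exposure trains the classifier with an auxiliary dataset of outliers, adding a loss term that pushes predictions on those outliers toward the uniform distribution. A minimal sketch of that objective; treat the weight and the exact formulation as illustrative.

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, targets_in, logits_out, oe_weight=0.5):
    """Cross-entropy on in-distribution data plus a term encouraging a
    uniform predictive distribution on auxiliary outlier data."""
    ce = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy to the uniform distribution = mean of -log p over classes.
    oe = -F.log_softmax(logits_out, dim=-1).mean()
    return ce + oe_weight * oe
```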
Likelihood Ratios for Out-of-Distribution Detection
We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
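The score is the log-likelihood ratio between the full generative model and a background model trained on perturbed inputs, which cancels out population-level background statistics. A minimal sketch, assuming both models expose a `log_prob` method (an assumed interface, not a fixed API).

```python
import torch

def likelihood_ratio_score(full_model, background_model, x):
    """Score = log p_full(x) - log p_background(x); higher values indicate
    inputs whose semantic content is well explained by the full model."""
    with torch.no_grad():
        return full_model.log_prob(x) - background_model.log_prob(x)
```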
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices
We find that characterizing activity patterns by Gram matrices and identifying anomalies in Gram matrix values can yield high OOD detection rates.
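A hedged sketch of the Gram-matrix idea: compute channel-wise Gram matrices of intermediate activations and measure how far their entries fall outside the per-class ranges recorded on training data. The deviation measure below is a simplification of the paper's layer- and order-wise normalization.

```python
import torch

def gram_matrix(features):
    """Channel-by-channel Gram matrix of a conv feature map (N, C, H, W)."""
    n, c, h, w = features.shape
    flat = features.reshape(n, c, h * w)
    return torch.bmm(flat, flat.transpose(1, 2)) / (h * w)

def gram_deviation(features, train_min, train_max):
    """Total amount by which Gram entries fall outside the [min, max] range
    seen on training data; larger deviations suggest OOD inputs."""
    g = gram_matrix(features)
    below = (train_min - g).clamp(min=0)
    above = (g - train_max).clamp(min=0)
    return (below + above).flatten(1).sum(dim=1)
```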
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
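The confidence score in this framework is based on the Mahalanobis distance to the closest class-conditional Gaussian fitted on intermediate features. A minimal sketch of that score; feature extraction, the tied covariance estimate, and the layer ensembling are omitted here.

```python
import torch

def mahalanobis_score(features, class_means, precision):
    """Negative squared Mahalanobis distance to the nearest class mean,
    using a tied precision (inverse covariance) matrix fitted on training
    features; higher scores indicate more in-distribution inputs."""
    # features: (N, D), class_means: (K, D), precision: (D, D)
    diffs = features.unsqueeze(1) - class_means.unsqueeze(0)       # (N, K, D)
    dists = torch.einsum('nkd,de,nke->nk', diffs, precision, diffs)
    return -dists.min(dim=1).values
```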
Hierarchical VAEs Know What They Don't Know
Deep generative models have been demonstrated as state-of-the-art density estimators.
A Flexible and Adaptive Framework for Abstention Under Class Imbalance
Comparatively little attention has been given to metrics such as area-under-the-curve or Cohen's Kappa, which are extremely relevant for imbalanced datasets.
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
Self-supervision provides effective representations for downstream tasks without requiring labels.