Out-of-Distribution Detection

145 papers with code • 40 benchmarks • 15 datasets

The task is to detect whether a test example is out-of-distribution, i.e. drawn from a different distribution than the model's training data, or otherwise anomalous.

Most implemented papers

CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features

clovaai/CutMix-PyTorch ICCV 2019

Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers.
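CutMix replaces regional dropout with a cut-and-paste augmentation: a patch from one training image is pasted into another, and the labels are mixed in proportion to the patch area. A minimal sketch of that idea, using nested lists as stand-in images and a fixed mixing ratio (the paper samples it from a Beta distribution):

```python
import random

def cutmix(img_a, img_b, label_a, label_b, lam):
    """Paste a box from img_b into img_a; mix labels by actual pixel ratio.

    Images are HxW nested lists, labels are one-hot lists, and `lam` is the
    target share kept from img_a (sampled from Beta(alpha, alpha) in the paper).
    """
    h, w = len(img_a), len(img_a[0])
    # Box dimensions so the box covers roughly a (1 - lam) fraction of pixels
    cut_h = int(h * (1 - lam) ** 0.5)
    cut_w = int(w * (1 - lam) ** 0.5)
    y0 = random.randrange(h - cut_h + 1)
    x0 = random.randrange(w - cut_w + 1)
    mixed = [row[:] for row in img_a]
    for y in range(y0, y0 + cut_h):
        for x in range(x0, x0 + cut_w):
            mixed[y][x] = img_b[y][x]
    # Recompute lambda from the actual box area, as in the paper
    lam_adj = 1 - (cut_h * cut_w) / (h * w)
    label = [lam_adj * a + (1 - lam_adj) * b for a, b in zip(label_a, label_b)]
    return mixed, label
```

In practice this is applied per batch on tensors; the sketch only shows the patch-and-mix logic.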

A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks

hendrycks/error-detection 7 Oct 2016

We consider the two related problems of detecting if an example is misclassified or out-of-distribution.
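The baseline proposed here scores each example by its maximum softmax probability (MSP): correctly classified in-distribution inputs tend to receive higher maximum probabilities than misclassified or out-of-distribution ones, so thresholding the score gives a detector. A minimal sketch:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def msp_score(logits):
    # Maximum softmax probability: higher values suggest in-distribution
    return max(softmax(logits))

confident = msp_score([8.0, 0.5, 0.1])  # peaked logits
diffuse = msp_score([1.1, 1.0, 0.9])    # near-uniform logits
```

A detector then flags inputs whose score falls below a threshold chosen on validation data.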

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks

facebookresearch/odin ICLR 2018

We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets.
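ODIN enhances the softmax baseline with two components: temperature scaling of the logits and a small gradient-based input perturbation. The sketch below shows only the temperature-scaled score; the perturbation step needs automatic differentiation and is noted in a comment:

```python
import math

def odin_score(logits, temperature=1000.0):
    """Temperature-scaled maximum softmax probability.

    Full ODIN also perturbs the input by a small step in the direction that
    increases the softmax score (computed via backprop), which is omitted
    in this dependency-free sketch.
    """
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    return max(exps) / sum(exps)
```

With temperature 1 this reduces to the plain maximum softmax probability; large temperatures were reported to widen the score gap between in- and out-of-distribution inputs.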

Deep Anomaly Detection with Outlier Exposure

hendrycks/outlier-exposure ICLR 2019

We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
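Outlier Exposure trains the classifier on an auxiliary dataset of outliers with an extra loss term that pushes predictions on those outliers toward the uniform distribution. A toy sketch of that combined objective, with a hypothetical weighting parameter `lam`:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def oe_loss(logits_in, target, logits_out, lam=0.5):
    # Standard cross-entropy on the in-distribution example...
    ce = -math.log(softmax(logits_in)[target])
    # ...plus cross-entropy to the uniform distribution on the auxiliary
    # outlier, penalizing confident predictions on outliers
    p_out = softmax(logits_out)
    uniform_ce = -sum(math.log(p) for p in p_out) / len(p_out)
    return ce + lam * uniform_ce
```

The outlier term is minimized when the model predicts uniformly on outliers, so confident predictions on the auxiliary data raise the loss.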

Likelihood Ratios for Out-of-Distribution Detection

google-research/google-research NeurIPS 2019

We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
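The score is a log-likelihood ratio between the full generative model and a background model (trained on perturbed inputs), so that background statistics shared by both models cancel. A toy illustration with independent Bernoulli pixel models standing in for the deep generative models of the paper:

```python
import math

def bernoulli_log_lik(x, probs):
    # Log-likelihood of binary pixels under independent Bernoulli models
    return sum(math.log(p if xi else 1.0 - p) for xi, p in zip(x, probs))

def llr_score(x, model_probs, background_probs):
    # Likelihood ratio: subtracting the background model's log-likelihood
    # removes the contribution of shared background statistics
    return bernoulli_log_lik(x, model_probs) - bernoulli_log_lik(x, background_probs)
```

In the paper both models are deep generative models; only the subtraction of log-likelihoods carries over from this sketch.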

Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices

VectorInstitute/gram-ood-detection 28 Dec 2019

We find that characterizing activity patterns by Gram matrices and identifying anomalies in Gram matrix values can yield high OOD detection rates.
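The method computes Gram matrices of intermediate feature maps, records the per-entry minimum and maximum over the training data, and scores a test input by how far its Gram values fall outside those ranges. A minimal sketch of the two pieces:

```python
def gram_matrix(features):
    # features: C channel vectors, each a flattened feature map of length L
    c = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(c)] for i in range(c)]

def deviation(value, lo, hi, eps=1e-6):
    # Deviation of a test-time Gram value from the [lo, hi] range observed
    # on training data; zero inside the range, scaled distance outside it
    if value < lo:
        return (lo - value) / (abs(lo) + eps)
    if value > hi:
        return (value - hi) / (abs(hi) + eps)
    return 0.0
```

The full method sums such deviations over layers (and higher-order Gram matrices); only the per-value scoring rule is shown here.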

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

pokaxpoka/deep_Mahalanobis_detector NeurIPS 2018

Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
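The detector fits class-conditional Gaussians with a shared covariance to intermediate features and scores a test sample by its Mahalanobis distance to the closest class mean. A simplified sketch assuming a diagonal shared covariance (the paper uses the full tied covariance of deep features):

```python
def mahalanobis_score(x, class_means, shared_var):
    """Negative squared Mahalanobis distance to the nearest class mean.

    x and each mean are feature vectors; shared_var is a diagonal
    approximation of the tied covariance (a simplifying assumption here).
    Higher scores suggest in-distribution.
    """
    def dist2(mu):
        return sum((xi - mi) ** 2 / vi
                   for xi, mi, vi in zip(x, mu, shared_var))
    return -min(dist2(mu) for mu in class_means)
```

Thresholding this score gives the detector; the paper additionally ensembles scores across layers and adds an input perturbation step.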

Hierarchical VAEs Know What They Don't Know

vlievin/biva-pytorch 16 Feb 2021

Deep generative models have been demonstrated as state-of-the-art density estimators.

A Flexible and Adaptive Framework for Abstention Under Class Imbalance

blindauth/abstention 20 Feb 2018

Comparatively little attention has been given to metrics such as area-under-the-curve or Cohen's Kappa, which are extremely relevant for imbalanced datasets.
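Cohen's Kappa corrects observed agreement for the agreement expected by chance, which is what makes it informative under class imbalance (where raw accuracy can look high for a trivial majority-class predictor). A small sketch of the standard formula:

```python
def cohens_kappa(y_true, y_pred):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_e = sum((y_true.count(lab) / n) * (y_pred.count(lab) / n)
              for lab in labels)
    return (p_o - p_e) / (1 - p_e)
```

A majority-class predictor on imbalanced labels gets kappa 0 despite high accuracy, illustrating why the metric matters here.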

Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty

hendrycks/ss-ood NeurIPS 2019

Self-supervision provides effective representations for downstream tasks without requiring labels.