Out-of-Distribution Detection

318 papers with code • 50 benchmarks • 22 datasets

Detect out-of-distribution or anomalous examples.


Most implemented papers

CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features

clovaai/CutMix-PyTorch ICCV 2019

Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers; rather than simply removing pixels, CutMix fills the removed regions with patches from other training images and mixes the ground-truth labels in proportion to patch area.
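A minimal sketch of the augmentation, assuming NCHW image batches and a hypothetical `cutmix` helper (the released repo handles further details such as a CutMix probability per batch):

```python
import numpy as np
import torch

def cutmix(images, labels, alpha=1.0):
    """CutMix sketch: paste a random patch from a shuffled copy of the
    batch and mix the labels by the pasted area ratio (NCHW input)."""
    images = images.clone()
    lam = np.random.beta(alpha, alpha)           # area fraction kept from the original
    perm = torch.randperm(images.size(0))        # partner image per sample
    _, _, h, w = images.shape
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)  # exact pasted fraction
    return images, labels, labels[perm], lam
```

The training loss is then mixed accordingly, e.g. `lam * F.cross_entropy(out, y_a) + (1 - lam) * F.cross_entropy(out, y_b)`.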

A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks

hendrycks/error-detection 7 Oct 2016

We consider the two related problems of detecting whether an example is misclassified or out-of-distribution, and show that the maximum softmax probability provides a surprisingly strong baseline.
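A minimal sketch of the maximum-softmax-probability baseline, assuming a PyTorch classifier `model` that returns logits:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    """Maximum softmax probability: higher scores suggest in-distribution;
    flag inputs whose score falls below a validation-chosen threshold."""
    probs = F.softmax(model(x), dim=1)
    return probs.max(dim=1).values
```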

Deep Anomaly Detection with Outlier Exposure

hendrycks/outlier-exposure ICLR 2019

We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
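A hedged sketch of the Outlier Exposure objective for classifiers: standard cross-entropy on in-distribution data plus a term that pushes predictions on auxiliary outliers toward the uniform distribution (the paper reports a weight around 0.5):

```python
import torch.nn.functional as F

def oe_loss(logits_in, targets_in, logits_out, lam=0.5):
    """In-distribution cross-entropy plus a uniformity term on
    auxiliary outlier logits."""
    ce = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy to the uniform distribution, up to an additive constant.
    uniform_ce = -F.log_softmax(logits_out, dim=1).mean(dim=1).mean()
    return ce + lam * uniform_ce
```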

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks

facebookresearch/odin ICLR 2018

We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets.
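A simplified sketch of ODIN's two ingredients, temperature scaling and a small input perturbation; the released code additionally normalizes the gradient per channel, which this sketch omits:

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eps=0.0014):
    """ODIN sketch: take a small gradient step that increases the
    temperature-scaled max softmax, then re-score the perturbed input."""
    x = x.clone().requires_grad_(True)
    scores = F.log_softmax(model(x) / T, dim=1).max(dim=1).values
    scores.sum().backward()
    x_pert = (x + eps * x.grad.sign()).detach()  # ascent step on the score
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=1)
    return probs.max(dim=1).values               # higher = more in-distribution
```

In-distribution inputs respond more strongly to the perturbation, which widens the gap between ID and OOD scores.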

Likelihood Ratios for Out-of-Distribution Detection

google-research/google-research NeurIPS 2019

We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
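A rough sketch of the approach: train a second "background" model on corrupted inputs and score by the likelihood ratio. The corruption helper below is an illustrative assumption for discrete inputs, not the paper's exact procedure:

```python
import torch

def perturb(x, mu=0.1, num_vals=256):
    """Illustrative corruption: replace each entry with a random value
    with probability mu; the background model is trained on such inputs."""
    mask = torch.rand(x.shape) < mu
    noise = torch.randint_like(x, num_vals)
    return torch.where(mask, noise, x)

def llr_score(log_p_full, log_p_background):
    """Background statistics shared by ID and OOD inputs cancel in the
    ratio, leaving the semantic component to drive detection."""
    return log_p_full - log_p_background
```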

Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices

VectorInstitute/gram-ood-detection 28 Dec 2019

We find that characterizing activity patterns by Gram matrices and identifying anomalies in Gram matrix values can yield high OOD detection rates.
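A condensed sketch of the scoring idea, assuming convolutional feature maps and per-layer min/max Gram statistics collected on training data (the paper also uses higher-order Gram matrices and per-class statistics, omitted here):

```python
import torch

def gram(feats):
    """First-order Gram matrix of a conv feature map: pairwise channel
    correlations, shape (B, C, C)."""
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (h * w)

def deviation(gram_vals, mins, maxs, eps=1e-6):
    """Total normalized deviation from the [min, max] Gram ranges seen
    on training data; larger values are more anomalous."""
    below = torch.clamp(mins - gram_vals, min=0) / (mins.abs() + eps)
    above = torch.clamp(gram_vals - maxs, min=0) / (maxs.abs() + eps)
    return (below + above).sum(dim=(1, 2))
```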

Energy-based Out-of-distribution Detection

wetliu/energy_ood NeurIPS 2020

We propose a unified framework for OOD detection that uses an energy score.
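A minimal sketch of the energy score, assuming a PyTorch classifier returning logits; inputs with energy above a chosen threshold are flagged as OOD:

```python
import torch

@torch.no_grad()
def energy_score(model, x, T=1.0):
    """E(x) = -T * logsumexp(f(x) / T); in-distribution inputs tend to
    have lower energy than OOD inputs."""
    return -T * torch.logsumexp(model(x) / T, dim=1)
```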

Learning Confidence for Out-of-Distribution Detection in Neural Networks

uoguelph-mlrg/confidence_estimation 13 Feb 2018

Modern neural networks are very powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong.
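A hedged sketch of the paper's training objective, assuming the network emits class probabilities plus a scalar confidence per example; the paper anneals the penalty weight with a budget mechanism, simplified here to a constant `lam`:

```python
import torch
import torch.nn.functional as F

def confidence_loss(probs, confidence, targets_onehot, lam=0.1):
    """Blend predictions toward the target in proportion to (1 - c), so
    the network pays a -log(c) penalty whenever it asks for 'hints'."""
    c = confidence.clamp(1e-6, 1.0)              # shape (B, 1)
    adjusted = c * probs + (1 - c) * targets_onehot
    task_loss = F.nll_loss(torch.log(adjusted), targets_onehot.argmax(dim=1))
    return task_loss + lam * (-torch.log(c)).mean()
```

At test time, the learned confidence `c` itself serves as the OOD score.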

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

pokaxpoka/deep_Mahalanobis_detector NeurIPS 2018

Detecting test samples drawn sufficiently far from the training distribution, whether statistically or adversarially, is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
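A minimal sketch of the Mahalanobis confidence score, assuming class means and a shared precision matrix estimated from penultimate-layer training features (the paper additionally ensembles scores across layers and applies input perturbation):

```python
import torch

def mahalanobis_score(feats, class_means, precision):
    """Negative squared Mahalanobis distance to the nearest
    class-conditional Gaussian; higher = more in-distribution."""
    dists = []
    for mu in class_means:                       # one (D,) mean per class
        d = feats - mu                           # (B, D)
        dists.append((d @ precision * d).sum(dim=1))
    return -torch.stack(dists, dim=1).min(dim=1).values
```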

Probabilistic Autoencoder

VMBoehm/PAE Under review 2020

The PAE is fast and easy to train and achieves small reconstruction errors, high sample quality, and good performance in downstream tasks.
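A speculative sketch of how such a model could score anomalies, assuming hypothetical `encoder`, `decoder`, and `flow` modules where the flow exposes a `log_prob` method; the repo's actual detection criterion may differ:

```python
import torch

def pae_ood_score(x, encoder, decoder, flow):
    """Speculative PAE-style score: reconstruction error plus the
    negative log-likelihood of the latent under a normalizing flow.
    `flow.log_prob` is an assumed interface, not the repo's API."""
    z = encoder(x)
    recon_err = ((x - decoder(z)) ** 2).flatten(1).sum(dim=1)
    return recon_err - flow.log_prob(z)
```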