Out-of-Distribution (OOD) Detection

233 papers with code • 3 benchmarks • 9 datasets

Out-of-Distribution (OOD) Detection is the task of detecting instances that do not belong to the distribution the classifier has been trained on. OOD data is often referred to as "unseen" data, since the model has not encountered it during training.

OOD detection is typically performed by training a model to distinguish between in-distribution (ID) data, which the model has seen during training, and OOD data, which it has not. This can be done with a variety of techniques, such as training a separate OOD detector or modifying the model's architecture or loss function to make it more sensitive to OOD inputs.
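
As a concrete reference point, here is a minimal sketch of the widely used maximum softmax probability (MSP) baseline, which scores an input by the confidence of the classifier's top prediction. The `model` stands in for any trained classifier, and the `threshold` value is illustrative; in practice it would be tuned on held-out in-distribution data:

```python
import torch
import torch.nn.functional as F

def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability: higher values suggest in-distribution inputs."""
    with torch.no_grad():
        logits = model(x)                  # (batch, num_classes)
        probs = F.softmax(logits, dim=-1)
    return probs.max(dim=-1).values        # (batch,)

def flag_ood(model: torch.nn.Module, x: torch.Tensor,
             threshold: float = 0.9) -> torch.Tensor:
    """Flag inputs whose MSP falls below a threshold tuned on ID validation data."""
    return msp_score(model, x) < threshold  # True = flagged as OOD
```

Many of the methods listed below can be read as refinements of this recipe: they replace the MSP score with a better-calibrated one, or reshape the training objective so that ID and OOD scores separate more cleanly.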

Most implemented papers

Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers

YU1ut/Ensemble-of-Leave-out-Classifiers ECCV 2018

In conjunction with the standard cross-entropy loss, we minimize a novel loss to train an ensemble of classifiers.

WAIC, but Why? Generative Ensembles for Robust Anomaly Detection

ericjang/odin 2 Oct 2018

Machine learning models encounter Out-of-Distribution (OoD) errors when the data seen at test time are generated from a different stochastic generator than the one used to generate the training data.
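
The paper's titular WAIC score turns an ensemble of generative models into an anomaly detector by penalizing inputs whose likelihood the ensemble disagrees on. A minimal sketch, assuming you have already computed per-model log-likelihoods for each input:

```python
import numpy as np

def waic_score(log_likelihoods: np.ndarray) -> np.ndarray:
    """WAIC(x) = mean_theta[log p_theta(x)] - var_theta[log p_theta(x)].

    log_likelihoods has shape (n_models, n_inputs): one log-likelihood per
    ensemble member per input. Lower scores suggest OOD inputs, i.e. low
    and/or high-variance likelihood across the ensemble.
    """
    return log_likelihoods.mean(axis=0) - log_likelihoods.var(axis=0)
```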

Analysis of Confident-Classifiers for Out-of-distribution Detection

sverneka/ConfidentClassifierICLR19 27 Apr 2019

Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution).

Contextual Out-of-Domain Utterance Handling With Counterfeit Data Augmentation

sungjinl/icassp2019-ood-dataset 24 May 2019

Neural dialog models often lack robustness to anomalous user input and produce inappropriate responses, which leads to a frustrating user experience.

Outlier Exposure with Confidence Control for Out-of-Distribution Detection

nazim1021/OOD-detection-using-OECC 8 Jun 2019

Deep neural networks have achieved great success in classification tasks in recent years.

Detecting semantic anomalies

Faruk-Ahmed/detecting_semantic_anomalies 13 Aug 2019

We critically appraise the recent interest in out-of-distribution (OOD) detection and question the practical relevance of existing benchmarks.

Unsupervised Out-of-Distribution Detection by Maximum Classifier Discrepancy

Mephisto405/Unsupervised-Out-of-Distribution-Detection-by-Maximum-Classifier-Discrepancy ICCV 2019

Unlike previous methods, we also utilize unlabeled data for unsupervised training: we use it to maximize the discrepancy between the decision boundaries of two classifiers, pushing OOD samples outside the manifold of in-distribution (ID) samples, which enables us to detect OOD samples that lie far from the support of the ID samples.
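
A hedged sketch of the resulting test-time score: two classifier heads share one feature extractor, and their disagreement flags OOD inputs. The names here are illustrative rather than taken from the paper's repo, and the L1 distance between softmax posteriors is one common choice of discrepancy measure:

```python
import torch
import torch.nn.functional as F

def discrepancy_score(backbone, head1, head2, x):
    """L1 discrepancy between two classifier heads on a shared feature extractor.

    After training encourages the heads to disagree on unlabeled data while
    agreeing on labeled ID data, a large discrepancy flags an input as OOD.
    """
    with torch.no_grad():
        z = backbone(x)
        p1 = F.softmax(head1(z), dim=-1)
        p2 = F.softmax(head2(z), dim=-1)
    return (p1 - p2).abs().sum(dim=-1)  # per-input discrepancy score
```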

Isotropy Maximization Loss and Entropic Score: Accurate, Fast, Efficient, Scalable, and Turnkey Neural Networks Out-of-Distribution Detection Based on The Principle of Maximum Entropy

dlmacedo/entropic-out-of-distribution-detection 15 Aug 2019

Consequently, we propose IsoMax, a loss that is isotropic (distance-based) and produces high entropy (low confidence) posterior probability distributions despite still relying on cross-entropy minimization.
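
A simplified sketch of the two ingredients named in the title: distance-based logits computed against learnable class prototypes, and an entropic score for detection. Details such as the paper's entropic scale and its prototype initialization are omitted here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IsoMaxHead(nn.Module):
    """Isotropic output layer: logits are negative Euclidean distances from
    the feature vector to learnable class prototypes (simplified sketch)."""
    def __init__(self, num_features: int, num_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, num_features))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # (batch, num_features) vs. (num_classes, num_features)
        # -> (batch, num_classes) distance-based logits
        return -torch.cdist(features, self.prototypes)

def entropic_score(logits: torch.Tensor) -> torch.Tensor:
    """Negative Shannon entropy of the posterior: lower values (higher
    entropy) suggest OOD inputs, higher values suggest ID inputs."""
    log_probs = F.log_softmax(logits, dim=-1)
    return (log_probs.exp() * log_probs).sum(dim=-1)
```

Per the snippet above, training still minimizes standard cross-entropy, just over these distance-based logits instead of a linear layer's outputs.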

Out-of-Domain Detection for Low-Resource Text Classification Tasks

SLAD-ml/few-shot-ood IJCNLP 2019

Out-of-domain (OOD) detection for low-resource text classification is a realistic but understudied task.

Out-of-domain Detection for Natural Language Understanding in Dialog Systems

silverriver/ood4nlu 9 Sep 2019

We also demonstrate that the effectiveness of these pseudo-OOD data can be further improved by efficiently utilizing unlabeled data.