Out of Distribution (OOD) Detection
233 papers with code • 3 benchmarks • 9 datasets
Out of Distribution (OOD) Detection is the task of detecting instances that do not belong to the distribution the classifier has been trained on. OOD data is often referred to as "unseen" data, as the model has not encountered it during training.
OOD detection is typically performed by training a model to distinguish between in-distribution (ID) data, which the model has seen during training, and OOD data, which it has not. This can be done with a variety of techniques, such as training a separate OOD detector or modifying the model's architecture or loss function to make it more sensitive to OOD inputs.
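As a concrete illustration of the confidence-based approach described above, here is a minimal sketch of the common maximum-softmax-probability baseline: inputs on which the classifier is unconfident are flagged as OOD. The function names (`msp_score`, `is_ood`) and the threshold value are illustrative assumptions, not from any specific paper listed here.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: high for confident (likely ID) inputs.
    return softmax(logits).max(axis=-1)

def is_ood(logits, threshold=0.5):
    # Flag inputs whose confidence falls below the threshold as OOD.
    # The threshold would normally be tuned on held-out ID data.
    return msp_score(logits) < threshold

# A confident prediction (peaked logits) vs. an ambiguous one (flat logits).
id_logits = np.array([[8.0, 0.5, 0.2]])
ood_logits = np.array([[1.0, 1.1, 0.9]])
print(is_ood(id_logits))   # → [False]
print(is_ood(ood_logits))  # → [ True]
```

In practice this baseline is often a starting point; many of the papers below improve on it by reshaping the training loss or using auxiliary data.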
Libraries
Use these libraries to find Out of Distribution (OOD) Detection models and implementations
Most implemented papers
Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers
In conjunction with the standard cross-entropy loss, we minimize the novel loss to train an ensemble of classifiers.
WAIC, but Why? Generative Ensembles for Robust Anomaly Detection
Machine learning models encounter Out-of-Distribution (OoD) errors when the data seen at test time are generated from a different stochastic generator than the one used to generate the training data.
Analysis of Confident-Classifiers for Out-of-distribution Detection
Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution).
Contextual Out-of-Domain Utterance Handling With Counterfeit Data Augmentation
Neural dialog models often lack robustness to anomalous user input and produce inappropriate responses, leading to a frustrating user experience.
Outlier Exposure with Confidence Control for Out-of-Distribution Detection
Deep neural networks have achieved great success in classification tasks in recent years.
Detecting semantic anomalies
We critically appraise the recent interest in out-of-distribution (OOD) detection and question the practical relevance of existing benchmarks.
Unsupervised Out-of-Distribution Detection by Maximum Classifier Discrepancy
Unlike previous methods, we also utilize unlabeled data for unsupervised training: we use these unlabeled data to maximize the discrepancy between the decision boundaries of two classifiers, pushing OOD samples outside the manifold of the in-distribution (ID) samples. This enables us to detect OOD samples that are far from the support of the ID samples.
Isotropy Maximization Loss and Entropic Score: Accurate, Fast, Efficient, Scalable, and Turnkey Neural Networks Out-of-Distribution Detection Based on The Principle of Maximum Entropy
Consequently, we propose IsoMax, a loss that is isotropic (distance-based) and produces high entropy (low confidence) posterior probability distributions despite still relying on cross-entropy minimization.
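The "entropic score" mentioned in this paper's title is, in essence, the Shannon entropy of the network's posterior distribution: high-entropy (low-confidence) outputs indicate likely OOD inputs. A minimal sketch of such a score, assuming plain softmax posteriors (the function name `entropic_score` and the example logits are illustrative, not the paper's exact formulation):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropic_score(logits):
    # Shannon entropy of the posterior; higher entropy suggests OOD.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Peaked posterior (confident, ID-like) vs. flat posterior (OOD-like).
peaked = np.array([[8.0, 0.5, 0.2]])
flat = np.array([[1.0, 1.1, 0.9]])
print(entropic_score(peaked))  # near 0
print(entropic_score(flat))    # near log(3), the maximum for 3 classes
```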
Out-of-Domain Detection for Low-Resource Text Classification Tasks
Out-of-domain (OOD) detection for low-resource text classification is a realistic but understudied task.
Out-of-domain Detection for Natural Language Understanding in Dialog Systems
Besides, we also demonstrate that the effectiveness of these pseudo OOD data can be further improved by efficiently utilizing unlabeled data.