Out of Distribution (OOD) Detection
308 papers with code • 3 benchmarks • 9 datasets
Out of Distribution (OOD) Detection is the task of detecting instances that do not belong to the distribution the classifier has been trained on. OOD data is often referred to as "unseen" data, as the model has not encountered it during training.
OOD detection is typically performed by training a model to distinguish between in-distribution (ID) data, which the model has seen during training, and OOD data, which it has not. This can be done using a variety of techniques, such as training a separate OOD detector, scoring a trained classifier's outputs post hoc, or modifying the model's architecture or loss function to make it more sensitive to OOD inputs.
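As a concrete illustration of post-hoc scoring, here is a minimal sketch of the maximum softmax probability (MSP) baseline, which treats a classifier's confidence as an ID score. The function names are illustrative, not from any particular library, and the logits are toy values rather than real model outputs.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # maximum softmax probability: higher => more in-distribution
    return softmax(logits).max(axis=-1)

# a confident, ID-like prediction vs. a flat, OOD-like one
id_score = msp_score(np.array([[8.0, 0.5, 0.2]]))[0]
ood_score = msp_score(np.array([[1.1, 1.0, 0.9]]))[0]
```

In practice one thresholds this score on a validation set; inputs scoring below the threshold are flagged as OOD.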
Libraries
Use these libraries to find Out of Distribution (OOD) Detection models and implementations.
Most implemented papers
Deep Anomaly Detection with Outlier Exposure
We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
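The core idea of Outlier Exposure is to add a term to the training loss that pushes predictions on auxiliary outlier data toward the uniform distribution. A minimal numpy sketch of that loss, assuming logits are already computed (the papers use deep-learning frameworks; `oe_loss` and `lam` are illustrative names):

```python
import numpy as np

def log_softmax(logits):
    # numerically stable log-softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def oe_loss(id_logits, id_labels, ood_logits, lam=0.5):
    # standard cross-entropy on in-distribution samples
    ls_id = log_softmax(id_logits)
    ce = -ls_id[np.arange(len(id_labels)), id_labels].mean()
    # Outlier Exposure term: cross-entropy between the model's
    # prediction on auxiliary outliers and the uniform distribution;
    # averaging -log_softmax over classes equals (1/K) * sum_k -log p_k
    ce_uniform = -log_softmax(ood_logits).mean()
    return ce + lam * ce_uniform
```

When outlier logits are perfectly flat, the extra term equals log K, its minimum, so the penalty only grows as the model becomes confident on outliers.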
Improved Contrastive Divergence Training of Energy Based Models
Contrastive divergence is a popular method of training energy-based models, but is known to have difficulties with training stability.
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices
We find that characterizing activity patterns by Gram matrices and identifying anomalies in Gram-matrix values can yield high OOD detection rates.
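To make the Gram-matrix idea concrete, here is a simplified numpy sketch: record the per-entry min/max of layer Gram values over training data, then score a test input by how far its Gram entries fall outside those ranges. The real method aggregates over many layers and Gram orders; the function names here are illustrative.

```python
import numpy as np

def gram_matrix(feat):
    # feat: (channels, height*width) activations from one layer
    return feat @ feat.T

def fit_ranges(train_feats):
    # record per-entry min/max of Gram values over training samples
    grams = np.stack([gram_matrix(f) for f in train_feats])
    return grams.min(axis=0), grams.max(axis=0)

def deviation(feat, lo, hi):
    # total amount by which Gram entries fall outside the training range;
    # larger => more anomalous (OOD-like)
    g = gram_matrix(feat)
    return np.maximum(lo - g, 0).sum() + np.maximum(g - hi, 0).sum()
```

A training sample scores zero by construction, while activations with an unusual correlation structure accumulate positive deviation.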
Likelihood Ratios for Out-of-Distribution Detection
We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
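The likelihood-ratio idea can be sketched with two one-dimensional Gaussians standing in for the full and background generative models (the paper uses deep generative models; this toy version only illustrates the score):

```python
import numpy as np

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def llr_score(x, mu_fg, sig_fg, mu_bg, sig_bg):
    # log p_foreground(x) - log p_background(x):
    # higher => more in-distribution semantic content,
    # since shared background statistics cancel in the ratio
    return gaussian_logpdf(x, mu_fg, sig_fg) - gaussian_logpdf(x, mu_bg, sig_bg)
```

Raw likelihood under a single model can rank some OOD inputs higher than ID inputs; subtracting the background model's likelihood corrects for that.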
Hierarchical VAEs Know What They Don't Know
Deep generative models have been demonstrated as state-of-the-art density estimators.
A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection
Mahalanobis distance (MD) is a simple and popular post-processing method for detecting out-of-distribution (OOD) inputs in neural networks.
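A minimal sketch of the standard Mahalanobis post-processing score: fit class-conditional means and a shared covariance on ID features, then score a test feature by its minimum squared distance to any class mean. This assumes integer labels 0..C-1 and omits the paper's refinements; names are illustrative.

```python
import numpy as np

def fit_mahalanobis(features, labels):
    # class-conditional means and a shared (tied) covariance,
    # estimated from in-distribution features
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centered = features - means[labels]  # assumes labels are 0..C-1
    cov = centered.T @ centered / len(features)
    return means, np.linalg.inv(cov)

def md_score(x, means, prec):
    # minimum squared Mahalanobis distance to any class mean;
    # larger => more likely OOD
    d = x[None, :] - means
    return np.min(np.einsum('ci,ij,cj->c', d, prec, d))
```

Because the score needs only features, means, and one covariance, it can be bolted onto any trained classifier without retraining.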
Generalized Out-of-Distribution Detection: A Survey
In this survey, we first present a unified framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD detection, and OD.
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants.
OpenOOD: Benchmarking Generalized Out-of-Distribution Detection
Out-of-distribution (OOD) detection is vital to safety-critical machine learning applications and has thus been extensively studied, with a plethora of methods developed in the literature.
GL-MCM: Global and Local Maximum Concept Matching for Zero-Shot Out-of-Distribution Detection
Zero-shot out-of-distribution (OOD) detection is a task that detects OOD images during inference with only in-distribution (ID) class names.