Out of Distribution (OOD) Detection
214 papers with code • 5 benchmarks • 7 datasets
Out of Distribution (OOD) Detection is the task of detecting instances that do not belong to the distribution on which a classifier was trained. OOD data is often referred to as "unseen" data, since the model has not encountered it during training.
OOD detection is typically performed by training a model to distinguish between in-distribution (ID) data, which the model has seen during training, and OOD data, which it has not. This can be done with a variety of techniques, such as training a separate OOD detector or modifying the model's architecture or loss function to make it more sensitive to OOD data. In practice, most approaches reduce to computing a scalar score for each input and comparing it to a threshold.
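As a concrete illustration of the score-and-threshold recipe, one widely used post-hoc baseline is maximum softmax probability (MSP): score each input by the classifier's top softmax confidence and flag low-confidence inputs as OOD. The sketch below assumes the logits come from an already-trained classifier; the `msp_scores` name and the threshold value are illustrative, and in practice the threshold is tuned on held-out ID data.

```python
import numpy as np

def msp_scores(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability (MSP) confidence scores.

    logits: (N, C) array of classifier outputs for N inputs.
    Returns one confidence score per input; low scores suggest OOD.
    """
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

# Hypothetical usage: real logits would come from a trained classifier.
logits = np.random.randn(4, 10)           # stand-in for real model outputs
threshold = 0.5                           # would be tuned on held-out ID data
is_ood = msp_scores(logits) < threshold   # True => flagged as OOD
```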
One line of work proposes a likelihood ratio method for deep generative models that effectively corrects for confounding background statistics in the input.
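As a sketch of the scoring step in such a likelihood-ratio approach, assume a full generative model trained on ID data and a background model trained on perturbed inputs, each exposing per-example log-likelihoods; the function name below is illustrative.

```python
import numpy as np

def llr_scores(log_px_full: np.ndarray, log_px_bg: np.ndarray) -> np.ndarray:
    """Likelihood-ratio OOD scores: log p_full(x) - log p_bg(x).

    log_px_full: log-likelihoods under a generative model trained on ID data.
    log_px_bg:   log-likelihoods under a background model trained on
                 perturbed inputs, capturing non-semantic statistics.
    Higher scores mean the ID-specific (semantic) component explains the
    input better than background statistics alone; low scores suggest OOD.
    """
    return np.asarray(log_px_full) - np.asarray(log_px_bg)
```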
Another approach characterizes activity patterns by their Gram matrices; identifying anomalies in the Gram-matrix values can yield high OOD detection rates.
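A minimal sketch of the two core computations, assuming a layer's feature map has been flattened to a (channels, positions) array and that per-entry minimum/maximum Gram values were recorded on training data; the normalization follows the general idea rather than reproducing any exact published formula.

```python
import numpy as np

def gram_matrix(feats: np.ndarray) -> np.ndarray:
    """Gram matrix of one layer's activations for a single input.

    feats: (C, P) array of C channel activations over P spatial positions.
    Returns the (C, C) matrix of pairwise channel co-activations.
    """
    return feats @ feats.T

def gram_deviation(g: np.ndarray, g_min: np.ndarray, g_max: np.ndarray) -> float:
    """Total normalized amount by which Gram values fall outside the
    [g_min, g_max] ranges recorded on training data; larger values
    indicate more anomalous activation patterns."""
    eps = 1e-8
    below = np.clip(g_min - g, 0.0, None) / (np.abs(g_min) + eps)
    above = np.clip(g - g_max, 0.0, None) / (np.abs(g_max) + eps)
    return float((below + above).sum())
```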
Energy-based models have also been applied to this task: contrastive divergence is a popular method for training them, but it is known to suffer from training instability.
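For context, here is a bare-bones sketch of a contrastive-divergence-style training objective, with negatives drawn by short-run Langevin dynamics; step counts and step sizes are illustrative, and the stability issues mentioned above are what refinements of this basic loop try to address.

```python
import torch

def langevin_negatives(energy_fn, x, steps=20, step_size=1e-2, noise_scale=5e-3):
    """Approximate model samples via short-run Langevin dynamics."""
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        energy = energy_fn(x).sum()
        grad, = torch.autograd.grad(energy, x)
        x = x - step_size * grad + noise_scale * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()

def cd_loss(energy_fn, x_data):
    """Basic contrastive divergence objective: lower the energy of real
    data, raise the energy of model samples."""
    x_neg = langevin_negatives(energy_fn, x_data)
    return energy_fn(x_data).mean() - energy_fn(x_neg).mean()
```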
In principle, an important application of generative modeling is the ability to detect OOD samples by setting a threshold on the model's likelihood.
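A minimal sketch of that thresholding scheme, assuming per-example log-likelihoods from the generative model are available; the quantile-based threshold is one illustrative choice. Note that deep generative models can in practice assign high likelihood to OOD inputs, which is part of what motivates corrections such as the likelihood-ratio score above.

```python
import numpy as np

def likelihood_threshold(log_px_id: np.ndarray, alpha: float = 0.05) -> float:
    """Set the threshold at the alpha-quantile of ID log-likelihoods, so
    roughly a fraction alpha of ID inputs would be wrongly rejected."""
    return float(np.quantile(log_px_id, alpha))

def flag_ood(log_px: np.ndarray, tau: float) -> np.ndarray:
    """Inputs whose log-likelihood falls below the threshold are flagged."""
    return np.asarray(log_px) < tau
```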
Mahalanobis distance (MD) is a simple and popular post-processing method for detecting OOD inputs in neural networks.
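A minimal sketch of the standard recipe, assuming penultimate-layer features for the ID training set: fit per-class means and a shared covariance, then score a test feature by its squared distance to the nearest class centroid (function names are illustrative).

```python
import numpy as np

def fit_mahalanobis(feats: np.ndarray, labels: np.ndarray):
    """Fit per-class means and a shared (tied) precision matrix.

    feats:  (N, D) penultimate-layer features for the ID training set.
    labels: (N,) integer class labels.
    """
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centered = feats - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(feats)
    return means, np.linalg.pinv(cov)  # pinv guards against singular cov

def md_score(x: np.ndarray, means: np.ndarray, precision: np.ndarray) -> float:
    """Squared Mahalanobis distance to the nearest class centroid;
    larger values suggest the input is OOD."""
    diffs = x - means  # (K, D), one row per class
    return float(np.min(np.einsum('kd,de,ke->k', diffs, precision, diffs)))
```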
Heteroscedastic temperature scaling has also been exploited as a calibration strategy for OOD detection.
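To illustrate the underlying idea with the simpler homoscedastic variant, a single shared temperature rescales the logits before the softmax (as in ODIN-style scoring), rather than a per-input heteroscedastic temperature; the temperature value below is illustrative and would be tuned on validation data.

```python
import numpy as np

def tempered_msp(logits: np.ndarray, temperature: float = 1000.0) -> np.ndarray:
    """MSP confidence computed on temperature-scaled logits.

    A large shared temperature flattens the softmax; empirically this
    tends to widen the gap between ID and OOD confidence scores.
    """
    z = logits / temperature
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)
```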