Out-of-Distribution Detection
326 papers with code • 50 benchmarks • 22 datasets
Detect out-of-distribution or anomalous examples.
Libraries
Use these libraries to find Out-of-Distribution Detection models and implementations.
Latest papers
Out-of-Distribution Detection & Applications With Ablated Learned Temperature Energy
As deep neural networks are adopted in high-stakes domains, it is crucial to identify when inference inputs are Out-of-Distribution (OOD), so that users can be alerted to likely drops in performance and calibration that occur despite high model confidence.
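The snippet below is a minimal sketch of the standard energy score for OOD detection (Liu et al., 2020), which the temperature-ablation work above builds on; it is an illustration of the general technique, not the paper's exact method, and `energy_score` is a hypothetical name.

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Energy score: E(x) = -T * logsumexp(logits / T).
    # Lower energy suggests in-distribution; threshold on E to flag OOD.
    z = logits / T
    m = z.max(axis=-1, keepdims=True)          # max-shift for numerical stability
    return -T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

# A confidently classified input has lower energy than a near-uniform one:
id_logits  = np.array([[10.0, 0.0, 0.0]])
ood_logits = np.array([[1.0, 1.0, 1.0]])
```

Samples are then ranked by energy, and a threshold chosen on in-distribution validation data separates ID from OOD.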
GOODAT: Towards Test-time Graph Out-of-Distribution Detection
To identify and reject OOD samples with GNNs, recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
Towards Reliable AI Model Deployments: Multiple Input Mixup for Out-of-Distribution Detection
In extensive experiments on the CIFAR10 and CIFAR100 benchmarks, which are widely adopted in the out-of-distribution detection field, we demonstrate that our MIM consistently outperforms the SOTA methods.
Understanding normalization in contrastive representation learning and out-of-distribution detection
Our approach can be applied flexibly as an outlier exposure (OE) approach, where the out-of-distribution data is a large collection of random images, or as a fully self-supervised learning approach, where the out-of-distribution data is self-generated by applying distribution-shifting transformations.
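For context, a minimal sketch of the classic outlier-exposure loss term (Hendrycks et al., 2019): cross-entropy between the model's softmax on auxiliary outliers and the uniform distribution, which pushes the network toward maximum uncertainty on OOD inputs. `oe_uniform_loss` is an illustrative name; the paper above combines OE with contrastive representation learning rather than using this term alone.

```python
import numpy as np

def oe_uniform_loss(logits):
    # Cross-entropy between softmax(logits) and the uniform distribution
    # over K classes: -(1/K) * sum_k log p_k, averaged over the batch.
    m = logits.max(axis=-1, keepdims=True)     # max-shift for stability
    log_probs = logits - m - np.log(np.exp(logits - m).sum(axis=-1, keepdims=True))
    return float(-log_probs.mean())
```

The loss is minimized (at log K) exactly when the prediction is uniform, so adding it for outlier batches discourages confident predictions on OOD data.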
Out-of-Distribution Detection in Long-Tailed Recognition with Calibrated Outlier Class Learning
To this end, we introduce a novel calibrated outlier class learning (COCL) approach, in which 1) a debiased large margin learning method is introduced in the outlier class learning to distinguish OOD samples from both head and tail classes in the representation space and 2) an outlier-class-aware logit calibration method is defined to enhance the long-tailed classification confidence.
EAT: Towards Long-Tailed Out-of-Distribution Detection
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes, as the ability of a classifier to detect OOD instances is not strongly correlated with its accuracy on the in-distribution classes.
Navigating Open Set Scenarios for Skeleton-based Action Recognition
In real-world scenarios, human actions often fall outside the distribution of training data, making it crucial for models to recognize known actions and reject unknown ones.
Likelihood-Aware Semantic Alignment for Full-Spectrum Out-of-Distribution Detection
Full-spectrum out-of-distribution (F-OOD) detection aims to accurately recognize in-distribution (ID) samples while encountering semantic and covariate shifts simultaneously.
ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection
Out-of-distribution (OOD) detection methods often exploit auxiliary outliers to train a model to identify OOD samples, in particular by mining challenging outliers from an auxiliary outlier dataset to improve OOD detection.
RankFeat&RankWeight: Rank-1 Feature/Weight Removal for Out-of-distribution Detection
This observation motivates us to propose \texttt{RankFeat}, a simple yet effective \emph{post hoc} approach for OOD detection by removing the rank-1 matrix composed of the largest singular value and the associated singular vectors from the high-level feature.