Out-of-Distribution Detection

326 papers with code • 50 benchmarks • 22 datasets

Detect out-of-distribution (OOD) or anomalous examples, i.e. inputs drawn from a different distribution than the model's training data.

Most implemented papers

Hierarchical VAEs Know What They Don't Know

vlievin/biva-pytorch 16 Feb 2021

Deep generative models have been demonstrated to be state-of-the-art density estimators, yet they can assign higher likelihood to out-of-distribution data than to the data they were trained on.
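
The paper's core trick can be sketched in a few lines: score an input by the gap between the ELBO under the full hierarchical posterior and an ELBO where the top latent layers are instead sampled from the prior, so inputs that genuinely benefit from the learned high-level features score higher. A minimal sketch; `elbo_full` and `elbo_top_from_prior` are hypothetical stand-ins for quantities a trained hierarchical VAE (e.g. from biva-pytorch) would provide.

```python
import torch

def llr_score(x, elbo_full, elbo_top_from_prior):
    """Likelihood-ratio style OOD score: ELBO with the full posterior
    minus ELBO with the top latent layers drawn from the prior.
    Both arguments are callables returning per-example ELBOs."""
    return elbo_full(x) - elbo_top_from_prior(x)

# Toy usage with placeholder callables standing in for a trained model.
x = torch.randn(8, 784)
full = lambda x: -0.5 * (x ** 2).sum(dim=1)   # placeholder ELBO
skip = lambda x: -0.6 * (x ** 2).sum(dim=1)   # placeholder ELBO
scores = llr_score(x, full, skip)             # higher => more in-distribution
```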

MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space

deeplearning-wisc/large_scale_ood CVPR 2021

Detecting out-of-distribution (OOD) inputs is a central challenge for safely deploying machine learning models in the real world.
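
MOS partitions a large label space into semantic groups, each with an extra "others" category; one common reading of its score is the negated minimum "others" probability across groups, since an in-distribution input should be claimed by at least one group. A minimal sketch, assuming per-group logits whose last column is that group's "others" class (an illustrative layout, not necessarily the repo's exact API).

```python
import torch
import torch.nn.functional as F

def mos_score(group_logits):
    """group_logits: list of (batch, n_classes_in_group + 1) tensors,
    last column = the group's 'others' category.
    Returns a score where higher = more likely in-distribution."""
    others = [F.softmax(g, dim=1)[:, -1] for g in group_logits]  # p(others) per group
    p_others_min = torch.stack(others, dim=1).min(dim=1).values
    return -p_others_min  # ID inputs drive some group's 'others' prob toward 0

# Toy usage: 3 groups with 5, 7, and 4 real classes.
logits = [torch.randn(8, k + 1) for k in (5, 7, 4)]
scores = mos_score(logits)
```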

Open-set Label Noise Can Improve Robustness Against Inherent Label Noise

hongxin001/ODNL NeurIPS 2021

Learning with noisy labels is a practically challenging problem in weakly supervised learning.
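
The method's regularizer is simple to sketch: alongside the usual cross-entropy on clean data, auxiliary open-set inputs are trained on labels redrawn uniformly at random. `odnl_step`, `eta`, and the toy linear model below are illustrative assumptions, not the repo's API.

```python
import torch
import torch.nn.functional as F

def odnl_step(model, x, y, x_open, num_classes, eta=1.0):
    """One loss computation with Open-set Dynamic Noisy Labels:
    clean CE on labeled data plus CE on open-set inputs whose labels
    are drawn uniformly at random (redrawn every step)."""
    loss_clean = F.cross_entropy(model(x), y)
    y_rand = torch.randint(0, num_classes, (x_open.size(0),), device=x_open.device)
    loss_open = F.cross_entropy(model(x_open), y_rand)
    return loss_clean + eta * loss_open

# Toy usage with a linear "model" on flattened inputs.
model = torch.nn.Linear(32, 10)
loss = odnl_step(model, torch.randn(16, 32), torch.randint(0, 10, (16,)),
                 torch.randn(16, 32), num_classes=10)
loss.backward()
```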

A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification

aangelopoulos/conformal-prediction 15 Jul 2021

Conformal prediction is a user-friendly paradigm for creating statistically rigorous uncertainty sets/intervals for the predictions of black-box machine learning models.
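
The tutorial's running example, split conformal prediction for classification, fits in a few lines of numpy: calibrate a score threshold on held-out data, then include in each prediction set every label whose softmax score clears it. A minimal sketch following that recipe.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets.
    cal_probs: (n, K) softmax outputs on a held-out calibration set.
    Returns a boolean (m, K) matrix of prediction sets with
    P(true label in set) >= 1 - alpha, under exchangeability."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]           # conformal score
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n,
                    method="higher")                             # calibrated threshold
    return test_probs >= 1.0 - q                                 # include likely labels

# Toy usage with random "softmax" outputs.
rng = np.random.default_rng(0)
cal_p = rng.dirichlet(np.ones(5), size=200)
sets = conformal_sets(cal_p, rng.integers(0, 5, 200),
                      rng.dirichlet(np.ones(5), size=10))
```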

Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty

hendrycks/ss-ood NeurIPS 2019

Self-supervision provides effective representations for downstream tasks without requiring labels.
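
The repo's central recipe can be sketched as a standard classifier loss plus an auxiliary head that predicts which of four rotations was applied to the input; `backbone`, `cls_head`, `rot_head`, and `lam` below are illustrative names, not the repo's API.

```python
import torch
import torch.nn.functional as F

def ss_ood_loss(backbone, cls_head, rot_head, x, y, lam=0.5):
    """Supervised CE plus an auxiliary self-supervised loss that
    predicts which of 4 rotations (0/90/180/270 deg) was applied."""
    loss_cls = F.cross_entropy(cls_head(backbone(x)), y)
    # Build a rotated batch with rotation labels 0..3, matching the cat order.
    rots = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)
    rot_y = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    loss_rot = F.cross_entropy(rot_head(backbone(rots)), rot_y)
    return loss_cls + lam * loss_rot

# Toy usage with stand-in modules on 32x32 images.
backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
cls_head, rot_head = torch.nn.Linear(64, 10), torch.nn.Linear(64, 4)
loss = ss_ood_loss(backbone, cls_head, rot_head,
                   torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```

At test time the self-supervised loss itself can contribute to the anomaly score, since rotation prediction tends to be harder on out-of-distribution inputs.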

Natural Adversarial Examples

hendrycks/natural-adv-examples CVPR 2021

We also curate an adversarial out-of-distribution detection dataset called ImageNet-O, which is the first out-of-distribution detection dataset created for ImageNet models.
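
Evaluation on ImageNet-O reduces to scoring in-distribution and OOD images with a detector and computing area under the precision-recall curve with OOD as the positive class; a minimal sketch using the maximum-softmax-probability baseline (the function name and array layout are assumptions).

```python
import numpy as np
from sklearn.metrics import average_precision_score

def aupr_ood(id_probs, ood_probs):
    """AUPR with OOD as the positive class, scoring each image by
    the negative maximum softmax probability (higher = more anomalous)."""
    scores = np.concatenate([-id_probs.max(axis=1), -ood_probs.max(axis=1)])
    labels = np.concatenate([np.zeros(len(id_probs)), np.ones(len(ood_probs))])
    return average_precision_score(labels, scores)
```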

Scaling Out-of-Distribution Detection for Real-World Settings

hendrycks/anomaly-seg 25 Nov 2019

We conduct extensive experiments in these more realistic, large-scale settings and find that a surprisingly simple detector based on the maximum logit outperforms prior methods across multi-class, multi-label, and segmentation tasks, establishing a simple new baseline for future work.
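
The max-logit baseline is a one-liner: replace the maximum softmax probability with the unnormalized maximum logit, which preserves magnitude information that softmax normalization discards. A minimal sketch.

```python
import torch

def max_logit_score(logits):
    """Anomaly score from the max-logit baseline: negative max logit.
    Higher score = more anomalous."""
    return -logits.max(dim=1).values

# Compare with the max-softmax-probability (MSP) baseline.
logits = torch.randn(8, 1000)
msp_score = -torch.softmax(logits, dim=1).max(dim=1).values
ml_score = max_logit_score(logits)
```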

Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification

VLL-HD/FrEIA NeurIPS 2020

In this work we develop the theory and methodology of IB-INNs, a class of conditional normalizing flows in which invertible neural networks (INNs) are trained with the information bottleneck (IB) objective: introducing a small amount of controlled information loss allows an asymptotically exact formulation of the IB while keeping the INN's generative capabilities intact.
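
The full IB objective is beyond a snippet, but the generative-classifier half of the idea (an invertible map with class-conditional Gaussian latents, classifying via Bayes' rule and scoring OOD via log p(x)) can be sketched. Everything below is a toy stand-in: the elementwise affine map replaces a real coupling-block INN such as those provided by VLL-HD/FrEIA.

```python
import torch
import torch.nn.functional as F

class ToyFlowClassifier(torch.nn.Module):
    """Sketch of an INN-based generative classifier: an invertible map
    z = f(x) with per-class Gaussian latents N(mu_y, I). Class posteriors
    come from Bayes' rule; log p(x) doubles as an OOD score."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.log_s = torch.nn.Parameter(torch.zeros(dim))
        self.t = torch.nn.Parameter(torch.zeros(dim))
        self.mu = torch.nn.Parameter(torch.randn(n_classes, dim))

    def forward(self, x):
        z = x * self.log_s.exp() + self.t                    # invertible map
        log_det = self.log_s.sum()                           # log |det df/dx|
        # log N(z; mu_y, I) for every class y, up to a shared constant.
        log_pz_y = -0.5 * ((z[:, None, :] - self.mu[None]) ** 2).sum(-1)
        log_px = torch.logsumexp(log_pz_y, dim=1) - torch.log(
            torch.tensor(float(self.mu.size(0)))) + log_det  # uniform p(y)
        log_py_x = F.log_softmax(log_pz_y, dim=1)            # Bayes posterior
        return log_px, log_py_x

# Toy usage: density for OOD scoring, posterior for classification.
model = ToyFlowClassifier(dim=16, n_classes=5)
log_px, log_py_x = model(torch.randn(8, 16))
```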

A Benchmark of Medical Out of Distribution Detection

caotians1/OD-test-master 8 Jul 2020

However, it is unclear which out-of-distribution detection (OoDD) method should be used in practice.

Masksembles for Uncertainty Estimation

nikitadurasov/masksembles CVPR 2021

Our central intuition is that there is a continuous spectrum of ensemble-like models of which MC-Dropout and Deep Ensembles are extreme examples.
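
The idea interpolates between MC-Dropout (fully random masks) and Deep Ensembles (disjoint subnetworks) by using a small set of fixed, partially overlapping binary masks. A minimal sketch; the simple Bernoulli mask generation here replaces the paper's structured scheme for controlling mask overlap.

```python
import torch

class ToyMasksembles(torch.nn.Module):
    """Sketch of a Masksembles-style layer: K fixed binary channel masks
    generated once with a target keep-rate; mask overlap controls where
    the model sits between MC-Dropout and a deep ensemble."""
    def __init__(self, channels, k=4, keep=0.5):
        super().__init__()
        masks = (torch.rand(k, channels) < keep).float()
        self.register_buffer("masks", masks)

    def forward(self, x, member):
        return x * self.masks[member]              # fixed per-member mask

# Inference: run every member and average predictions; the spread
# across members serves as an uncertainty signal.
layer = ToyMasksembles(channels=16, k=4)
feats = torch.randn(8, 16)
outs = torch.stack([layer(feats, m) for m in range(4)])
mean, var = outs.mean(0), outs.var(0)
```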