About

Detect out-of-distribution or anomalous examples.

Greatest papers with code

Likelihood Ratios for Out-of-Distribution Detection

NeurIPS 2019 google-research/google-research

We propose a likelihood ratio method for deep generative models that effectively corrects for confounding background statistics.

OUT-OF-DISTRIBUTION DETECTION
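The core idea is to score an input by the ratio of its likelihood under a model trained on in-distribution data to its likelihood under a "background" model, so that population-level background statistics cancel out. A minimal sketch (the function name and toy values are mine, not the paper's code):

```python
import numpy as np

def likelihood_ratio_score(log_p_full, log_p_background):
    """OOD score as a log-likelihood ratio.

    log_p_full: log-likelihoods under a model trained on in-distribution data.
    log_p_background: log-likelihoods under a background model trained on
    perturbed inputs, which captures confounding background statistics.
    Lower scores suggest out-of-distribution inputs.
    """
    return np.asarray(log_p_full) - np.asarray(log_p_background)

# Toy illustration with made-up log-likelihoods for two inputs:
scores = likelihood_ratio_score([-80.0, -95.0], [-85.0, -90.0])
# The first input scores higher, i.e. looks more in-distribution.
```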

A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks

22 Dec 2019 OATML/bdl-benchmarks

From our comparison we conclude that some current techniques which solve benchmarks such as UCI 'overfit' their uncertainty to the dataset; when evaluated on our benchmark, these underperform in comparison to simpler baselines.

OUT-OF-DISTRIBUTION DETECTION

Deep Anomaly Detection with Outlier Exposure

ICLR 2019 hendrycks/outlier-exposure

We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.

Ranked #2 on Out-of-Distribution Detection on CIFAR-100 (using extra training data)

ANOMALY DETECTION OUT-OF-DISTRIBUTION DETECTION
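Outlier Exposure trains the classifier normally on in-distribution data while penalizing confident predictions on an auxiliary outlier dataset. A numpy sketch of this kind of objective (a simplification, not the authors' exact code; the hyperparameter `lam` is my placeholder):

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the class axis."""
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def outlier_exposure_loss(logits_in, targets_in, logits_out, lam=0.5):
    """Cross-entropy on in-distribution inputs, plus a term pushing the
    posterior on auxiliary outlier inputs toward the uniform distribution."""
    logp_in = log_softmax(np.asarray(logits_in, dtype=float))
    ce = -logp_in[np.arange(len(targets_in)), targets_in].mean()
    # Cross-entropy to uniform = mean negative log-probability over classes:
    uniform_term = -log_softmax(np.asarray(logits_out, dtype=float)).mean()
    return ce + lam * uniform_term
```

At test time, inputs on which the model's maximum softmax probability is low are flagged as out-of-distribution.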

Natural Adversarial Examples

16 Jul 2019 hendrycks/natural-adv-examples

We also curate an adversarial out-of-distribution detection dataset called ImageNet-O, which is the first out-of-distribution detection dataset created for ImageNet models.

ADVERSARIAL ATTACK DATA AUGMENTATION DOMAIN GENERALIZATION OUT-OF-DISTRIBUTION DETECTION

Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification

NeurIPS 2020 VLL-HD/FrEIA

In this work we develop the theory and methodology of IB-INNs, a class of conditional normalizing flows where INNs are trained using the IB objective: introducing a small amount of controlled information loss allows for an asymptotically exact formulation of the IB, while keeping the INN's generative capabilities intact.

CLASSIFICATION OUT-OF-DISTRIBUTION DETECTION
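The Information Bottleneck objective referenced above, in its standard Lagrangian form (the paper's exact parameterization of the trade-off may differ):

```latex
\mathcal{L}_{\mathrm{IB}} = I(X; Z) - \beta\, I(Z; Y)
```

Here $Z$ is the latent representation, and $\beta$ trades off compression of the input $X$ against preservation of information about the label $Y$.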

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

NeurIPS 2018 pokaxpoka/deep_Mahalanobis_detector

Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.

CLASS-INCREMENTAL LEARNING INCREMENTAL LEARNING OUT-OF-DISTRIBUTION DETECTION
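Judging by the repository name, the framework scores inputs by Mahalanobis distance to class-conditional Gaussians with a shared covariance fitted on intermediate features. A minimal numpy sketch of that scoring rule (function and argument names are my own):

```python
import numpy as np

def mahalanobis_ood_score(x, class_means, shared_cov_inv):
    """Negative Mahalanobis distance to the closest class mean.

    x: feature vector for a test input.
    class_means: per-class feature means estimated on training data.
    shared_cov_inv: inverse of the tied (shared) covariance matrix.
    Higher scores indicate more in-distribution inputs.
    """
    dists = []
    for mu in class_means:
        d = x - mu
        dists.append(d @ shared_cov_inv @ d)
    return -min(dists)
```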

Learning Confidence for Out-of-Distribution Detection in Neural Networks

13 Feb 2018 uoguelph-mlrg/confidence_estimation

Modern neural networks are very powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong.

OUT-OF-DISTRIBUTION DETECTION
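The approach adds a confidence output to the network: during training, predictions are interpolated toward the true label in proportion to how little confidence the model claims, and a log penalty stops it from always asking for these "hints". A numpy sketch of the idea (a simplification of the training objective, not the repository's code; `lam` is my placeholder hyperparameter):

```python
import numpy as np

def confidence_loss(probs, confidence, targets_onehot, lam=0.1):
    """Task loss with learned confidence.

    probs: predicted class probabilities, shape (batch, classes).
    confidence: per-example confidence c in (0, 1].
    Low-confidence predictions are nudged toward the label, but each
    use of that hint is charged a -log(c) penalty.
    """
    probs = np.asarray(probs, dtype=float)
    c = np.asarray(confidence, dtype=float)[:, None]
    t = np.asarray(targets_onehot, dtype=float)
    adjusted = c * probs + (1 - c) * t
    nll = -(t * np.log(adjusted + 1e-12)).sum(axis=1).mean()
    penalty = -np.log(np.asarray(confidence, dtype=float) + 1e-12).mean()
    return nll + lam * penalty
```

At test time the learned confidence itself serves as the out-of-distribution score: inputs assigned low confidence are flagged.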