170 papers with code • 40 benchmarks • 14 datasets
Detect out-of-distribution or anomalous examples.
In this work, we aim to use these biases, together with domain-level knowledge of melanoma, to improve likelihood-based OOD detection of malignant images.
A knee cannot have lung disease: out-of-distribution detection with in-distribution voting using the medical example of chest X-ray classification
Deep learning models are being applied to more and more use cases with astonishing success stories, but how do they perform in the real world?
Detecting out-of-distribution (OOD) data at inference time is crucial for many applications of machine learning.
This paper proposes a novel out-of-distribution (OOD) detection framework named MoodCat for image classifiers.
Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution.
Experiments on diverse real-world benchmarks demonstrate that the SRS method is well-suited for time-series OOD detection when compared to baseline methods.
Heteroscedasticity here means that the optimal temperature parameter may differ for each sample, whereas conventional approaches use a single value for the entire distribution.
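The contrast between a single global temperature and a per-sample one can be sketched as follows. This is a minimal illustration, not the paper's method: the function names and the NumPy setup are assumptions, and the per-sample temperatures would in practice be predicted by a model rather than supplied by hand.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, temperature):
    # conventional calibration: `temperature` is one scalar shared by all samples;
    # the heteroscedastic variant instead passes a per-sample column vector of
    # shape (n, 1), so each row of logits is divided by its own temperature
    return softmax(logits / temperature)
```

A larger temperature flattens the softmax distribution (lower confidence), a smaller one sharpens it; broadcasting a `(n, 1)` temperature array against `(n, k)` logits gives each sample its own degree of flattening.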
We study simple methods for out-of-distribution (OOD) image detection that are compatible with any already trained classifier, relying on only its predictions or learned representations.
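The simplest method in this family scores inputs by the classifier's own confidence: the maximum softmax probability (MSP) baseline. The sketch below is an illustration of that general idea under assumed names and a hand-picked threshold, not the specific method studied in any one of the papers listed here.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # maximum softmax probability: higher means more in-distribution
    return softmax(logits).max(axis=-1)

def is_ood(logits, threshold=0.5):
    # flag inputs whose top-class confidence falls below the threshold;
    # the threshold is a hypothetical value, tuned on held-out data in practice
    return msp_score(logits) < threshold
```

A confidently classified input (one logit much larger than the rest) scores near 1, while an input the classifier is unsure about (near-uniform logits over k classes) scores near 1/k and is flagged as OOD.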
Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition
However, in real-world applications, it is common for the training sets to have long-tailed distributions.