Robust Out-of-distribution Detection for Neural Networks

Detecting anomalous inputs is critical for safely deploying deep learning models in the real world. Existing approaches for detecting out-of-distribution (OOD) examples work well when evaluated on natural samples drawn from a distribution sufficiently different from the training data distribution...
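For context, the existing approaches the abstract refers to typically score each input and flag low-scoring ones as OOD. Below is a minimal sketch of the standard maximum-softmax-probability (MSP) baseline (Hendrycks & Gimpel, 2017), not this paper's method; the model and the detection threshold are hypothetical placeholders.

```python
# Minimal MSP-based OOD scoring sketch (standard baseline, not this paper's
# method). Assumes `model` is any trained classifier that returns logits;
# the threshold below is a hypothetical value tuned on held-out
# in-distribution data.
import torch
import torch.nn.functional as F

def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return the max softmax probability per input; low scores suggest OOD."""
    model.eval()
    with torch.no_grad():
        logits = model(x)                              # (batch, num_classes)
        return F.softmax(logits, dim=-1).max(dim=-1).values

# Usage sketch: flag inputs whose confidence falls below the threshold.
# model = ...                       # any trained classifier (assumption)
# scores = msp_score(model, batch)  # shape: (batch_size,)
# is_ood = scores < 0.9             # hypothetical threshold
```

As the abstract notes, detectors of this kind are evaluated on natural OOD samples; the paper's concern is how robust such detection remains under harder conditions.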
