Dense Out-of-Distribution Detection by Robust Learning on Synthetic Negative Data

23 Dec 2021  ·  Matej Grcić, Petra Bevandić, Zoran Kalafatić, Siniša Šegvić

Standard machine learning models cannot accommodate inputs that do not belong to the training distribution. The resulting models often produce confidently incorrect predictions, which may lead to devastating consequences. This problem is especially demanding in the context of dense prediction, since input images may be only partially anomalous. Previous work has addressed dense out-of-distribution detection through discriminative training against off-the-shelf negative datasets. However, real negative data are unlikely to cover all modes of the visual world. We therefore extend this approach by generating synthetic negative patches along the border of the inlier manifold. We leverage a jointly trained normalizing flow due to its coverage-oriented learning objective and its capability to generate samples at different resolutions. We detect anomalies according to a principled information-theoretic criterion that can be applied consistently during training and inference. The resulting models set the new state of the art on benchmarks for out-of-distribution detection in road-driving scenes and remote sensing imagery, while incurring minimal computational overhead.
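The abstract's exact information-theoretic criterion is not spelled out here; as a rough illustration of dense out-of-distribution scoring in general, a common baseline flags pixels whose per-pixel predictive distribution over the known classes is close to uniform (high entropy). The function name `dense_ood_score` and the toy logits below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def dense_ood_score(logits):
    """Per-pixel anomaly score from dense classification logits.

    logits: array of shape (H, W, K) over K known classes.
    Returns an (H, W) map; higher values suggest the pixel is
    more likely to be out-of-distribution.
    """
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    # Entropy of the predictive distribution: near-uniform
    # predictions (high entropy) are treated as anomalous.
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Toy example: a confident pixel vs. an uncertain one.
logits = np.zeros((1, 2, 3))
logits[0, 0] = [8.0, 0.0, 0.0]   # peaked prediction (inlier-like)
logits[0, 1] = [0.0, 0.0, 0.0]   # flat prediction (anomaly-like)
score = dense_ood_score(logits)
```

With this sketch, the uncertain pixel receives a strictly higher anomaly score than the confident one, which is the behavior any dense OOD criterion of this family should exhibit.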

Results from the Paper


Ranked #2 on Anomaly Detection on Fishyscapes L&F (using extra training data)
| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Anomaly Detection | Fishyscapes L&F | NFlowJS-GF (with extra inlier set: Vistas and Wilddash2) | AP | 69.43 | #2 | yes |
| Anomaly Detection | Fishyscapes L&F | NFlowJS-GF (with extra inlier set: Vistas and Wilddash2) | FPR95 | 2.00 | #1 | yes |
| Anomaly Detection | Fishyscapes L&F | NFlow | AP | 39.36 | #11 | |