Pixel-wise Anomaly Detection in Complex Driving Scenes

The inability of state-of-the-art semantic segmentation methods to detect anomaly instances hinders them from being deployed in safety-critical and complex applications, such as autonomous driving. Recent approaches have focused on either leveraging segmentation uncertainty to identify anomalous areas or re-synthesizing the image from the semantic label map to find dissimilarities with the input image. In this work, we demonstrate that these two methodologies contain complementary information and can be combined to produce robust predictions for anomaly segmentation. We present a pixel-wise anomaly detection framework that uses uncertainty maps to improve over existing re-synthesis methods in finding dissimilarities between the input and generated images. Our approach works as a general framework around already trained segmentation networks, which ensures anomaly detection without compromising segmentation accuracy, while significantly outperforming all similar methods. Top-2 performance across a range of anomaly datasets demonstrates the robustness of our approach in handling diverse anomaly instances.
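
To make the idea concrete, below is a minimal sketch (assuming PyTorch) of how a softmax-entropy uncertainty map and a re-synthesis dissimilarity map can be fused into a single per-pixel anomaly score. All function names are illustrative, and the plain L1 dissimilarity and fixed weighted average stand in for the learned dissimilarity/fusion module described in the paper; this is not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F


def softmax_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-pixel entropy of the segmentation softmax; shape (B, H, W)."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1)


def pixel_dissimilarity(image: torch.Tensor, resynthesis: torch.Tensor) -> torch.Tensor:
    """Toy per-pixel L1 dissimilarity in RGB space; shape (B, H, W).
    (The paper compares input and re-synthesized images with a learned
    dissimilarity network; plain L1 is used here only to keep the sketch short.)"""
    return (image - resynthesis).abs().mean(dim=1)


def _minmax(x: torch.Tensor) -> torch.Tensor:
    """Rescale each map in the batch to [0, 1]."""
    lo = x.amin(dim=(1, 2), keepdim=True)
    hi = x.amax(dim=(1, 2), keepdim=True)
    return (x - lo) / (hi - lo + 1e-8)


def anomaly_score(logits: torch.Tensor,
                  image: torch.Tensor,
                  resynthesis: torch.Tensor,
                  w_uncertainty: float = 0.5) -> torch.Tensor:
    """Blend the uncertainty and dissimilarity cues into one anomaly map."""
    unc = _minmax(softmax_entropy(logits))
    dis = _minmax(pixel_dissimilarity(image, resynthesis))
    return w_uncertainty * unc + (1.0 - w_uncertainty) * dis


if __name__ == "__main__":
    B, C, H, W = 1, 19, 64, 128           # e.g. 19 Cityscapes classes
    logits = torch.randn(B, C, H, W)      # output of an already trained segmentation net
    image = torch.rand(B, 3, H, W)        # input image
    resynth = torch.rand(B, 3, H, W)      # image re-synthesized from the label map
    print(anomaly_score(logits, image, resynth).shape)  # torch.Size([1, 64, 128])
```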

PDF Abstract (CVPR 2021)

Results from the Paper


Ranked #3 on Anomaly Detection on Lost and Found (using extra training data)

Task                  | Dataset         | Model    | Metric | Value | Global Rank
Semantic Segmentation | Cityscapes val  | SynBoost | mIoU   | 83.5  | #21
Anomaly Detection     | Fishyscapes     | SynBoost | AP     | 72.59 | #4
Anomaly Detection     | Fishyscapes     | SynBoost | FPR95  | 18.75 | #6
Anomaly Detection     | Fishyscapes L&F | SynBoost | AP     | 43.22 | #9
Anomaly Detection     | Fishyscapes L&F | SynBoost | FPR95  | 15.79 | #11
Anomaly Detection     | Lost and Found  | SynBoost | AP     | 70.43 | #3
Anomaly Detection     | Lost and Found  | SynBoost | FPR    | 4.89  | #2
Anomaly Detection     | Road Anomaly    | SynBoost | AP     | 41.83 | #6
Anomaly Detection     | Road Anomaly    | SynBoost | FPR95  | 59.72 | #7

Methods


No methods listed for this paper.