Fast Unsupervised Anomaly Detection in Traffic Videos

CVPR 2020 · Keval Doshi, Yasin Yilmaz

Anomaly detection in traffic videos has recently been gaining attention due to its importance in intelligent transportation systems. It remains a challenging problem because several factors, such as weather, viewpoint, and lighting conditions, affect the video quality of a real-time traffic feed. Even though the performance of state-of-the-art methods on the available benchmark dataset has been competitive, they demand a massive amount of external training data combined with significant computational resources. In this paper, we propose a fast unsupervised anomaly detection system comprising three modules: a preprocessing module, a candidate selection module, and a backtracking anomaly detection module. The preprocessing module outputs stationary objects detected in a video. The candidate selection module then removes misclassified stationary objects using a nearest neighbor approach and applies K-means clustering to identify potential anomalous regions. Finally, the backtracking anomaly detection algorithm computes a similarity statistic and decides on the onset time of the anomaly. Experimental results on the Track 4 test set of the NVIDIA AI CITY 2020 challenge show the efficacy of the proposed framework: we achieve an F1-score of 0.5926 with a root mean square error (RMSE) of 8.2386, ranking second in the competition.
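As a rough illustration of the candidate selection step described above, the sketch below (not the authors' code) filters stationary-object detections with a nearest-neighbor check against normal-traffic objects and then clusters the survivors with K-means to propose candidate anomalous regions. The function name, detection format, distance threshold, and cluster count k are assumptions made for this example.

```python
# Hypothetical sketch of the candidate selection module: a nearest-neighbor filter
# removes likely misclassified stationary objects, and K-means clusters the remaining
# detections into potential anomalous regions. Thresholds and k are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def select_candidate_regions(stationary_centroids, normal_centroids,
                             nn_dist_thresh=20.0, k=5):
    """stationary_centroids: (N, 2) centers of detected stationary objects.
    normal_centroids: (M, 2) centers of objects seen during normal traffic flow.
    Returns K-means cluster centers over detections far from normal activity."""
    # Drop detections that lie close to normal-traffic objects; these are likely
    # misclassified stationary objects (e.g., vehicles waiting at a red light).
    nn = NearestNeighbors(n_neighbors=1).fit(normal_centroids)
    dist, _ = nn.kneighbors(stationary_centroids)
    candidates = stationary_centroids[dist[:, 0] > nn_dist_thresh]

    if len(candidates) < k:  # not enough detections to form k clusters
        return candidates

    # K-means groups the remaining detections into candidate anomalous regions.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(candidates)
    return km.cluster_centers_

# Example usage with synthetic coordinates.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.uniform(0, 500, size=(200, 2))                   # normal-traffic centers
    stalled = rng.normal(loc=[800, 300], scale=5, size=(30, 2))   # a stalled vehicle
    detections = np.vstack([normal[:20] + rng.normal(0, 1, (20, 2)), stalled])
    print(select_candidate_regions(detections, normal, k=2))
```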

