Not only Look, but also Listen: Learning Multimodal Violence Detection under Weak Supervision

Violence detection has been studied in computer vision for years. However, previous work is either superficial, e.g., classifying short clips in a single scenario, or undersupplied, e.g., relying on a single modality or on hand-crafted multimodal features. To address these problems, we first release a large-scale, multi-scene dataset named XD-Violence, with a total duration of 217 hours, containing 4,754 untrimmed videos with audio signals and weak labels. We then propose a neural network with three parallel branches that capture different relations among video snippets and integrate features: the holistic branch captures long-range dependencies using a similarity prior, the localized branch captures local positional relations using a proximity prior, and the score branch dynamically captures the closeness of predicted scores. Our method also includes an approximator to meet the needs of online detection. It outperforms other state-of-the-art methods both on our released dataset and on an existing benchmark. Moreover, extensive experimental results show the positive effect of multimodal (audio-visual) input and of relationship modeling. The code and dataset are released at https://roc-ng.github.io/XD-Violence/.
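To make the three-branch design concrete, below is a minimal PyTorch sketch of how relation-modeling branches along these lines could be wired up. It is an illustration under stated assumptions (feature dimension, adjacency normalizations, fusion by summation, sigmoid scoring are all hypothetical choices), not the authors' actual implementation.

```python
# Hedged sketch of three parallel relation-modeling branches as described in
# the abstract. NOT the authors' network; layer sizes, normalizations, and
# the fusion step are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationBranches(nn.Module):
    def __init__(self, dim=1024, sigma=1.0):
        super().__init__()
        self.sigma = sigma                 # bandwidth of the proximity prior (assumed)
        self.proj_h = nn.Linear(dim, dim)  # holistic-branch transform
        self.proj_l = nn.Linear(dim, dim)  # localized-branch transform
        self.scorer = nn.Linear(dim, 1)    # snippet-level score head

    def forward(self, x):
        # x: (T, dim) features of T video snippets (e.g., fused audio-visual)
        T = x.size(0)

        # Holistic branch: long-range dependencies via a similarity prior.
        # Adjacency = softmax-normalized pairwise feature similarity.
        sim = x @ x.t() / x.size(1) ** 0.5       # (T, T) scaled dot products
        a_holistic = F.softmax(sim, dim=1)
        h = F.relu(self.proj_h(a_holistic @ x))  # one graph-convolution step

        # Localized branch: local positional relations via a proximity prior.
        # Adjacency decays with temporal distance |i - j|.
        idx = torch.arange(T, dtype=x.dtype, device=x.device)
        dist = (idx[:, None] - idx[None, :]).abs()
        a_local = torch.exp(-dist / self.sigma)
        a_local = a_local / a_local.sum(dim=1, keepdim=True)
        l = F.relu(self.proj_l(a_local @ x))

        # Score branch: relate snippets whose predicted scores are close,
        # with the adjacency recomputed dynamically from current scores.
        s = torch.sigmoid(self.scorer(x))        # (T, 1) snippet scores
        closeness = 1.0 - (s - s.t()).abs()      # (T, T), high when scores agree
        a_score = closeness / closeness.sum(dim=1, keepdim=True)
        g = a_score @ x

        # Fuse the three branch outputs (a simple sum here; an assumption).
        fused = h + l + g
        return torch.sigmoid(self.scorer(fused)).squeeze(-1)  # per-snippet scores
```

The key design point the sketch tries to capture is that each branch builds a different T-by-T affinity over the same snippet features: one from feature similarity, one from temporal proximity, and one from agreement of the current predictions.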


Datasets


Introduced in the Paper:

XD-Violence

Used in the Paper:

UCF-Crime
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Anomaly Detection in Surveillance Videos | XD-Violence | A neural network containing three parallel branches (holistic, localized, and score) | AP | 78.64 | #8 |

Methods


No methods listed for this paper.