Human Interaction Learning on 3D Skeleton Point Clouds for Video Violence Recognition

ECCV 2020  ·  Yukun Su, Guosheng Lin, Jinhui Zhu, Qingyao Wu

This paper introduces a new method for recognizing violent behavior by learning contextual relationships between related people from human skeleton points. Unlike previous work, we first formulate 3D skeleton point clouds from human skeleton sequences extracted from videos and then perform interaction learning on these point clouds. A novel Skeleton Points Interaction Learning (SPIL) module is proposed to model the interactions between skeleton points. Specifically, by constructing a specific weight distribution strategy between local regional points, SPIL selectively focuses on the most relevant points based on their features and spatial-temporal position information. To capture diverse types of relational information, a multi-head mechanism aggregates features from independent heads, jointly handling different types of relationships between points. Experimental results show that our model outperforms existing networks and achieves new state-of-the-art performance on video violence datasets.
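The abstract's core idea, attention weights over skeleton points that fuse feature similarity with spatial-temporal proximity, aggregated across multiple heads, can be sketched as follows. This is a minimal NumPy illustration, not the authors' SPIL implementation: the projection matrices are random placeholders, and the Gaussian proximity prior and additive fusion are assumptions made for illustration.

```python
import numpy as np

def spil_attention_sketch(feats, pos, num_heads=2, sigma=1.0, seed=0):
    """Hedged sketch of skeleton-point interaction learning.

    feats: (n, d) per-point feature vectors
    pos:   (n, 3) spatial-temporal coordinates, e.g. (x, y, t)
    Returns (n, d) features updated by multi-head point interactions.
    """
    n, d = feats.shape
    assert d % num_heads == 0, "feature dim must split evenly across heads"
    dh = d // num_heads
    rng = np.random.default_rng(seed)  # placeholder for learned weights

    # Pairwise squared distances between point positions:
    # a simple Gaussian proximity prior stands in for the paper's
    # spatial-temporal weighting strategy (an assumption).
    diff = pos[:, None, :] - pos[None, :, :]
    spatial = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))

    heads = []
    for _ in range(num_heads):
        # Independent random projections per head (placeholders).
        Wq = rng.standard_normal((d, dh)) / np.sqrt(d)
        Wk = rng.standard_normal((d, dh)) / np.sqrt(d)
        Wv = rng.standard_normal((d, dh)) / np.sqrt(d)
        q, k, v = feats @ Wq, feats @ Wk, feats @ Wv

        # Feature-similarity logits fused with the position prior.
        logits = q @ k.T / np.sqrt(dh) + np.log(spatial + 1e-9)

        # Softmax over points: each point attends most to the
        # points that are both feature-relevant and nearby.
        w = np.exp(logits - logits.max(axis=-1, keepdims=True))
        w = w / w.sum(axis=-1, keepdims=True)
        heads.append(w @ v)

    # Concatenate head outputs to mix the different relation types.
    return np.concatenate(heads, axis=-1)
```

Each head learns (here: simulates) a different notion of relevance, and concatenation lets the module carry several relationship types at once, mirroring the multi-head aggregation described above.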


Results from the Paper


Task: Activity Recognition
Dataset: RWF-2000
Model: SPIL Convolution
Metric: Accuracy = 89.3
Global Rank: #5
