MMNet: A Model-Based Multimodal Network for Human Action Recognition in RGB-D Videos

Human action recognition (HAR) in RGB-D videos has been widely investigated since the release of affordable depth sensors. Unimodal approaches (e.g., skeleton-based and RGB video-based) have achieved substantial improvements with increasingly larger datasets. However, multimodal methods, particularly those with model-level fusion, have seldom been investigated. In this paper, we propose a model-based multimodal network (MMNet) that fuses the skeleton and RGB modalities via a model-based approach. The objective of our method is to improve ensemble recognition accuracy by effectively exploiting the mutually complementary information in different data modalities. For the model-based fusion scheme, we use a spatiotemporal graph convolution network on the skeleton modality to learn attention weights that are then transferred to the network of the RGB modality. Extensive experiments are conducted on five benchmark datasets: NTU RGB+D 60, NTU RGB+D 120, PKU-MMD, Northwestern-UCLA Multiview, and Toyota Smarthome. When aggregating the results of multiple modalities, our method outperforms state-of-the-art approaches on six evaluation protocols across the five datasets; thus, the proposed MMNet can effectively capture mutually complementary features in different RGB-D video modalities and provide more discriminative features for HAR. We also tested MMNet on Kinetics 400, an RGB video dataset containing more outdoor actions, and observed results consistent with those on the RGB-D video datasets.
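The core idea of the model-based fusion can be illustrated with a minimal sketch: per-joint attention derived from the skeleton stream is projected onto the RGB feature map's spatial grid and used to re-weight the RGB features. All shapes, the random projection, and the residual weighting below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Hypothetical shapes (assumptions, not from the paper):
#   skeleton-branch feature: (T, V) per-joint activation from an ST-GCN layer
#   RGB-branch feature map:  (T, C, H, W) from a CNN backbone
rng = np.random.default_rng(0)
T, V, C, H, W = 4, 25, 8, 7, 7
skel_feat = rng.random((T, V))
rgb_feat = rng.random((T, C, H, W))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# 1) Derive per-joint attention weights from the skeleton stream
#    (in the actual model these would be learned end-to-end).
joint_attn = softmax(skel_feat, axis=1)          # (T, V), rows sum to 1

# 2) Project joint attention onto the RGB spatial grid. Here a fixed
#    random projection stands in for a learned mapping or for heat maps
#    placed at the joints' 2D image coordinates.
proj = softmax(rng.random((V, H * W)), axis=1)   # (V, H*W)
spatial_attn = (joint_attn @ proj).reshape(T, 1, H, W)

# 3) Re-weight the RGB features with the transferred attention
#    (residual form, so no region is suppressed to zero).
fused = rgb_feat * (1.0 + spatial_attn)
print(fused.shape)
```

The residual form `1.0 + spatial_attn` is a common design choice for attention transfer: the skeleton stream can only emphasize RGB regions, never erase them, which keeps the RGB branch trainable on its own signal.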


Results from the Paper


MMNet is ranked #1 on Action Recognition in Videos on PKU-MMD (using extra training data).

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Action Recognition | NTU RGB+D | MMNet (RGB + Pose) | Accuracy (CS) | 96.0 | #4 |
| Action Recognition | NTU RGB+D | MMNet (RGB + Pose) | Accuracy (CV) | 98.8 | #4 |
| Action Recognition | NTU RGB+D 120 | MMNet (RGB + Pose) | Accuracy (Cross-Subject) | 92.9 | #3 |
| Action Recognition | NTU RGB+D 120 | MMNet (RGB + Pose) | Accuracy (Cross-Setup) | 94.4 | #3 |
| Skeleton Based Action Recognition | N-UCLA | MMNet (RGB + Pose) | Accuracy | 93.7 | #12 |
| Action Recognition in Videos | PKU-MMD | MMNet | X-Sub | 97.4 | #1 |
| Action Recognition in Videos | PKU-MMD | MMNet | X-View | 98.6 | #1 |
| Action Classification | Toyota Smarthome | MMNet | CS | 70.1 | #2 |
