Action Recognition Based on Optimal Joint Selection and Discriminative Depth Descriptor

This paper proposes a novel human action recognition method based on decision-level fusion of skeleton and depth sequences. First, a state-of-the-art descriptor, relative body part locations (RBPL), is adopted to represent the skeleton. However, the original RBPL employs all available joints, which may introduce redundancy or noise. We therefore propose an adaptive optimal joint selection model, applied before RBPL, that ranks joints by the distance they travel within each action and discards redundant ones. Dynamic time warping is then used to handle temporal misalignment, and a kernel-based extreme learning machine (KELM) performs action recognition. Second, an efficient feature descriptor, depth motion maps-based discriminative local binary patterns (DMM-disLBP), is constructed to describe depth sequences, again with KELM used for classification. Finally, we present an effective decision-level fusion scheme that selects the class with the maximum sum of decision values from the skeleton and depth classifiers. Compared with the baseline methods, the proposed approach improves performance using either skeleton or depth information alone, and achieves state-of-the-art average recognition accuracy on the public MSR Action3D dataset with the proposed fusion strategy.
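The joint-selection and fusion steps can be made concrete with a short sketch. Below is a minimal Python illustration, assuming skeleton sequences are stored as (T, J, 3) arrays of 3D joint coordinates and that each KELM classifier outputs a vector of per-class decision values. The function names, the fixed top-k cutoff, and the unweighted score sum are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def select_informative_joints(skeleton, num_joints=10):
    """Rank joints by total distance traveled over a sequence.

    skeleton: (T, J, 3) array of T frames, J joints, 3D coordinates.
    Returns indices of the num_joints most active joints -- a simplified
    stand-in for the paper's adaptive per-action selection model.
    """
    # Frame-to-frame displacement of each joint: shape (T-1, J).
    step = np.linalg.norm(np.diff(skeleton, axis=0), axis=2)
    travel = step.sum(axis=0)                      # total path length per joint
    return np.argsort(travel)[::-1][:num_joints]   # most-traveled joints first

def fuse_decisions(skeleton_scores, depth_scores):
    """Decision-level fusion: pick the class whose summed decision value
    from the skeleton and depth classifiers is largest."""
    return int(np.argmax(np.asarray(skeleton_scores) + np.asarray(depth_scores)))

# Example: keep the 5 most-traveled joints of a random 20-joint sequence,
# then fuse two hypothetical per-class score vectors.
seq = np.random.rand(60, 20, 3)
print(select_informative_joints(seq, num_joints=5))
print(fuse_decisions([0.2, 0.7, 0.1], [0.5, 0.3, 0.2]))  # sums [0.7, 1.0, 0.3] -> 1
```

Summing raw decision values, as sketched above, lets the more confident modality dominate without requiring calibrated probabilities; the paper's fusion rule likewise takes the class with the maximum sum of decision values from the two streams.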


Datasets

MSR Action3D

