First-person hand activity recognition is a challenging task, especially when only limited data are available. In this paper, we tackle this challenge by proposing a new low-cost multi-stage learning pipeline for first-person RGB-based hand activity recognition on a limited amount of data. For a given RGB activity image sequence, the first stage extracts the regions of interest using a pre-trained neural network (NN). The second stage then extracts high-level spatial features using a pre-trained deep NN. The third stage learns the temporal dependencies among these features. Finally, the last stage learns a hand activity sequence classifier by applying a post-fusion strategy to the previously learned temporal dependencies. Experiments on two real-world data sets show that our pipeline achieves state-of-the-art results while remaining effective with limited training data.
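
The sketch below wires up one plausible instantiation of the four stages in PyTorch. It assumes the Stage 1 detector has already produced per-frame hand crops; the remaining choices (a ResNet-18 backbone as the pre-trained spatial extractor, a single-layer LSTM for the temporal dependencies, and mean-pooled per-timestep logits as the post-fusion step) are illustrative assumptions, not the components reported in the paper.

```python
# Minimal sketch of the four-stage pipeline, under the assumptions stated
# above. Stage 1 (ROI extraction with a pre-trained detector) is not shown;
# its output crops are the input to this module.
import torch
import torch.nn as nn
from torchvision import models


class HandActivityPipeline(nn.Module):
    def __init__(self, num_classes: int, hidden_size: int = 256):
        super().__init__()
        # Stage 2: pre-trained deep NN as a frozen spatial feature extractor
        # (assumption: ResNet-18; the paper only says "pre-trained deep NN").
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.spatial = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.spatial.parameters():
            p.requires_grad = False
        # Stage 3: learn temporal dependencies over the per-frame features
        # (assumption: a single-layer LSTM).
        self.temporal = nn.LSTM(input_size=512, hidden_size=hidden_size,
                                batch_first=True)
        # Stage 4: per-timestep classifier whose outputs are post-fused.
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, rois: torch.Tensor) -> torch.Tensor:
        # rois: (batch, time, 3, H, W) hand crops produced by Stage 1.
        b, t = rois.shape[:2]
        feats = self.spatial(rois.flatten(0, 1))  # (b*t, 512, 1, 1)
        feats = feats.flatten(1).view(b, t, -1)   # (b, t, 512)
        hidden, _ = self.temporal(feats)          # (b, t, hidden_size)
        logits = self.classifier(hidden)          # (b, t, num_classes)
        # Post-fusion (assumption): average the per-timestep predictions.
        return logits.mean(dim=1)                 # (b, num_classes)


if __name__ == "__main__":
    # 45 classes matches the action categories of the First-Person Hand
    # Action Benchmark; two sequences of 16 ROI crops as dummy input.
    model = HandActivityPipeline(num_classes=45)
    clip = torch.randn(2, 16, 3, 224, 224)
    print(model(clip).shape)  # torch.Size([2, 45])
```

Freezing the backbone keeps the trainable parameter count low, which is consistent with the low-cost, limited-data setting the pipeline targets.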

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Activity Recognition | First-Person Hand Action Benchmark | Boutaleb et al. | 1:1 Accuracy | 97.91 | #1 |
